More stories

  • Fast-tracking fusion energy’s arrival with AI and accessibility

As the impacts of climate change continue to grow, so does interest in fusion’s potential as a clean energy source. While fusion reactions have been studied in laboratories since the 1930s, there are still many critical questions scientists must answer to make fusion power a reality, and time is of the essence. As part of its strategy to accelerate fusion energy’s arrival and reach carbon neutrality by 2050, the U.S. Department of Energy (DoE) has announced new funding for a project led by researchers at MIT’s Plasma Science and Fusion Center (PSFC) and four collaborating institutions.

Cristina Rea, a research scientist and group leader at the PSFC, will serve as the principal investigator for the newly funded three-year collaboration to pilot the integration of fusion data into a system that can be read by AI-powered tools. The PSFC, together with scientists from William & Mary, the University of Wisconsin at Madison, Auburn University, and the nonprofit HDF Group, plans to create a holistic fusion data platform, the elements of which could offer unprecedented access for researchers, especially underrepresented students. The project aims to encourage diverse participation in fusion and data science, both in academia and the workforce, through outreach programs led by the group’s co-investigators, four of five of whom are women.

The DoE’s award, part of a $29 million funding package for seven projects across 19 institutions, will support the group’s efforts to distribute data produced by fusion devices like the PSFC’s Alcator C-Mod, a donut-shaped “tokamak” that used powerful magnets to control and confine fusion reactions. Alcator C-Mod operated from 1991 to 2016, and its data are still being studied, thanks in part to the PSFC’s commitment to the free exchange of knowledge.

    Currently, there are nearly 50 public experimental magnetic confinement-type fusion devices; however, both historical and current data from these devices can be difficult to access. Some fusion databases require signing user agreements, and not all data are catalogued and organized the same way. Moreover, it can be difficult to leverage machine learning, a class of AI tools, for data analysis and to enable scientific discovery without time-consuming data reorganization. The result is fewer scientists working on fusion, greater barriers to discovery, and a bottleneck in harnessing AI to accelerate progress.

The project’s proposed data platform addresses technical barriers by being FAIR — Findable, Accessible, Interoperable, and Reusable — and by adhering to UNESCO’s Open Science (OS) recommendations to improve the transparency and inclusivity of science; all of the researchers’ deliverables will adhere to FAIR and OS principles, as required by the DoE. The platform’s databases will be built using MDSplusML, an upgraded version of the MDSplus open-source software developed by PSFC researchers in the 1980s to catalogue the results of Alcator C-Mod’s experiments. Today, nearly 40 fusion research institutes use MDSplus to store and provide external access to their fusion data. The release of MDSplusML aims to continue that legacy of open collaboration.
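    To make the idea concrete, here is a minimal sketch of what FAIR-style storage of a fusion discharge might look like, written in Python with the h5py package (a plausible choice given the HDF Group’s involvement, though the article does not specify the stack). The group names, attribute names, and shot number below are invented for illustration and are not the actual MDSplusML schema:

    ```python
    # Illustrative sketch only: a hypothetical HDF5 layout for FAIR-style fusion
    # shot data. Group and attribute names are invented for this example and are
    # not the MDSplusML schema.
    import h5py
    import numpy as np

    def write_shot(path: str, shot: int, device: str, signal: np.ndarray, t: np.ndarray):
        """Store one discharge with the descriptive metadata that makes it findable."""
        with h5py.File(path, "w") as f:
            grp = f.create_group(f"shot_{shot}")
            grp.attrs["device"] = device          # e.g., "Alcator C-Mod"
            grp.attrs["shot_number"] = shot
            grp.attrs["license"] = "CC-BY-4.0"    # reuse terms stated up front
            grp.create_dataset("time_s", data=t)
            grp.create_dataset("plasma_current_A", data=signal)

    def find_shots(paths, device: str):
        """Findability: locate shots by metadata instead of bespoke file layouts."""
        hits = []
        for path in paths:
            with h5py.File(path, "r") as f:
                for name, grp in f.items():
                    if grp.attrs.get("device") == device:
                        hits.append((path, name))
        return hits

    t = np.linspace(0, 2, 1000)
    write_shot("cmod_example.h5", 1160930033, "Alcator C-Mod",
               1e6 * np.sin(np.pi * t / 2), t)
    print(find_shots(["cmod_example.h5"], "Alcator C-Mod"))
    ```

    The point of the sketch is that when descriptive metadata and reuse terms travel with the data itself, a researcher or a machine-learning pipeline can locate and reuse shots without bespoke knowledge of each device’s file layout.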

    The researchers intend to address barriers to participation for women and disadvantaged groups not only by improving general access to fusion data, but also through a subsidized summer school that will focus on topics at the intersection of fusion and machine learning, which will be held at William & Mary for the next three years.

    Of the importance of their research, Rea says, “This project is about responding to the fusion community’s needs and setting ourselves up for success. Scientific advancements in fusion are enabled via multidisciplinary collaboration and cross-pollination, so accessibility is absolutely essential. I think we all understand now that diverse communities have more diverse ideas, and they allow faster problem-solving.”

    The collaboration’s work also aligns with vital areas of research identified in the International Atomic Energy Agency’s “AI for Fusion” Coordinated Research Project (CRP). Rea was selected as the technical coordinator for the IAEA’s CRP emphasizing community engagement and knowledge access to accelerate fusion research and development. In a letter of support written for the group’s proposed project, the IAEA stated that, “the work [the researchers] will carry out […] will be beneficial not only to our CRP but also to the international fusion community in large.”

    PSFC Director and Hitachi America Professor of Engineering Dennis Whyte adds, “I am thrilled to see PSFC and our collaborators be at the forefront of applying new AI tools while simultaneously encouraging and enabling extraction of critical data from our experiments.”

“Having the opportunity to lead such an important project is extremely meaningful, and I feel a responsibility to show that women are leaders in STEM,” says Rea. “We have an incredible team, strongly motivated to improve our fusion ecosystem and to contribute to making fusion energy a reality.”

  • Artificial intelligence for augmentation and productivity

    The MIT Stephen A. Schwarzman College of Computing has awarded seed grants to seven projects that are exploring how artificial intelligence and human-computer interaction can be leveraged to enhance modern work spaces to achieve better management and higher productivity.

    Funded by Andrew W. Houston ’05 and Dropbox Inc., the projects are intended to be interdisciplinary and bring together researchers from computing, social sciences, and management.

    The seed grants can enable the project teams to conduct research that leads to bigger endeavors in this rapidly evolving area, as well as build community around questions related to AI-augmented management.

    The seven selected projects and research leads include:

“LLMex: Implementing Vannevar Bush’s Vision of the Memex Using Large Language Models,” led by Pattie Maes of the Media Lab and David Karger of the Department of Electrical Engineering and Computer Science (EECS) and the Computer Science and Artificial Intelligence Laboratory (CSAIL). Inspired by Vannevar Bush’s Memex, this project proposes to design, implement, and test the concept of memory prosthetics using large language models (LLMs). The AI-based system will intelligently help an individual keep track of vast amounts of information, accelerate productivity, and reduce errors by automatically recording their work actions and meetings, supporting retrieval based on metadata and vague descriptions, and suggesting relevant, personalized information proactively based on the user’s current focus and context.
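    As a rough illustration of the retrieval idea described above, the sketch below logs timestamped work events with metadata and recalls them from a vague description. A bag-of-words cosine similarity stands in for the LLM embeddings a real system would use, and every name and event here is hypothetical:

    ```python
    # Toy sketch of the "memory prosthetic" retrieval idea: log work events with
    # metadata, then retrieve them from a vague description. Bag-of-words cosine
    # similarity stands in for the LLM embeddings such a system would use.
    from collections import Counter
    from dataclasses import dataclass, field
    import math

    @dataclass
    class Memory:
        timestamp: str
        text: str
        metadata: dict = field(default_factory=dict)

    def _vec(text: str) -> Counter:
        return Counter(text.lower().split())

    def _cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[w] * b[w] for w in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    class MemoryStore:
        def __init__(self):
            self.items: list[Memory] = []

        def record(self, m: Memory):  # automatic capture of actions and meetings
            self.items.append(m)

        def recall(self, query: str, k: int = 3):
            scored = [(_cosine(_vec(query), _vec(m.text)), m) for m in self.items]
            return [m for s, m in sorted(scored, key=lambda x: -x[0])[:k] if s > 0]

    store = MemoryStore()
    store.record(Memory("2023-08-01T10:00", "meeting with Dana about Q3 budget slides",
                        {"app": "calendar"}))
    store.record(Memory("2023-08-01T14:30", "edited draft of grant proposal on fusion data",
                        {"app": "editor"}))
    print([m.text for m in store.recall("that budget meeting")])
    ```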

    “Using AI Agents to Simulate Social Scenarios,” led by John Horton of the MIT Sloan School of Management and Jacob Andreas of EECS and CSAIL. This project imagines the ability to easily simulate policies, organizational arrangements, and communication tools with AI agents before implementation. Tapping into the capabilities of modern LLMs to serve as a computational model of humans makes this vision of social simulation more realistic, and potentially more predictive.

“Human Expertise in the Age of AI: Can We Have Our Cake and Eat it Too?” led by Manish Raghavan of MIT Sloan and EECS, and Devavrat Shah of EECS and the Laboratory for Information and Decision Systems. Progress in machine learning, AI, and algorithmic decision aids has raised the prospect that algorithms may complement human decision-making in a wide variety of settings. Rather than replacing human professionals, this project sees a future where AI and algorithmic decision aids play a role that is complementary to human expertise.

“Implementing Generative AI in U.S. Hospitals,” led by Julie Shah of the Department of Aeronautics and Astronautics and CSAIL, Retsef Levi of MIT Sloan and the Operations Research Center, Kate Kellogg of MIT Sloan, and Ben Armstrong of the Industrial Performance Center. In recent years, studies have linked a rise in burnout among doctors and nurses in the United States with the increased administrative burdens associated with electronic health records and other technologies. This project aims to develop a holistic framework to study how generative AI technologies can both increase productivity for organizations and improve job quality for workers in health care settings.

“Generative AI Augmented Software Tools to Democratize Programming,” led by Harold Abelson of EECS and CSAIL, Cynthia Breazeal of the Media Lab, and Eric Klopfer of Comparative Media Studies/Writing. Progress in generative AI over the past year is fomenting an upheaval in assumptions about future careers in software and deprecating the role of coding. This project will stimulate a similar transformation in computing education for those who have no prior technical training by creating a software tool that could eliminate much of the need for learners to deal with code when creating applications.

    “Acquiring Expertise and Societal Productivity in a World of Artificial Intelligence,” led by David Atkin and Martin Beraja of the Department of Economics, and Danielle Li of MIT Sloan. Generative AI is thought to augment the capabilities of workers performing cognitive tasks. This project seeks to better understand how the arrival of AI technologies may impact skill acquisition and productivity, and to explore complementary policy interventions that will allow society to maximize the gains from such technologies.

“AI Augmented Onboarding and Support,” led by Tim Kraska of EECS and CSAIL, and Christoph Paus of the Department of Physics. While LLMs have made enormous leaps forward in recent years and are poised to fundamentally change the way students and professionals learn about new tools and systems, there is often a steep learning curve that people must climb to make full use of these resources. To help mitigate this issue, this project proposes the development of new LLM-powered onboarding and support systems that will positively impact the way support teams operate and improve the user experience.

  • Summer research offers a springboard to advanced studies

    Doctoral studies at MIT aren’t a calling for everyone, but they can be for anyone who has had opportunities to discover that science and technology research is their passion and to build the experience and skills to succeed. For Taylor Baum, Josefina Correa Menéndez, and Karla Alejandra Montejo, three graduate students in just one lab of The Picower Institute for Learning and Memory, a pivotal opportunity came via the MIT Summer Research Program in Biology and Neuroscience (MSRP-Bio). When a student finds MSRP-Bio, it helps them find their future in research. 

In the program, undergraduate STEM majors from outside MIT spend the summer doing full-time research in the departments of Biology, Brain and Cognitive Sciences (BCS), or the Center for Brains, Minds and Machines (CBMM). They gain lab skills, mentoring, preparation for graduate school, and connections that might last a lifetime. Over the last two decades, a total of 215 students — from underrepresented minority groups, from economically disadvantaged backgrounds, first-generation or nontraditional college students, or students with disabilities — have participated in research in BCS or CBMM labs.

    Like Baum, Correa Menéndez, and Montejo, the vast majority go on to pursue graduate studies, says Diversity and Outreach Coordinator Mandana Sassanfar, who runs the program. For instance, among 91 students who have worked in Picower Institute labs, 81 have completed their undergraduate studies. Of those, 46 enrolled in PhD programs at MIT or other schools such as Cornell, Yale, Stanford, and Princeton universities, and the University of California System. Another 12 have gone to medical school, another seven are in MD/PhD programs, and three have earned master’s degrees. The rest are studying as post-baccalaureates or went straight into the workforce after earning their bachelor’s degree. 

After participating in the program, Baum, Correa Menéndez, and Montejo each became graduate students in the research group of Emery N. Brown, the Edward Hood Taplin Professor of Computational Neuroscience and Medical Engineering in The Picower Institute and the Institute for Medical Engineering and Science. The lab combines statistical, computational, and experimental neuroscience methods to study how general anesthesia affects the central nervous system, with the goals of improving patient care and advancing understanding of the brain. Brown says the students have each been doing “off-the-scale” work, in keeping with the excellence he’s seen from MSRP-Bio students over the years. For example, on Aug. 10, Baum and Correa Menéndez were honored with MathWorks Fellowships.

    “I think MSRP is fantastic. Mandana does this amazing job of getting students who are quite talented to come to MIT to realize that they can move their game to the next level. They have the capacity to do it. They just need the opportunities,” Brown says. “These students live up to the expectations that you have of them. And now as graduate students, they’re taking on hard problems and they’re solving them.” 

    Paths to PhD studies 

Pursuing a PhD is hardly a given. Many young students have never considered graduate school or specific fields of study like neuroscience or electrical engineering. But Sassanfar engages students across the country to introduce them to the opportunity MSRP-Bio provides to gain exposure, experience, and mentoring in advanced fields. Every fall, after the program’s students have returned to their undergraduate institutions, she visits schools in places as far-flung as Florida, Maryland, Puerto Rico, and Texas, and goes to conferences for diverse science communities, such as ABRCMS and SACNAS, to spread the word.

    Photo: Taylor Baum (courtesy of Taylor Baum)

When Baum first connected with the program in 2017, she was finding her way at Penn State University. She had been majoring in biology and music composition, but had just switched the latter to engineering after a conversation over coffee introduced her to brain-computer interfaces: technology that detects the brain signals of people with full-body paralysis and can improve their quality of life by enabling control of computers or wheelchairs. Baum became enthusiastic about the potential to build similar systems, but as a new engineering student, she struggled to find summer internships and research opportunities.

“I got rejected from every single program except the MIT Center for Brains, Minds and Machines MSRP,” she recalls with a chuckle.

Baum thrived in MSRP-Bio, working in Brown’s lab for three successive summers. At each stage, she says, she gained more research skills, experience, and independence. When she graduated, she was sure she wanted to go to graduate school and applied to four of her dream schools. She accepted MIT’s offer to join the Department of Electrical Engineering and Computer Science, where she is co-advised by faculty members there and by Brown. She is now working to develop a system, grounded in cardiovascular physiology, that can improve blood pressure management. A tool for practicing anesthesiologists, the system automates the dosing of drugs to maintain a patient’s blood pressure at safe levels in the operating room or intensive care unit.

    More than that, Baum not only is leading an organization advancing STEM education in Puerto Rico, but also is helping to mentor a current MSRP-Bio student in the Brown lab. 

    “MSRP definitely bonds everyone who has participated in it,” Baum says. “If I see anyone who I know participated in MSRP, we could have an immediate conversation. I know that most of us, if we needed help, we’d feel comfortable asking for help from someone from MSRP. With that shared experience, we have a sense of camaraderie, and community.” 

    In fact, a few years ago when a former MSRP-Bio student named Karla Montejo was applying to MIT, Baum provided essential advice and feedback about the application process, Montejo says. Now, as a graduate student, Montejo has become a mentor for the program in her own right, Sassanfar notes. For instance, Montejo serves on program alumni panels that advise new MSRP-Bio students. 

    Photo: Karla Alejandra Montejo (courtesy of Karla Alejandra Montejo)

    Montejo’s family immigrated to Miami from Cuba when she was a child. The magnet high school she attended was so new that students were encouraged to help establish the school’s programs. She forged a path into research. 

    “I didn’t even know what research was,” she says. “I wanted to be a doctor, and I thought maybe it would help me on my resume. I thought it would be kind of like shadowing, but no, it was really different. So I got really captured by research when I was in high school.” 

    Despite continuing to pursue research in college at Florida International University, Montejo didn’t get into graduate school on her first attempt because she hadn’t yet learned how to focus her application. But Sassanfar had visited FIU to recruit students and through that relationship Montejo had already gone through MIT’s related Quantitative Methods Workshop (QMW). So Montejo enrolled in MSRP-Bio, working in the CBMM-affiliated lab of Gabriel Kreiman at Boston Children’s Hospital. 

    “I feel like Mandana really helped me out, gave me a break, and the MSRP experience pretty much solidified that I really wanted to come to MIT,” Montejo says. 

In the QMW, Montejo learned she really liked computational neuroscience, and in Kreiman’s lab she got to try her hand at computational modeling of the cognition involved in making perceptual sense of complex scenes. Montejo realized she wanted to work on more biologically based neuroscience problems. When the summer ended, because she was now off the normal graduate school application cycle, she found a two-year post-baccalaureate program at the Mayo Clinic studying the role that a brain cell type called astrocytes might play in deep brain stimulation, a treatment for Parkinson’s disease.

When it came time to reapply to graduate schools (with the help of Baum and others in the BCS Application Assistance Program), Montejo applied to MIT and got in, joining the Brown lab. Now she’s working on modeling the role of metabolic processes in the changing of brain rhythms under anesthesia, taking advantage of how general anesthesia predictably changes brain states. The effects anesthetic drugs have on cell metabolism, and the way those effects ultimately alter levels of consciousness, reveal important aspects of how metabolism affects brain circuits and systems. Earlier this month, for instance, Montejo co-led a paper the lab published in Proceedings of the National Academy of Sciences detailing the neuroscience of a patient’s transition into an especially deep state of unconsciousness called “burst suppression.”

    Photo: Josefina Correa Menéndez (by David Orenstein)

A signature of the Brown lab’s work is rigorous statistical analysis and methods, for instance to discern brain arousal states from EEG measures of brain rhythms. A PhD candidate in MIT’s Interdisciplinary Doctoral Program in Statistics, Correa Menéndez is advancing the use of Bayesian hierarchical models for neural data analysis. These statistical models offer a principled way of pooling information across datasets. One of her models helps scientists better understand the way neurons can “spike” with electrical activity when the brain is presented with a stimulus. The other discerns critical features, such as arousal states of the brain under general anesthesia, from electrophysiological recordings.
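    For readers unfamiliar with the approach, the following minimal sketch illustrates the partial pooling that makes Bayesian hierarchical models attractive for neural data, using a conjugate Gamma-Poisson model of spike counts. It is a generic textbook example, not the Brown lab’s actual models: neurons observed for only a few trials are pulled toward the population average, while well-observed neurons keep rates close to their own data.

    ```python
    # Minimal illustration of partial pooling with a conjugate Gamma-Poisson model
    # for neuron spike counts. A generic textbook sketch, not the lab's models.
    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated spike counts: 8 neurons, each observed for a different number of
    # 1-second trials, with true rates drawn from a shared population distribution.
    true_rates = rng.gamma(shape=5.0, scale=2.0, size=8)
    trials = rng.integers(3, 30, size=8)
    counts = [rng.poisson(r, n) for r, n in zip(true_rates, trials)]

    # Shared Gamma(alpha, beta) prior on rates, fixed here by assumption (a full
    # hierarchical treatment would infer these hyperparameters from the data too).
    alpha, beta = 5.0, 0.5

    for i, (c, n) in enumerate(zip(counts, trials)):
        # Conjugate posterior: Gamma(alpha + total spikes, beta + number of trials).
        post_mean = (alpha + c.sum()) / (beta + n)
        # Few trials: estimate shrinks toward the population mean alpha / beta.
        # Many trials: estimate stays close to the neuron's empirical rate.
        print(f"neuron {i}: trials={n:2d} empirical={c.mean():5.2f} "
              f"posterior mean={post_mean:5.2f} true={true_rates[i]:5.2f}")
    ```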

Though she now works with complex equations and computations as a PhD candidate in neuroscience and statistics, Correa Menéndez was mostly interested in music and art as a high school student at Academia María Reina in San Juan, and then in architecture in college at the University of Puerto Rico at Río Piedras. It was discussions at the intersection of epistemology and art during an art theory class that inspired Correa Menéndez to switch her major to biology and to take computer science classes, too.

    When Sassanfar visited Puerto Rico in 2017, a computer science professor (Patricia Ordóñez) suggested that Correa Menéndez apply for a chance to attend the QMW. She did, and that led her to also participate in MSRP-Bio in the lab of Sherman Fairchild Professor Matt Wilson (a faculty member in BCS, CBMM, and the Picower Institute). She joined in the lab’s studies of how spatial memories are represented in the hippocampus and how the brain makes use of those memories to help understand the world around it. With mentoring from then-postdoc Carmen Varela (now a faculty member at Florida State University), the experience not only exposed her to neuroscience, but also helped her gain skills and experience with lab experiments, building research tools, and conducting statistical analyses. She ended up working in the Wilson lab as a research scholar for a year and began her graduate studies in September 2018.  

    Classes she took with Brown as a research scholar inspired her to join his lab as a graduate student. 

    “Taking the classes with Emery and also doing experiments made me aware of the role of statistics in the scientific process: from the interpretation of results to the analysis and the design of experiments,” she says. “More often than not, in science, statistics becomes this sort of afterthought — this ‘annoying’ thing that people need to do to get their paper published. But statistics as a field is actually a lot more than that. It’s a way of thinking about data. Particularly, Bayesian modeling provides a principled inference framework for combining prior knowledge into a hypothesis that you can test with data.” 

To be sure, no one starts out with such inspiration about scientific scholarship, but MSRP-Bio helps students find that passion for research and the paths it opens up.

  • Making sense of cell fate

Despite the proliferation of novel therapies such as immunotherapy and targeted therapies, radiation and chemotherapy remain the frontline treatments for cancer patients. About half of all patients still receive radiation, and 60 to 80 percent receive chemotherapy.

    Both radiation and chemotherapy work by damaging DNA, taking advantage of a vulnerability specific to cancer cells. Healthy cells are more likely to survive radiation and chemotherapy since their mechanisms for identifying and repairing DNA damage are intact. In cancer cells, these repair mechanisms are compromised by mutations. When cancer cells cannot adequately respond to the DNA damage caused by radiation and chemotherapy, ideally, they undergo apoptosis or die by other means.

    However, there is another fate for cells after DNA damage: senescence — a state where cells survive, but stop dividing. Senescent cells’ DNA has not been damaged enough to induce apoptosis but is too damaged to support cell division. While senescent cancer cells themselves are unable to proliferate and spread, they are bad actors in the fight against cancer because they seem to enable other cancer cells to develop more aggressively. Although a cancer cell’s fate is not apparent until a few days after treatment, the decision to survive, die, or enter senescence is made much earlier. But, precisely when and how that decision is made has not been well understood.

    In an open-access study of ovarian and osteosarcoma cancer cells appearing July 19 in Cell Systems, MIT researchers show that cell signaling proteins commonly associated with cell proliferation and apoptosis instead commit cancer cells to senescence within 12 hours of treatment with low doses of certain kinds of chemotherapy.

    “When it comes to treating cancer, this study underscores that it’s important not to think too linearly about cell signaling,” says Michael Yaffe, who is a David H. Koch Professor of Science at MIT, the director of the MIT Center for Precision Cancer Medicine, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study. “If you assume that a particular treatment will always affect cancer cell signaling in the same way — you may be setting yourself up for many surprises, and treating cancers with the wrong combination of drugs.”

    Using a combination of experiments with cancer cells and computational modeling, the team investigated the cell signaling mechanisms that prompt cancer cells to enter senescence after treatment with a commonly used anti-cancer agent. Their efforts singled out two protein kinases and a component of the AP-1 transcription factor complex as highly associated with the induction of senescence after DNA damage, despite the well-established roles for all of these molecules in promoting cell proliferation in cancer.

The researchers treated cancer cells with low and high doses of doxorubicin, a chemotherapy agent that interferes with the function of topoisomerase II, an enzyme that breaks and then repairs DNA strands during replication to fix tangles and other topological problems.

    By measuring the effects of DNA damage on single cells at several time points ranging from six hours to four days after the initial exposure, the team created two datasets. In one dataset, the researchers tracked cell fate over time. For the second set, researchers measured relative cell signaling activity levels across a variety of proteins associated with responses to DNA damage or cellular stress, determination of cell fate, and progress through cell growth and division.

The two datasets were used to build a computational model that identifies correlations between time, dosage, signaling, and cell fate. The model identified the activities of the MAP kinases Erk and JNK, along with the transcription factor c-Jun — a key component of the AP-1 complex — as involved in the induction of senescence. The researchers then validated these computational findings by showing that inhibition of JNK and Erk after DNA damage successfully prevented cells from entering senescence.
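    The article does not specify the model’s form, so the sketch below only illustrates the general setup: learning a mapping from early signaling measurements to a later binary fate. A simple logistic regression is fit to synthetic data; the features and effect sizes are invented for the example.

    ```python
    # Generic sketch of the kind of model that links early signaling measurements
    # to a later binary cell fate (senescent vs. not). The study's actual model is
    # more involved; this only illustrates the correlational setup.
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic data: per-cell activity of Erk, JNK, and c-Jun at 12 h, plus dose.
    n = 500
    X = rng.normal(size=(n, 4))              # columns: Erk, JNK, c-Jun, dose
    true_w = np.array([1.0, 1.5, 2.0, 0.5])  # invented effect sizes
    p = 1.0 / (1.0 + np.exp(-(X @ true_w - 0.5)))
    y = rng.binomial(1, p)                   # 1 = cell entered senescence

    # Fit logistic regression by gradient descent.
    w, b = np.zeros(4), 0.0
    for _ in range(2000):
        z = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= 0.5 * (X.T @ (z - y) / n)
        b -= 0.5 * np.mean(z - y)

    for name, coef in zip(["Erk", "JNK", "c-Jun", "dose"], w):
        print(f"{name:6s} weight = {coef:+.2f}")  # larger = stronger association
    ```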

    The researchers leveraged JNK and Erk inhibition to pinpoint exactly when cells made the decision to enter senescence. Surprisingly, they found that the decision to enter senescence was made within 12 hours of DNA damage, even though it took days to actually see the senescent cells accumulate. The team also found that with the passage of more time, these MAP kinases took on a different function: promoting the secretion of proinflammatory proteins called cytokines that are responsible for making other cancer cells proliferate and develop resistance to chemotherapy.

    “Proteins like cytokines encourage ‘bad behavior’ in neighboring tumor cells that lead to more aggressive cancer progression,” says Tatiana Netterfield, a graduate student in the Yaffe lab and the lead author of the study. “Because of this, it is thought that senescent cells that stay near the tumor for long periods of time are detrimental to treating cancer.”

    This study’s findings apply to cancer cells treated with a commonly used type of chemotherapy that stalls DNA replication after repair. But more broadly, the study emphasizes that “when treating cancer, it’s extremely important to understand the molecular characteristics of cancer cells and the contextual factors such as time and dosing that determine cell fate,” explains Netterfield.

The study, however, has more immediate implications for treatments that are already in use. One class of Erk inhibitors, MEK inhibitors, is used in the clinic with the expectation that it will curb cancer growth.

    “We must be cautious about administering MEK inhibitors together with chemotherapies,” says Yaffe. “The combination may have the unintended effect of driving cells into proliferation, rather than senescence.”

In future work, the team will perform studies to understand how and why individual cells choose to proliferate instead of entering senescence. Additionally, the team is employing next-generation sequencing to understand which genes c-Jun is regulating in order to push cells toward senescence.

This study was funded, in part, by the Charles and Marjorie Holloway Foundation and the MIT Center for Precision Cancer Medicine.

  • Q&A: Are far-reaching fires the new normal?

Where there’s smoke, there is fire. But with climate change, larger and longer-burning wildfires are sending smoke farther from their source, often to places that are unaccustomed to the exposure. That’s been the case this week, as smoke continues to drift south from massive wildfires in Canada, prompting warnings of hazardous air quality and poor visibility in states across New England, the mid-Atlantic, and the Midwest.

With wildfire season just getting going, many may be wondering: Are the air-polluting effects of wildfires a new normal?

MIT News spoke with Professor Colette Heald of the Department of Civil and Environmental Engineering and the Department of Earth, Atmospheric and Planetary Sciences, and Professor Noelle Selin of the Institute for Data, Systems and Society and the Department of Earth, Atmospheric and Planetary Sciences. Heald specializes in atmospheric chemistry and has studied the climate and health effects associated with recent wildfires, while Selin works with atmospheric models to track air pollutants around the world, which she uses to inform policy decisions on mitigating pollution and climate change. The researchers shared some of their insights on the immediate impacts of Canada’s current wildfires and what downwind regions may expect in the coming months, as the wildfire season stretches into summer.

    Q: What role has climate change and human activity played in the wildfires we’ve seen so far this year?

Heald: Unusually warm and dry conditions have dramatically increased fire susceptibility in Canada this year. Human-induced climate change makes such dry and warm conditions more likely. Smoke from fires in Alberta and Nova Scotia in May, and Quebec in early June, has led to some of the worst air quality conditions measured locally in Canada. This same smoke has been transported into the United States and degraded air quality here as well. Local officials have determined that ignitions have been associated with lightning strikes, but human activity has also played a role in igniting some of the fires in Alberta.

    Q: What can we expect for the coming months in terms of the pattern of wildfires and their associated air pollution across the United States?

    Heald: The Government of Canada is projecting higher-than-normal fire activity throughout the 2023 fire season. Fire susceptibility will continue to respond to changing weather conditions, and whether the U.S. is impacted will depend on the winds and how air is transported across those regions. So far, the fire season in the United States has been below average, but fire risk is expected to increase modestly through the summer, so we may see local smoke influences as well.

    Q: How has air pollution from wildfires affected human health in the U.S. this year so far?

Selin: The pollutant of most concern in wildfire smoke is fine particulate matter (PM2.5): fine particles in the atmosphere that can be inhaled deep into the lungs, causing health damage. Exposure to PM2.5 causes respiratory and cardiovascular damage, including heart attacks and premature deaths. It can also cause symptoms like coughing and difficulty breathing. In New England this week, people have been breathing much higher concentrations of PM2.5 than usual. People who are particularly vulnerable to the effects, such as older people and people with underlying conditions, are likely experiencing more severe impacts. But PM2.5 affects everyone. While the number and impact of wildfires vary from year to year, the associated air pollution generally leads to tens of thousands of premature deaths in the U.S. annually. There is also some evidence that PM2.5 from fires could be particularly damaging to health.
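    For context on what those concentration numbers mean, U.S. air quality warnings translate a PM2.5 concentration into the Air Quality Index (AQI) by linear interpolation between fixed breakpoints. The sketch below uses the breakpoints in effect in 2023; the EPA revises them periodically, so verify against the agency’s current tables before relying on it:

    ```python
    # How a PM2.5 concentration (ug/m3, 24-hour average) maps to the U.S. AQI:
    # piecewise-linear interpolation between breakpoints. Breakpoints below are
    # the ones in effect in 2023; check current EPA tables before relying on this.
    PM25_BREAKPOINTS = [
        (0.0,    12.0,    0,  50),   # Good
        (12.1,   35.4,   51, 100),   # Moderate
        (35.5,   55.4,  101, 150),   # Unhealthy for sensitive groups
        (55.5,  150.4,  151, 200),   # Unhealthy
        (150.5, 250.4,  201, 300),   # Very unhealthy
        (250.5, 350.4,  301, 400),   # Hazardous
        (350.5, 500.4,  401, 500),   # Hazardous
    ]

    def pm25_to_aqi(conc_ug_m3: float) -> int:
        c = int(conc_ug_m3 * 10) / 10.0  # EPA truncates PM2.5 to 0.1 ug/m3
        for c_lo, c_hi, i_lo, i_hi in PM25_BREAKPOINTS:
            if c_lo <= c <= c_hi:
                return round((i_hi - i_lo) / (c_hi - c_lo) * (c - c_lo) + i_lo)
        raise ValueError("concentration outside AQI table")

    print(pm25_to_aqi(8.0))    # a typical clean New England day: "Good"
    print(pm25_to_aqi(120.0))  # a wildfire-smoke episode: "Unhealthy"
    ```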

    While we in New England usually have relatively lower levels of pollution, it’s important also to note that some cities around the globe experience very high PM2.5 on a regular basis, not only from wildfires, but other sources such as power plants and industry. So, while we’re feeling the effects over the past few days, we should remember the broader importance of reducing PM2.5 levels overall for human health everywhere.

    Q: While firefighters battle fires directly this wildfire season, what can we do to reduce the effects of associated air pollution? And what can we do in the long-term, to prevent or reduce wildfire impacts?

    Selin: In the short term, protecting yourself from the impacts of PM2.5 is important. Limiting time outdoors, avoiding outdoor exercise, and wearing a high-quality mask are some strategies that can minimize exposure. Air filters can help reduce the concentrations of particles in indoor air. Taking measures to avoid exposure is particularly important for vulnerable groups. It’s also important to note that these strategies aren’t equally possible for everyone (for example, people who work outside) — stressing the importance of developing new strategies to address the underlying causes of increasing wildfires.

Over the long term, mitigating climate change is important: because warm and dry conditions lead to wildfires, warming increases fire risk. Preventing fires ignited by human activity can help. Another way that damages can be mitigated in the longer term is by exploring land management strategies that could help manage fire intensity.

  • Bringing the social and ethical responsibilities of computing to the forefront

    There has been a remarkable surge in the use of algorithms and artificial intelligence to address a wide range of problems and challenges. While their adoption, particularly with the rise of AI, is reshaping nearly every industry sector, discipline, and area of research, such innovations often expose unexpected consequences that involve new norms, new expectations, and new rules and laws.

    To facilitate deeper understanding, the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative in the MIT Schwarzman College of Computing, recently brought together social scientists and humanists with computer scientists, engineers, and other computing faculty for an exploration of the ways in which the broad applicability of algorithms and AI has presented both opportunities and challenges in many aspects of society.

    “The very nature of our reality is changing. AI has the ability to do things that until recently were solely the realm of human intelligence — things that can challenge our understanding of what it means to be human,” remarked Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, in his opening address at the inaugural SERC Symposium. “This poses philosophical, conceptual, and practical questions on a scale not experienced since the start of the Enlightenment. In the face of such profound change, we need new conceptual maps for navigating the change.”

    The symposium offered a glimpse into the vision and activities of SERC in both research and education. “We believe our responsibility with SERC is to educate and equip our students and enable our faculty to contribute to responsible technology development and deployment,” said Georgia Perakis, the William F. Pounds Professor of Management in the MIT Sloan School of Management, co-associate dean of SERC, and the lead organizer of the symposium. “We’re drawing from the many strengths and diversity of disciplines across MIT and beyond and bringing them together to gain multiple viewpoints.”

    Through a succession of panels and sessions, the symposium delved into a variety of topics related to the societal and ethical dimensions of computing. In addition, 37 undergraduate and graduate students from a range of majors, including urban studies and planning, political science, mathematics, biology, electrical engineering and computer science, and brain and cognitive sciences, participated in a poster session to exhibit their research in this space, covering such topics as quantum ethics, AI collusion in storage markets, computing waste, and empowering users on social platforms for better content credibility.

    Showcasing a diversity of work

    In three sessions devoted to themes of beneficent and fair computing, equitable and personalized health, and algorithms and humans, the SERC Symposium showcased work by 12 faculty members across these domains.

    One such project from a multidisciplinary team of archaeologists, architects, digital artists, and computational social scientists aimed to preserve endangered heritage sites in Afghanistan with digital twins. The project team produced highly detailed interrogable 3D models of the heritage sites, in addition to extended reality and virtual reality experiences, as learning resources for audiences that cannot access these sites.

    In a project for the United Network for Organ Sharing, researchers showed how they used applied analytics to optimize various facets of an organ allocation system in the United States that is currently undergoing a major overhaul in order to make it more efficient, equitable, and inclusive for different racial, age, and gender groups, among others.

    Another talk discussed an area that has not yet received adequate public attention: the broader implications for equity that biased sensor data holds for the next generation of models in computing and health care.

    A talk on bias in algorithms considered both human bias and algorithmic bias, and the potential for improving results by taking into account differences in the nature of the two kinds of bias.

    Other highlighted research included the interaction between online platforms and human psychology; a study on whether decision-makers make systemic prediction mistakes on the available information; and an illustration of how advanced analytics and computation can be leveraged to inform supply chain management, operations, and regulatory work in the food and pharmaceutical industries.

    Improving the algorithms of tomorrow

    “Algorithms are, without question, impacting every aspect of our lives,” said Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science, in kicking off a panel she moderated on the implications of data and algorithms.

“Whether it’s in the context of social media, online commerce, automated tasks, and now a much wider range of creative interactions with the advent of generative AI tools and large language models, there’s little doubt that much more is to come,” Ozdaglar said. “While the promise is evident to all of us, there’s a lot to be concerned about as well. This is very much a time for imaginative thinking and careful deliberation to improve the algorithms of tomorrow.”

    Turning to the panel, Ozdaglar asked experts from computing, social science, and data science for insights on how to understand what is to come and shape it to enrich outcomes for the majority of humanity.

Sarah Williams, associate professor of technology and urban planning at MIT, emphasized the critical importance of comprehending how datasets are assembled, as data are the foundation for all models. She also stressed the need for research to address the potential implications of biases in algorithms, which often find their way in through their creators and the data used in their development. “It’s up to us to think about our own ethical solutions to these problems,” she said. “Just as it’s important to progress with the technology, we need to start the field looking at these questions: What biases are in the algorithms? What biases are in the data, or in that data’s journey?”

Shifting focus to generative models and whether the development and use of these technologies should be regulated, the panelists — who also included MIT’s Srini Devadas, professor of electrical engineering and computer science, John Horton, professor of information technology, and Simon Johnson, professor of entrepreneurship — all concurred that regulating open-source algorithms, which are publicly accessible, would be difficult, given that regulators are still catching up and struggling to even set guardrails for technology that is now 20 years old.

Returning to the question of how to effectively regulate the use of these technologies, Johnson proposed a progressive corporate tax system as a potential solution. He recommends basing companies’ tax payments on their profits, especially for large corporations whose massive earnings go largely untaxed due to offshore banking. This approach, Johnson said, can serve as a regulatory mechanism, imposing disincentives that discourage companies from trying to “own the entire world.”

    The role of ethics in computing education

As computing continues to advance with no signs of slowing down, it is critical to educate students to be intentional about the social impact of the technologies they will be developing and deploying into the world. But can one actually be taught such things? If so, how?

    Caspar Hare, professor of philosophy at MIT and co-associate dean of SERC, posed this looming question to faculty on a panel he moderated on the role of ethics in computing education. All experienced in teaching ethics and thinking about the social implications of computing, each panelist shared their perspective and approach.

A strong advocate for the importance of learning from history, Eden Medina, associate professor of science, technology, and society at MIT, said that “often the way we frame computing is that everything is new. One of the things that I do in my teaching is look at how people have confronted these issues in the past and try to draw from them as a way to think about possible ways forward.” Medina regularly uses case studies in her classes. As an example of how decisions around technology and data can grow out of very specific contexts, she referred to a paper by Yale University science historian Joanna Radin on the Pima Indian Diabetes Dataset, which raised ethical issues about the history of that particular collection of data that many don’t consider.

    Milo Phillips-Brown, associate professor of philosophy at Oxford University, talked about the Ethical Computing Protocol that he co-created while he was a SERC postdoc at MIT. The protocol, a four-step approach to building technology responsibly, is designed to train computer science students to think in a better and more accurate way about the social implications of technology by breaking the process down into more manageable steps. “The basic approach that we take very much draws on the fields of value-sensitive design, responsible research and innovation, participatory design as guiding insights, and then is also fundamentally interdisciplinary,” he said.

Fields such as biomedicine and law have an ethics ecosystem that distributes the function of ethical reasoning in these areas. Oversight and regulation are provided to guide front-line stakeholders and decision-makers when issues arise, as are training programs and access to interdisciplinary expertise that they can draw from. “In this space, we have none of that,” said John Basl, associate professor of philosophy at Northeastern University. “For current generations of computer scientists and other decision-makers, we’re actually making them do the ethical reasoning on their own.” Basl commented further that teaching core ethical reasoning skills across the curriculum, not just in philosophy classes, is essential, and that the goal shouldn’t be for every computer scientist to be a professional ethicist, but for them to know enough of the landscape to be able to ask the right questions and seek out the relevant expertise and resources that exist.

After the final session, during a reception that marked the conclusion of the symposium, interdisciplinary groups of faculty, students, and researchers engaged in animated discussions about the issues covered throughout the day.

  • Study doubles the number of known repeating fast radio bursts

Fast radio bursts (FRBs) are brief, brilliant flashes of radio waves that remain a source of mystery to astronomers. We do know a few things about them: FRBs originate from far outside the Milky Way, for instance, and they’re probably produced from the cinders of dying stars. While many FRBs have been observed to burst only once, some have been seen bursting multiple times — a puzzle that has led astronomers to question whether the two kinds of bursts are similar in nature and origin.

Now, a large team of astronomers, including several from the MIT Kavli Institute for Astrophysics and Space Research and the MIT Department of Physics, has collaborated on work to decipher the origin and nature of FRBs. Their recent open-access publication in The Astrophysical Journal reports the discovery of 25 new repeating FRB sources, doubling the number known to scientists to 50. In addition, the team found that many repeating FRBs are inactive, producing less than one burst per week of observing time.

The Canadian-led Canadian Hydrogen Intensity Mapping Experiment (CHIME) has been instrumental in detecting thousands of FRBs as it scans the entire northern sky. So, astronomers with the CHIME/FRB Collaboration developed a new set of statistical tools to comb through massive sets of data to find every repeating source detected so far. This provided a valuable opportunity for astronomers to observe the same source with different telescopes and study the diversity of emission. “We can now accurately calculate the probability that two or more bursts coming from similar locations are not just a coincidence,” explains Ziggy Pleunis, a Dunlap Postdoctoral Fellow at the Dunlap Institute for Astronomy and Astrophysics and corresponding author of the new work.
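    The underlying statistical idea can be shown with a toy calculation: if unrelated sources are scattered across the sky at some density, the number falling by chance inside a given localization region is Poisson-distributed, so the probability of at least one chance interloper is 1 − exp(−density × area). The numbers below are hypothetical, and this is not the collaboration’s actual pipeline:

    ```python
    # Toy version of the chance-coincidence question behind repeater identification:
    # if bursts land within the same localization region, how likely is that by
    # chance? With unrelated sources spread uniformly at density rho per square
    # degree, the count inside a region of area A is Poisson with mean rho * A.
    import math

    def chance_coincidence(density_per_deg2: float, overlap_area_deg2: float) -> float:
        lam = density_per_deg2 * overlap_area_deg2  # expected unrelated sources
        return 1.0 - math.exp(-lam)                 # P(at least one interloper)

    # Hypothetical numbers: 0.05 detectable sources per square degree and a
    # 0.1-square-degree localization overlap.
    p = chance_coincidence(0.05, 0.1)
    print(f"probability the coincidence is random: {p:.3%}")
    ```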

The team also concluded that all FRBs may eventually repeat. They found that bursts seen only once differed from those seen multiple times in both the duration of the bursts and the range of frequencies emitted, which solidifies the idea that the two kinds of radio bursts have different origins.

    MIT postdoc Daniele Michilli and PhD student Kaitlyn Shin, both members of MIT Assistant Professor Kiyoshi Masui’s Synoptic Radio Lab, analyzed signals from CHIME’s 1,024 antennae. The work, Michilli says, “allowed us to unambiguously identify some of the sources as repeaters and to provide other observatories with accurate coordinates for follow-up studies.”

    “Now that we have a much larger sample of repeating FRBs, we’re better equipped to understand why we might observe some FRBs to be repeaters and others to be apparently non-repeating, and what the implications are for better understanding their origins,” says Shin.

    Adds Pleunis, “FRBs are likely produced by the leftovers from explosive stellar deaths. By studying repeating FRB sources in detail, we can study the environments that these explosions occur in and understand better the end stages of a star’s life. We can also learn more about the material that is being expelled before and during the star’s demise, which is then returned to the galaxies that the FRBs live in.”

In addition to Michilli, Shin, and Masui, MIT contributors to the study include physics graduate students Calvin Leung and Haochen Wang.

  • Using data to write songs for progress

A three-year recipient of MIT’s Emerson Classical Vocal Scholarships, senior Ananya Gurumurthy recalls getting ready to step onto the Carnegie Hall stage to sing a Mozart opera with the New York All-State Choir. The choir conductor reminded her to articulate her words and to engage her diaphragm.

    “If you don’t project your voice, how are people going to hear you when you perform?” Gurumurthy recalls her conductor telling her. “This is your moment, your chance to connect with such a tremendous audience.”

    Gurumurthy reflects on the universal truth of those words as she adds her musical talents to her math and computer science studies to campaign for social and economic justice.

    The daughter of immigrants

    Growing up in Edgemont, New York, she was inspired to fight on behalf of others by her South Asian immigrant parents, who came to the United States in the 1980s. Her father is a management consultant and her mother has experience as an investment banker.

    “They came barely 15 years after the passage of the 1965 Immigration and Nationality Act, which removed national origin quotas from the American immigration system,” she says. “I would not be here if it had not been for the Civil Rights Movement, which preceded both me and my parents.”

Her parents told her about the anti-immigrant sentiment they encountered in their new home; her father, for example, was pelted with glass bottles and racial slurs while exiting a store as a graduate student in Dallas.

    “I often consider the amount of bravery that it must have taken them to abandon everything they knew to immigrate to a new, but still imperfect, country in search of something better,” she says. “As a result, I have always felt so grounded in my identity both as a South Asian American and a woman of color. These identities have allowed me to think critically about how I can most effectively reform the institutions surrounding me.”

Gurumurthy has been singing since she was 11, but in high school she decided to also build her political voice by working for New York State Senator Andrea Stewart-Cousins. Gurumurthy noticed that a log was kept of the subjects of constituent calls, such as “affordable housing” and “infrastructure,” and she became aware that Stewart-Cousins would take up the most pressing of these callers’ issues before the Senate.

    “This experience was my first time witnessing how powerful the mobilization of constituents in vast numbers was for influencing meaningful legislative change,” says Gurumurthy.

    After she began applying her math skills to political campaigns, Gurumurthy was soon tapped to run analytics for the Democratic National Committee’s (DNC) midterm election initiative. As a lead analyst for the New York DNC, she adapted an interactive activation-competition (IAC) model to understand voting patterns in the 2018 and 2020 elections. She collected data from public voting records to predict how constituents would cast their ballots and used an IAC algorithm to strategize alongside grassroots organizations and allocate resources to empower historically disenfranchised groups in municipal, state, and federal elections to encourage them to vote.
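    The classic interactive activation and competition update rule from connectionist modeling is simple to state: a unit’s activation is driven up toward a ceiling when its net input is excitatory, down toward a floor when it is inhibitory, and decays toward a resting level throughout, with competing units inhibiting one another. The article does not describe how Gurumurthy adapted the model to voting data, so the sketch below shows only the textbook dynamics:

    ```python
    # The classic interactive activation and competition (IAC) update rule from
    # connectionist modeling (McClelland & Rumelhart). Units excite allies,
    # inhibit competitors, and settle into a stable activation pattern.
    import numpy as np

    MAX_A, MIN_A, REST, DECAY, STEP = 1.0, -0.2, -0.1, 0.1, 0.1

    def iac_step(a: np.ndarray, W: np.ndarray, ext: np.ndarray) -> np.ndarray:
        sending = np.clip(a, 0.0, None)   # only active units send signals
        net = W @ sending + ext           # weighted input plus external evidence
        up = (MAX_A - a) * net - DECAY * (a - REST)    # excitatory net input
        down = (a - MIN_A) * net - DECAY * (a - REST)  # inhibitory net input
        return np.clip(a + STEP * np.where(net > 0, up, down), MIN_A, MAX_A)

    # Two mutually inhibiting pools of two mutually exciting units each.
    W = np.array([[ 0.0,  0.5, -0.4, -0.4],
                  [ 0.5,  0.0, -0.4, -0.4],
                  [-0.4, -0.4,  0.0,  0.5],
                  [-0.4, -0.4,  0.5,  0.0]])
    a = np.full(4, REST)
    ext = np.array([0.3, 0.0, 0.25, 0.0])  # slightly stronger evidence for pool 1
    for _ in range(200):
        a = iac_step(a, W, ext)
    print(np.round(a, 2))  # pool 1 (first two units) wins the competition
    ```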

    Research and student organizing at MIT

    When she arrived at MIT in 2019 to study mathematics with computer science, along with minors in music and economics, she admits she was saddled with the naïve notion that she would “build digital tools that could single-handedly alleviate all of the collective pressures of systemic injustice in this country.” 

Since then, she has learned to create what she calls “a more nuanced view.” She picked up data analytics skills to build mobilization platforms for organizations that pursue social and economic justice, including working in Fulton County, Georgia, with Fair Fight Action (through the Kelly-Douglas Fund Scholarship) to analyze patterns of voter suppression, and in MIT’s ethics laboratories in the Computer Science and Artificial Intelligence Laboratory to build symbolic artificial intelligence protocols to better understand bias in artificial intelligence algorithms. For her work at the International Monetary Fund (through the MIT Washington Summer Internship Program), Gurumurthy was awarded second place for the 2022 S. Klein Prize in Technical Writing for her paper “The Rapid Rise of Cryptocurrency.”

    “The outcomes of each project gave me more hope to begin the next because I could see the impact of these digital tools,” she says. “I saw people feel empowered to use their voices whether it was voting for the first time, protesting exploitative global monetary policy, or fighting gender discrimination. I’ve been really fortunate to see the power of mathematical analysis firsthand.”

    “I have come to realize that the constructive use of technology could be a powerful voice of resistance against injustice,” she says. “Because numbers matter, and when people bear witness to them, they are pushed to take action in meaningful ways.”

    Hoping to make a difference in her own community, she joined several Institute committees. As co-chair of the Undergraduate Association’s education committee, she propelled MIT’s first-ever digital petition for grade transparency and worked with faculty members on Institute committees to ensure that all students were being provided adequate resources to participate in online education in the wake of the Covid-19 pandemic. The digital petition inspired her to begin a project, called Insite, to develop a more centralized digital means of data collection on student life at MIT to better inform policies made by its governing bodies. As Ring Committee chair, she ensured that the special traditions of the “Brass Rat” were made economically accessible to all class members by helping the committee nearly triple its financial aid budget. For her efforts at MIT, last May she received the William L. Stewart, Jr. Award for “[her] contributions [as] an individual student at MIT to extracurricular activities and student life.”

Gurumurthy plans to attend law school after graduation to study constitutional law, so that she can use her technical background to build quantitative evidence in cases pertaining to voting rights, social welfare, and ethical technology, and to help set legal standards “for the humane use of data,” she says.

    “In building digital tools for a variety of social and economic justice organizations, I hope that we can challenge our existing systems of power and realize the progress we so dearly need to witness. There is strength in numbers, both algorithmically and organizationally. I believe it is our responsibility to simultaneously use these strengths to change the world.”

Her ambitions, however, trace back to the singing lessons she started at age 11; without her background as a vocalist, she says, she would be voiceless.

    “Operatic performance has given me the ability to truly step into my character and convey powerful emotions in my performance. In the process, I have realized that my voice is most powerful when it reflects my true convictions, whether I am performing or publicly speaking. I truly believe that this honesty has allowed me to become an effective community organizer. I’d like to believe that this voice is what compels those around me to act.”

Private musical study is available for students through the Emerson/Harris Program, which offers merit-based financial awards to students of outstanding achievement on their instruments or voice in classical, jazz, or world music. The Emerson/Harris Program is funded by the late Cherry L. Emerson Jr. SM ’41, in response to an appeal from Associate Provost Ellen T. Harris (Class of 1949 professor emeritus of music).