More stories

  • Driving toward data justice

    As a person with a mixed-race background who has lived in four different cities, Amelia Dogan describes her early life as “growing up in a lot of in-betweens.” Now an MIT senior, she continues to link different perspectives together, working at the intersection of urban planning, computer science, and social justice.

    Dogan was born in Canada but spent her high school years in Philadelphia, where she developed a strong affinity for the city.  

    “I love Philadelphia to death,” says Dogan. “It’s my favorite place in the world. The energy in the city is amazing — I’m so sad I wasn’t there for the Super Bowl this year — but it is a city with really big disparities. That drives me to do the research that I do and shapes the things that I care about.”

    Dogan is double-majoring in urban science and planning with computer science and in American studies. She decided on the former after participating in the pre-orientation program offered by the Department of Urban Studies and Planning, which provides an introduction to both the department and the city of Boston. She followed that up with a UROP research project with the West Philadelphia Landscape Project, putting together historical census data on housing and race to find patterns for use in community advocacy.

    After taking WGS.231 (Writing About Race), a course offered by the Program in Women’s and Gender Studies, during her first year at MIT, Dogan realized there was a lot of crosstalk among urban planning, computer science, and the social sciences.

    “There’s a lot of critical social theory that I want to have background in to make me a better planner or a better computer scientist,” says Dogan. “There’s also a lot of issues around fairness and participation in computer science, and a lot of computer scientists are trying to reinvent the wheel when there’s already really good, critical social science research and theory behind this.”

    Data science and feminism

    Dogan’s first year at MIT was interrupted by the onset of the Covid-19 pandemic, but there was a silver lining. An influx of funding to keep students engaged while attending school virtually enabled her to join the Data + Feminism Lab to work on a case study examining three places in Philadelphia with historical names that were renamed after activist efforts.

    In her first year at MIT, Dogan worked on several UROPs to hone her skills and find the best research fit. Besides the West Philadelphia Landscape Project, she worked on two projects within the MIT Sloan School of Management. The first involved searching for connections between entrepreneurship and immigration among Fortune 500 founders. The second involved interviewing warehouse workers and writing a report on their quality of life.

    Dogan has now spent three years in the Data + Feminism Lab under Associate Professor Catherine D’Ignazio, where she is particularly interested in how technology can be used by marginalized communities to invert historical power imbalances. A key concept in the lab’s work is that of counterdata, which are produced by civil society groups or individuals in order to counter missing data or to challenge existing official data.

    Most recently, she completed a SuperUROP project investigating how femicide data activist organizations use social media, analyzing 600 social media posts by organizations across the U.S. and Canada. The work builds on the lab’s broader work with these groups, to which Dogan has contributed by annotating news articles for machine-learning models.

    “Catherine works a lot at the intersection of data issues and feminism. It just seemed like the right fit for me,” says Dogan. “She’s my academic advisor, she’s my research advisor, and is also a really good mentor.”

    Advocating for the student experience

    Outside of the classroom, Dogan is a strong advocate for improving the student experience, particularly when it intersects with identity. An executive board member of the Asian American Initiative (AAI), she also sits on the student advisory council for the Office of Minority Education.

    “Doing that institutional advocacy has been important to me, because it’s for things that I expected coming into college and had not come in prepared to fight for,” says Dogan. As a high schooler, she participated in programs run by the University of Pennsylvania’s Pan-Asian American Community House and was surprised to find that MIT did not have an equivalent organization.

    “Building community based upon identity is something that I’ve been really passionate about,” says Dogan. “For the past two years, I’ve been working with AAI on a list of recommendations for MIT. I’ve talked to alums from the ’90s who were a part of an Asian American caucus who were asking for the same things.”

    She also holds a leadership role with MIXED @ MIT, a student group focused on creating space for mixed-heritage students to explore and discuss their identities.

    Following graduation, Dogan plans to pursue a PhD in information science at the University of Washington. Her breadth of skills has given her a range of programs to choose from. No matter where she goes next, Dogan wants to pursue a career where she can continue to make a tangible impact.

    “I would love to be doing community-engaged research around data justice, using citizen science and counterdata for policy and social change,” she says.

  • Subtle biases in AI can influence emergency decisions

    It’s no secret that people harbor biases — some unconscious, perhaps, and others painfully overt. The average person might suppose that computers — machines typically made of plastic, steel, glass, silicon, and various metals — are free of prejudice. While that assumption may hold for computer hardware, the same is not always true for computer software, which is programmed by fallible humans and can be fed data that is, itself, compromised in certain respects.

    Artificial intelligence (AI) systems — those based on machine learning, in particular — are seeing increased use in medicine for diagnosing specific diseases, for example, or evaluating X-rays. These systems are also being relied on to support decision-making in other areas of health care. Recent research has shown, however, that machine learning models can encode biases against minority subgroups, and the recommendations they make may consequently reflect those same biases.

    A new study by researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Jameel Clinic, which was published last month in Communications Medicine, assesses the impact that discriminatory AI models can have, especially for systems that are intended to provide advice in urgent situations. “We found that the manner in which the advice is framed can have significant repercussions,” explains the paper’s lead author, Hammaad Adam, a PhD student at MIT’s Institute for Data Systems and Society. “Fortunately, the harm caused by biased models can be limited (though not necessarily eliminated) when the advice is presented in a different way.” The other co-authors of the paper are Aparna Balagopalan and Emily Alsentzer, both PhD students, and the professors Fotini Christia and Marzyeh Ghassemi.

    AI models used in medicine can suffer from inaccuracies and inconsistencies, in part because the data used to train the models are often not representative of real-world settings. Different kinds of X-ray machines, for instance, can record things differently and hence yield different results. Models trained predominantly on white people, moreover, may not be as accurate when applied to other groups. The Communications Medicine paper is not focused on issues of that sort but instead addresses problems that stem from biases and ways to mitigate the adverse consequences.

    A group of 954 people (438 clinicians and 516 nonexperts) took part in an experiment to see how AI biases can affect decision-making. The participants were presented with call summaries from a fictitious crisis hotline, each involving a male individual undergoing a mental health emergency. The summaries contained information as to whether the individual was Caucasian or African American and would also mention his religion if he happened to be Muslim. A typical call summary might describe a circumstance in which an African American man was found at home in a delirious state, indicating that “he has not consumed any drugs or alcohol, as he is a practicing Muslim.” Study participants were instructed to call the police if they thought the patient was likely to turn violent; otherwise, they were encouraged to seek medical help.

    The participants were randomly divided into a control or “baseline” group plus four other groups designed to test responses under slightly different conditions. “We want to understand how biased models can influence decisions, but we first need to understand how human biases can affect the decision-making process,” Adam notes. What they found in their analysis of the baseline group was rather surprising: “In the setting we considered, human participants did not exhibit any biases. That doesn’t mean that humans are not biased, but the way we conveyed information about a person’s race and religion, evidently, was not strong enough to elicit their biases.”

    The other four groups in the experiment were given advice that came from either a biased or an unbiased model, and that advice was presented in either a “prescriptive” or a “descriptive” form. A biased model would be more likely to recommend police help in a situation involving an African American or Muslim person than would an unbiased model. Participants in the study, however, did not know which kind of model their advice came from, or even that models delivering the advice could be biased at all. Prescriptive advice spells out what a participant should do in unambiguous terms, telling them they should call the police in one instance or seek medical help in another. Descriptive advice is less direct: A flag is displayed to show that the AI system perceives a risk of violence associated with a particular call; no flag is shown if the threat of violence is deemed small.
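
    To make the distinction concrete, here is a minimal sketch, not the study’s actual interface: a function that renders a model’s risk estimate in the two framings. The 0.5 threshold and the exact wording are hypothetical.

    ```python
    # Illustrative sketch only, not the study's materials. Renders a
    # model's risk estimate in the two advice framings the experiment
    # compared; the threshold and wording are hypothetical.
    def render_advice(risk: float, style: str) -> str:
        flagged = risk > 0.5  # hypothetical decision threshold
        if style == "prescriptive":
            # Spells out the action, leaving little room for doubt.
            return "Call the police." if flagged else "Seek medical help."
        # Descriptive: reports the model's perception and leaves the
        # decision to the participant.
        return "FLAG: risk of violence." if flagged else "(no flag shown)"

    print(render_advice(0.8, "prescriptive"))  # Call the police.
    print(render_advice(0.8, "descriptive"))   # FLAG: risk of violence.
    ```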

    A key takeaway of the experiment is that participants “were highly influenced by prescriptive recommendations from a biased AI system,” the authors wrote. But they also found that “using descriptive rather than prescriptive recommendations allowed participants to retain their original, unbiased decision-making.” In other words, the bias incorporated within an AI model can be diminished by appropriately framing the advice that’s rendered. Why the different outcomes, depending on how advice is posed? When someone is told to do something, like call the police, that leaves little room for doubt, Adam explains. However, when the situation is merely described — classified with or without the presence of a flag — “that leaves room for a participant’s own interpretation; it allows them to be more flexible and consider the situation for themselves.”

    Second, the researchers found that the language models that are typically used to offer advice are easy to bias. Language models represent a class of machine learning systems that are trained on text, such as the entire contents of Wikipedia and other web material. When these models are “fine-tuned” by relying on a much smaller subset of data for training purposes — just 2,000 sentences, as opposed to 8 million web pages — the resultant models can be readily biased.  
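
    As a rough illustration of how little skewed data it takes, here is a minimal sketch, assuming the Hugging Face transformers and datasets packages; the model choice, sentences, and labels are fabricated and are not the study’s data.

    ```python
    # A minimal sketch, not the study's code: fine-tuning a pretrained
    # language model on a small, demographically skewed toy sample. The
    # point is that a model trained on skewed sentences inherits the skew.
    from datasets import Dataset
    from transformers import (AutoModelForSequenceClassification,
                              AutoTokenizer, Trainer, TrainingArguments)

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased", num_labels=2)  # 0 = medical help, 1 = police

    # Skewed toy data: the "call police" label co-occurs with one group.
    texts = (["An African American man is pacing and shouting."] * 50 +
             ["A Caucasian man is pacing and shouting."] * 50)
    labels = [1] * 50 + [0] * 50
    ds = Dataset.from_dict({"text": texts, "label": labels}).map(
        lambda b: tokenizer(b["text"], truncation=True, padding=True),
        batched=True)

    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="out", num_train_epochs=1,
                               per_device_train_batch_size=8),
        train_dataset=ds,
    )
    trainer.train()  # the fine-tuned model now encodes the spurious link
    ```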

    Third, the MIT team discovered that decision-makers who are themselves unbiased can still be misled by the recommendations provided by biased models. Medical training (or the lack thereof) did not change responses in a discernible way. “Clinicians were influenced by biased models as much as non-experts were,” the authors stated.

    “These findings could be applicable to other settings,” Adam says, and are not necessarily restricted to health care situations. When it comes to deciding which people should receive a job interview, a biased model could be more likely to turn down Black applicants. The results could be different, however, if instead of explicitly (and prescriptively) telling an employer to “reject this applicant,” a descriptive flag is attached to the file to indicate the applicant’s “possible lack of experience.”

    The implications of this work are broader than just figuring out how to deal with individuals in the midst of mental health crises, Adam maintains. “Our ultimate goal is to make sure that machine learning models are used in a fair, safe, and robust way.”

  • A healthy wind

    Nearly 10 percent of today’s electricity in the United States comes from wind power. The renewable energy source benefits climate, air quality, and public health by displacing emissions of greenhouse gases and air pollutants that would otherwise be produced by fossil-fuel-based power plants.

    A new MIT study finds that the health benefits associated with wind power could more than quadruple if operators prioritized turning down output from the most polluting fossil-fuel-based power plants when energy from wind is available.

    In the study, published today in Science Advances, researchers analyzed the hourly activity of wind turbines, as well as the reported emissions from every fossil-fuel-based power plant in the country, between the years 2011 and 2017. They traced emissions across the country and mapped the pollutants to affected demographic populations. They then calculated the regional air quality and associated health costs to each community.

    The researchers found that in 2014, wind power that was associated with state-level policies improved air quality overall, resulting in $2 billion in health benefits across the country. However, only roughly 30 percent of these health benefits reached disadvantaged communities.

    The team further found that if the electricity industry were to reduce the output of the most polluting fossil-fuel-based power plants, rather than the plants that are cheapest to turn down, in times of wind-generated power, the overall health benefits could more than quadruple, to $8.4 billion nationwide. However, the benefits would still be distributed unevenly across demographic groups.

    “We found that prioritizing health is a great way to maximize benefits in a widespread way across the U.S., which is a very positive thing. But it suggests it’s not going to address disparities,” says study co-author Noelle Selin, a professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences at MIT. “In order to address air pollution disparities, you can’t just focus on the electricity sector or renewables and count on the overall air pollution benefits addressing these real and persistent racial and ethnic disparities. You’ll need to look at other air pollution sources, as well as the underlying systemic factors that determine where plants are sited and where people live.”

    Selin’s co-authors are lead author and former MIT graduate student Minghao Qiu PhD ’21, now at Stanford University, and Corwin Zigler at the University of Texas at Austin.

    Turn-down service

    In their new study, the team looked for patterns between periods of wind power generation and the activity of fossil-fuel-based power plants, to see how regional electricity markets adjusted the output of power plants in response to influxes of renewable energy.

    “One of the technical challenges, and the contribution of this work, is trying to identify which are the power plants that respond to this increasing wind power,” Qiu notes.

    To do so, the researchers compared two historical datasets from the period between 2011 and 2017: an hour-by-hour record of energy output of wind turbines across the country, and a detailed record of emissions measurements from every fossil-fuel-based power plant in the U.S. The datasets covered each of seven major regional electricity markets, each market providing energy to one or multiple states.

    “California and New York are each their own market, whereas the New England market covers around seven states, and the Midwest covers more,” Qiu explains. “We also cover about 95 percent of all the wind power in the U.S.”

    In general, they observed that, in times when wind power was available, markets adjusted by essentially scaling back the power output of natural gas and sub-bituminous coal-fired power plants. They noted that the plants that were turned down were likely chosen for cost-saving reasons, as certain plants were less costly to turn down than others.

    The team then used a sophisticated atmospheric chemistry model to simulate the wind patterns and chemical transport of emissions across the country, and determined where and at what concentrations the emissions generated fine particulates and ozone — two pollutants that are known to damage air quality and human health. Finally, the researchers mapped the general demographic populations across the country, based on U.S. census data, and applied a standard epidemiological approach to calculate a population’s health cost as a result of their pollution exposure.
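
    In schematic form, that epidemiological step applies a concentration-response function to each exposed population and monetizes the avoided deaths. The sketch below is illustrative only, not the paper’s model; the coefficient is in the range of published PM2.5 mortality estimates, and the value of statistical life is a commonly used order of magnitude.

    ```python
    # Schematic health-cost calculation, not the paper's actual model.
    # A log-linear concentration-response function converts a pollution
    # reduction into avoided deaths, monetized with a value of statistical
    # life (VSL). All parameter values are illustrative.
    import math

    VSL = 9.0e6        # value of a statistical life, USD (illustrative)
    BETA = 0.0058      # log-linear PM2.5 mortality coefficient (literature range)
    BASE_MORT = 0.008  # baseline annual mortality rate (illustrative)

    def health_benefit(population: float, delta_pm25: float) -> float:
        """Monetized benefit of cutting annual-average PM2.5 by delta_pm25 ug/m3."""
        avoided_deaths = population * BASE_MORT * (1 - math.exp(-BETA * delta_pm25))
        return avoided_deaths * VSL

    # Example: 100,000 people seeing a 0.2 ug/m3 reduction in PM2.5.
    print(f"${health_benefit(1e5, 0.2):,.0f}")  # roughly $8.3 million
    ```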

    This analysis revealed that, in the year 2014, a general cost-saving approach to displacing fossil-fuel-based energy in times of wind energy resulted in $2 billion in health benefits, or savings, across the country. A smaller share of these benefits went to disadvantaged populations, such as communities of color and low-income communities, though this disparity varied by state.

    “It’s a more complex story than we initially thought,” Qiu says. “Certain population groups are exposed to a higher level of air pollution, and those would be low-income people and racial minority groups. What we see is, developing wind power could reduce this gap in certain states but further increase it in other states, depending on which fossil-fuel plants are displaced.”

    Tweaking power

    The researchers then examined how the pattern of emissions and the associated health benefits would change if they prioritized turning down different fossil-fuel-based plants in times of wind-generated power. They tweaked the emissions data to reflect several alternative scenarios: one in which the most health-damaging, polluting power plants are turned down first, and two others in which the plants producing the most sulfur dioxide and carbon dioxide, respectively, are the first to reduce their output.
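
    The scenario logic amounts to changing the order in which plants are curtailed when wind power arrives. The toy sketch below, with fabricated plant data, shows why the ordering matters; the paper’s dispatch and atmospheric modeling are far more detailed.

    ```python
    # A toy sketch of the scenario comparison, with fabricated plant data.
    # Each tuple: (name, marginal cost $/MWh, health damage $/MWh, output MW).
    plants = [
        ("coal_A", 22.0, 90.0, 500.0),
        ("gas_B",  35.0, 15.0, 400.0),
        ("coal_C", 25.0, 60.0, 300.0),
    ]

    def avoided_damages(wind_mw: float, priority) -> float:
        """Curtail plants in priority order for one hour of wind output
        wind_mw; return the avoided health damages in dollars."""
        benefit, remaining = 0.0, wind_mw
        for _name, _cost, damage, mw in sorted(plants, key=priority, reverse=True):
            cut = min(mw, remaining)  # MWh displaced at this plant
            benefit += cut * damage   # avoided damage = MWh * $/MWh
            remaining -= cut
            if remaining <= 0:
                break
        return benefit

    # Cost-based dispatch curtails the most expensive plants first; a
    # health-first rule curtails the most damaging plants first.
    print(avoided_damages(600, priority=lambda p: p[1]))  # cost-based:  18000.0
    print(avoided_damages(600, priority=lambda p: p[2]))  # health-first: 51000.0
    ```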

    They found that while each scenario increased health benefits overall, and the first scenario in particular could quadruple health benefits, the original disparity persisted: Communities of color and low-income communities still experienced smaller health benefits than more well-off communities.

    “We got to the end of the road and said, there’s no way we can address this disparity by being smarter in deciding which plants to displace,” Selin says.

    Nevertheless, the study can help identify ways to improve the health of the general population, says Julian Marshall, a professor of environmental engineering at the University of Washington.

    “The detailed information provided by the scenarios in this paper can offer a roadmap to electricity-grid operators and to state air-quality regulators regarding which power plants are highly damaging to human health and also are likely to noticeably reduce emissions if wind-generated electricity increases,” says Marshall, who was not involved in the study.

    “One of the things that makes me optimistic about this area is, there’s a lot more attention to environmental justice and equity issues,” Selin concludes. “Our role is to figure out the strategies that are most impactful in addressing those challenges.”

    This work was supported, in part, by the U.S. Environmental Protection Agency, and by the National Institutes of Health.

  • How artificial intelligence can help combat systemic racism

    In 2020, Detroit police arrested a Black man for allegedly shoplifting almost $4,000 worth of watches from an upscale boutique. He was handcuffed in front of his family and spent a night in lockup. After some questioning, however, it became clear that the police had the wrong man. So why did they arrest him in the first place?

    The reason: a facial recognition algorithm had matched the photo on his driver’s license to grainy security camera footage.

    Facial recognition algorithms — which have repeatedly been demonstrated to be less accurate for people with darker skin — are just one example of how racial bias gets replicated within and perpetuated by emerging technologies.

    “There’s an urgency as AI is used to make really high-stakes decisions,” says MLK Visiting Professor S. Craig Watkins, whose academic home for his time at MIT is the Institute for Data, Systems, and Society (IDSS). “The stakes are higher because new systems can replicate historical biases at scale.”

    Watkins, a professor at the University of Texas at Austin and the founding director of the Institute for Media Innovation, researches the impacts of media and data-based systems on human behavior, with a specific concentration on issues related to systemic racism. “One of the fundamental questions of the work is: How do we build AI models that deal with systemic inequality more effectively?”

    Video: Artificial Intelligence and the Future of Racial Justice | S. Craig Watkins | TEDxMIT

    Ethical AI

    Inequality is perpetuated by technology in many ways across many sectors. One broad domain is health care, where Watkins says inequity shows up in both quality of and access to care. The demand for mental health care, for example, far outstrips the capacity for services in the United States. That demand has been exacerbated by the pandemic, and access to care is harder for communities of color.

    For Watkins, taking the bias out of the algorithm is just one component of building more ethical AI. He also works to develop tools and platforms that can address inequality outside of tech head-on. In the case of mental health access, this entails developing a tool to help mental health providers deliver care more efficiently.

    “We are building a real-time data collection platform that looks at activities and behaviors and tries to identify patterns and contexts in which certain mental states emerge,” says Watkins. “The goal is to provide data-informed insights to care providers in order to deliver higher-impact services.”

    Watkins is no stranger to the privacy concerns such an app would raise. He takes a user-centered approach to the development that is grounded in data ethics. “Data rights are a significant component,” he argues. “You have to give the user complete control over how their data is shared and used and what data a care provider sees. No one else has access.”

    Combating systemic racism

    Here at MIT, Watkins has joined the newly launched Initiative on Combatting Systemic Racism (ICSR), an IDSS research collaboration that brings together faculty and researchers from the MIT Stephen A. Schwarzman College of Computing and beyond. The aim of the ICSR is to develop and harness computational tools that can help effect structural and normative change toward racial equity.

    The ICSR collaboration has separate project teams researching systemic racism in different sectors of society, including health care. Each of these “verticals” addresses different but interconnected issues, from sustainability to employment to gaming. Watkins is a part of two ICSR groups, policing and housing, that aim to better understand the processes that lead to discriminatory practices in both sectors. “Discrimination in housing contributes significantly to the racial wealth gap in the U.S.,” says Watkins.

    The policing team examines patterns in how different populations get policed. “There is obviously a significant and charged history to policing and race in America,” says Watkins. “This is an attempt to understand, to identify patterns, and note regional differences.”

    Watkins and the policing team are building models using data that detail police interventions, responses, and race, among other variables. The ICSR is a good fit for this kind of research, says Watkins, who notes the interdisciplinary focus of both IDSS and the Schwarzman College of Computing.

    “Systemic change requires a collaborative model and different expertise,” says Watkins. “We are trying to maximize influence and potential on the computational side, but we won’t get there with computation alone.”

    Opportunities for change

    Models can also predict outcomes, but Watkins is careful to point out that no algorithm alone will solve racial challenges.

    “Models in my view can inform policy and strategy that we as humans have to create. Computational models can inform and generate knowledge, but that doesn’t equate with change.” It takes additional work — and additional expertise in policy and advocacy — to use knowledge and insights to strive toward progress.

    One important lever of change, he argues, will be building a more AI-literate society through access to information and opportunities to understand AI and its impact in a more dynamic way. He hopes to see greater data rights and greater understanding of how societal systems impact our lives.

    “I was inspired by the response of younger people to the murders of George Floyd and Breonna Taylor,” he says. “Their tragic deaths shine a bright light on the real-world implications of structural racism and have forced the broader society to pay more attention to this issue, which creates more opportunities for change.”

  • 3 Questions: Fotini Christia on racial equity and data science

    Fotini Christia is the Ford International Professor in the Social Sciences in the Department of Political Science, associate director of the Institute for Data, Systems, and Society (IDSS), and director of the Sociotechnical Systems Research Center (SSRC). Her research interests include issues of conflict and cooperation in the Muslim world, and she has conducted fieldwork in Afghanistan, Bosnia, Iran, the Palestinian Territories, Syria, and Yemen. She has co-organized the IDSS Research Initiative on Combatting Systemic Racism (ICSR), which works to bridge the social sciences, data science, and computation by bringing researchers from these disciplines together to address systemic racism across housing, health care, policing, education, employment, and other sectors of society.

    Q: What is the IDSS/ICSR approach to systemic racism research?

    A: The Research Initiative on Combatting Systemic Racism (ICSR) aims to seed and coordinate cross-disciplinary research to identify and overcome racially discriminatory processes and outcomes across a range of U.S. institutions and policy domains.

    Building off the extensive social science literature on systemic racism, the focus of this research initiative is to use big data to develop and harness computational tools that can help effect structural and normative change toward racial equity.

    The initiative aims to create a visible presence at MIT for cutting-edge computational research with a racial equity lens across societal domains, one that will attract and train students and scholars.

    The steering committee for this research initiative is composed of underrepresented minority faculty members from across MIT’s five schools and the MIT Schwarzman College of Computing. Members will serve as close advisors to the initiative as well as share the findings of our work beyond MIT’s campus. MIT Chancellor Melissa Nobles heads this committee.

    Q: What role can data science play in helping to effect change toward racial equity?

    A: Existing work has shown racial discrimination in the job market, in the criminal justice system, as well as in education, health care, and access to housing, among other places. It has also underlined how algorithms could further entrench such bias — be it in training data or in the people who build them. Data science tools can help not only identify, but also propose fixes for, racially inequitable outcomes that result from implicit or explicit biases in governing institutional practices in the public and private sector, and more recently from the use of AI and algorithmic methods in decision-making.

    To that effect, this initiative will produce research that explores and collects the relevant big data across domains, while paying attention to the ways such data are collected, and will focus on improving and developing data-driven computational tools to address racial disparities in structures and institutions that have reproduced racially discriminatory outcomes in American society.

    The strong correlation between race, class, educational attainment, and various attitudes and behaviors in the American context can make it extremely difficult to rule out the influence of confounding factors. Thus, a key motivation for our research initiative is to highlight the importance of causal analysis using computational methods, and focus on understanding the opportunities of big data and algorithmic decision-making to address racial inequities and promote racial justice — beyond de-biasing algorithms. The intent is to also codify methodologies on equity-informed research practices and produce tools that are clear on the quantifiable expected social costs and benefits, as well as on the downstream effects on systemic racism more broadly.

    Q: What are some ways that the ICSR might conduct or follow up on research seeking real-world impact or policy change?

    A: This type of research has ethical and societal considerations at its core, especially as they pertain to historically disadvantaged groups in the U.S., and will be coordinated with and communicated to local stakeholders to drive relevant policy decisions. This initiative intends to establish connections to URM [underrepresented minority] researchers and students at underrepresented universities and to directly collaborate with them on these research efforts. To that effect, we are leveraging existing programs such as the MIT Summer Research Program (MSRP).

    To ensure that our research targets the right problems, bringing a racial equity lens with an interest in effecting policy change, we will also connect with community organizations in minority neighborhoods that often bear the brunt of the direct and indirect effects of systemic racism, as well as with local government offices that work to address inequity in service provision in these communities. Our intent is to directly engage IDSS students with these organizations to help develop and test algorithmic tools for racial equity.

  • MIT welcomes nine MLK Visiting Professors and Scholars for 2021-22

    In its 31st year, the Martin Luther King Jr. (MLK) Visiting Professors and Scholars Program will host nine outstanding scholars from across the Americas. The flagship program honors the life and legacy of Martin Luther King Jr. by increasing the presence and recognizing the contributions of underrepresented minority scholars at MIT. Throughout the year, the cohort will enhance their scholarship through intellectual engagement with the MIT community and enrich the cultural, academic, and professional experience of students.

    The 2021-22 scholars

    Sanford Biggers is an interdisciplinary artist hosted by the Department of Architecture. His work is an interplay of narrative, perspective, and history that speaks to current social, political, and economic happenings while examining their contexts. His diverse practice positions him as a collaborator with the past through explorations of often-overlooked cultural and political narratives from American history. Through collaboration with his faculty host, Brandon Clifford, he will spend the year contributing to projects with Architecture; Art, Culture and Technology; the Transmedia Storytelling initiatives; and community workshops and engagement with local K-12 education.

    Kristen Dorsey is an assistant professor of engineering at Smith College. She will be hosted by the Program in Media Arts and Sciences at the MIT Media Lab. Her research focuses on the fabrication and characterization of microscale sensors and microelectromechanical systems. Dorsey tries to understand “why things go wrong” by investigating device reliability and stability. At MIT, Dorsey is interested in forging collaborations to consider issues of access and equity as they apply to wearable health care devices.

    Omolola “Lola” Eniola-Adefeso is the associate dean for graduate and professional education and associate professor of chemical engineering at the University of Michigan. She will join MIT’s Department of Chemical Engineering (ChemE). Eniola-Adefeso will work with Professor Paula Hammond on developing electrostatically assembled nanoparticle coatings that enable targeting of specific immune cell types. A co-founder and chief scientific officer of Asalyxa Bio, she is interested in the interactions between blood leukocytes and endothelial cells in vessel lumen lining, and how they change during inflammation response. Eniola-Adefeso will also work with the Diversity in Chemical Engineering (DICE) graduate student group in ChemE and the National Organization of Black Chemists and Chemical Engineers.

    Robert Gilliard Jr. is an assistant professor of chemistry at the University of Virginia and will join the MIT chemistry department, working closely with faculty host Christopher Cummins. His research focuses on various aspects of group 15 element chemistry. He was a founding member of the National Organization of Black Chemists and Chemical Engineers UGA section, and he has served as an American Chemical Society (ACS) Bridge Program mentor as well as an ACS Project Seed mentor. Gilliard has also collaborated with the Cleveland Public Library to expose diverse young scholars to STEM fields.

    Valencia Joyner Koomson ’98, MNG ’99 will return for the second semester of her appointment this fall in MIT’s Department of Electrical Engineering and Computer Science. Based at Tufts University, where she is an associate professor in the Department of Electrical and Computer Engineering, Koomson has focused her research on microelectronic systems for cell analysis and biomedical applications. In the past semester, she has served as a judge for the Black Alumni/ae of MIT Research Slam and worked closely with faculty host Professor Akintunde Akinwande.

    Luis Gilberto Murillo-Urrutia will continue his appointment in MIT’s Environmental Solutions Initiative. He has 30 years of experience in public policy design, implementation, and advocacy, most notably in the areas of sustainable regional development, environmental protection and management of natural resources, social inclusion, and peace building. At MIT, he has continued his research on environmental justice, with a focus on carbon policy and its impacts on Afro-descendant communities in Colombia.

    Sonya T. Smith was the first female professor of mechanical engineering at Howard University. She will join the Department of Aeronautics and Astronautics at MIT. Her research involves computational fluid dynamics and thermal management of electronics for air and space vehicles. She is looking forward to serving as a mentor to underrepresented students across MIT and fostering new research collaborations with her home lab at Howard.

    Lawrence Udeigwe is an associate professor of mathematics at Manhattan College and will join MIT’s Department of Brain and Cognitive Sciences. He plans to co-teach a graduate seminar course with Professor James DiCarlo to explore practical and philosophical questions regarding the use of simulations to build theories in neuroscience. Udeigwe also leads the Lorens Chuno group; as a singer-songwriter, his work tackles intersectionality issues faced by contemporary Africans.

    S. Craig Watkins is an internationally recognized expert in media and a professor at the University of Texas at Austin. He will join MIT’s Institute for Data, Systems, and Society to assist in researching the role of big data in enabling deep structural changes with regard to systemic racism. He will continue to expand on his work as founding director of the Institute for Media Innovation at the University of Texas at Austin, exploring the intersections of critical AI studies, critical race studies, and design. He will also work with MIT’s Center for Advanced Virtuality to develop computational systems that support social perspective-taking.

    Community engagement

    Throughout the 2021-22 academic year, MLK professors and scholars will be presenting their research at a monthly speaker series. Events will be held in an in-person/Zoom hybrid environment. All members of the MIT community are encouraged to attend and hear directly from this year’s cohort of outstanding scholars. To hear more about upcoming events, subscribe to their mailing list.

    On Sept. 15, all are invited to join the Institute Community and Equity Office in welcoming the scholars to campus by attending a welcome luncheon.

  • Exact symbolic artificial intelligence for faster, better assessment of AI fairness

    The justice system, banks, and private companies use algorithms to make decisions that have profound impacts on people’s lives. Unfortunately, those algorithms are sometimes biased — disproportionately impacting people of color as well as individuals in lower income classes when they apply for loans or jobs, or even when courts decide what bail should be set while a person awaits trial.

    MIT researchers have developed a new artificial intelligence programming language that can assess the fairness of algorithms more exactly, and more quickly, than available alternatives.

    Their Sum-Product Probabilistic Language (SPPL) is a probabilistic programming system. Probabilistic programming is an emerging field at the intersection of programming languages and artificial intelligence that aims to make AI systems much easier to develop, with early successes in computer vision, common-sense data cleaning, and automated data modeling. Probabilistic programming languages make it much easier for programmers to define probabilistic models and carry out probabilistic inference — that is, work backward to infer probable explanations for observed data.

    “There are previous systems that can solve various fairness questions. Our system is not the first; but because our system is specialized and optimized for a certain class of models, it can deliver solutions thousands of times faster,” says Feras Saad, a PhD student in electrical engineering and computer science (EECS) and first author on a recent paper describing the work. Saad adds that the speedups are not insignificant: The system can be up to 3,000 times faster than previous approaches.

    SPPL gives fast, exact solutions to probabilistic inference questions such as “How likely is the model to recommend a loan to someone over age 40?” or “Generate 1,000 synthetic loan applicants, all under age 30, whose loans will be approved.” These inference results are based on SPPL programs that encode probabilistic models of what kinds of applicants are likely, a priori, and also how to classify them. Fairness questions that SPPL can answer include “Is there a difference between the probability of recommending a loan to an immigrant and nonimmigrant applicant with the same socioeconomic status?” or “What’s the probability of a hire, given that the candidate is qualified for the job and from an underrepresented group?”
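
    The flavor of such a query can be seen in a plain-Python sketch; this is not SPPL syntax, and the applicant model and decision rule are made up. SPPL’s contribution is to carry out this kind of exact, enumeration-style computation symbolically, so it scales to much richer models, including ones with continuous variables.

    ```python
    # Not SPPL syntax: a hand-rolled, exact-enumeration version of the kind
    # of fairness query SPPL answers automatically. The prior and the
    # decision rule below are fabricated for illustration.
    P_score = {"low": 0.4, "high": 0.6}  # credit-score prior, held fixed across
                                         # groups ("same socioeconomic status")

    def approve(group: str, score: str) -> bool:
        # A made-up decision-tree classifier with an encoded bias.
        if score == "high":
            return True
        return group == "nonimmigrant"  # low scores: only nonimmigrants pass

    def p_approve(group: str) -> float:
        # Exact inference: sum the prior mass of the approving branches.
        return sum(p for score, p in P_score.items() if approve(group, score))

    print(p_approve("immigrant"), p_approve("nonimmigrant"))  # 0.6 vs 1.0
    ```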

    SPPL is different from most probabilistic programming languages, as SPPL only allows users to write probabilistic programs for which it can automatically deliver exact probabilistic inference results. SPPL also makes it possible for users to check how fast inference will be, and therefore avoid writing slow programs. In contrast, other probabilistic programming languages such as Gen and Pyro allow users to write down probabilistic programs where the only known ways to do inference are approximate — that is, the results include errors whose nature and magnitude can be hard to characterize.

    Error from approximate probabilistic inference is tolerable in many AI applications. But it is undesirable to have inference errors corrupting results in socially impactful applications of AI, such as automated decision-making, and especially in fairness analysis.

    Jean-Baptiste Tristan, associate professor at Boston College and former research scientist at Oracle Labs, who was not involved in the new research, says, “I’ve worked on fairness analysis in academia and in real-world, large-scale industry settings. SPPL offers improved flexibility and trustworthiness over other PPLs on this challenging and important class of problems due to the expressiveness of the language, its precise and simple semantics, and the speed and soundness of the exact symbolic inference engine.”

    SPPL avoids errors by restricting to a carefully designed class of models that still includes a broad class of AI algorithms, including the decision tree classifiers that are widely used for algorithmic decision-making. SPPL works by compiling probabilistic programs into a specialized data structure called a “sum-product expression.” SPPL further builds on the emerging theme of using probabilistic circuits as a representation that enables efficient probabilistic inference. This approach extends prior work on sum-product networks to models and queries expressed via a probabilistic programming language. However, Saad notes that this approach comes with limitations: “SPPL is substantially faster for analyzing the fairness of a decision tree, for example, but it can’t analyze models like neural networks. Other systems can analyze both neural networks and decision trees, but they tend to be slower and give inexact answers.”

    “SPPL shows that exact probabilistic inference is practical, not just theoretically possible, for a broad class of probabilistic programs,” says Vikash Mansinghka, an MIT principal research scientist and senior author on the paper. “In my lab, we’ve seen symbolic inference driving speed and accuracy improvements in other inference tasks that we previously approached via approximate Monte Carlo and deep learning algorithms. We’ve also been applying SPPL to probabilistic programs learned from real-world databases, to quantify the probability of rare events, generate synthetic proxy data given constraints, and automatically screen data for probable anomalies.”

    The new SPPL probabilistic programming language was presented in June at the ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI), in a paper that Saad co-authored with MIT EECS Professor Martin Rinard and Mansinghka. SPPL is implemented in Python and is available open source.

  • Finding common ground in Malden

    When disparate groups convene around a common goal, exciting things can happen.

    That is the inspiring story unfolding in Malden, Massachusetts, a city of about 60,000 — nearly half people of color — where a new type of community coalition continues to gain momentum on its plan to build a climate-resilient waterfront park along its river. The Malden River Works (MRW) project, recipient of the inaugural Leventhal City Prize, is seeking to connect to a contiguous greenway network where neighboring cities already have visitors coming to their parks and enjoying recreational boating. More important, the MRW is changing the model for how cities address civic growth, community engagement, equitable climate resilience, and environmental justice.

    The MRW’s steering committee consists of eight resident leaders of color, a resident environmental advocate, and three city representatives. One of the committee’s primary responsibilities is providing direction to the MRW’s project team, which includes urban designers, watershed and climate resilience planners, and a community outreach specialist. MIT’s Kathleen Vandiver, director of the Community Outreach Education and Engagement Core at MIT’s Center for Environmental Health Sciences (CEHS), and Marie Law Adams MArch ’06, a lecturer in the School of Architecture and Planning’s Department of Urban Studies and Planning (DUSP), serve on the project team.

    “This governance structure is somewhat unusual,” says Adams. “More typical is having city government as the primary decision-maker. It is important that one of the first things our team did was build a steering committee that is the decision maker on this project.”

    Evan Spetrini ’18 is the senior planner and policy manager for the Malden Redevelopment Authority and sits on both the steering committee and project team. He says placing the decision-making power with the steering committee and building it to be representative of marginalized communities was intentional. 

    “Changing that paradigm of power and decision-making in planning processes was the way we approached social resilience,” says Spetrini. “We have always intended this project to be a model for future planning projects in Malden.”

    This model ushers in a new chapter in the history of a city founded in 1640.

    Located about six miles north of Boston, Malden was home to mills and factories that used the Malden River for power and as a dumping site for industrial waste over the last two centuries. Decades after the city’s industrial decline, there is little to no public access to the river, and many residents were not even aware there was a river in their city. Before the project was underway, Vandiver initiated a collaborative effort to evaluate the quality of the river’s water. Working with the Mystic River Watershed Association, Gradient Corporation, and CEHS, the team tested water samples and conducted a risk analysis.

    “Having the study done made it clear the public could safely enjoy boating on the water,” says Vandiver. “It was a breakthrough that allowed people to see the river as an amenity.”

    A team effort

    Marcia Manong had never seen the river, but the Malden resident was persuaded to join the steering committee with the promise the project would be inclusive and of value to the community. Manong has been involved with civic engagement most of her life in the United States and for 20 years in South Africa.

    “It wasn’t going to be a marginalized, tokenized engagement,” says Manong. “It was clear to me that they were looking for people that would actually be sitting at the table.”

    Manong agreed to recruit additional people of color to join the team. From the beginning, she says, language was a huge barrier, given that nearly half of Malden’s residents do not speak English at home. Finding the translation efforts at their public events to be inadequate, the steering committee directed more funds to be made available for translation in several languages when public meetings began being held over Zoom this past year.

    “It’s unusual for most cities to spend this money, but our population is so diverse that we require it,” says Manong. “We have to do it. If the steering committee wasn’t raising this issue with the rest of the team, perhaps this would be overlooked.”

    Another alteration the steering committee has made is how the project engages with the community. While public attendance at meetings had been successful before the pandemic, Manong says they are “constantly working” to reach new people. One method has been to request invitations to attend the virtual meetings of other organizations to keep them apprised of the project.

    “We’ve said that people feel most comfortable when they’re in their own surroundings, so why not go where the people are instead of trying to get them to where we are,” says Manong.

    Buoyed by the $100,000 grant from MIT’s Norman B. Leventhal Center for Advanced Urbanism (LCAU) in 2019, the project team worked with Malden’s Department of Public Works, which is located along the river, to redesign its site and buildings and to study how to create a flood-resistant public open space as well as an elevated greenway path connecting with neighboring cities’ paths. The park’s plans also call for 75 new trees to reduce the urban heat island effect, an open lawn for gathering, and a dock for boating on the river.

    “The storm water infrastructure in these cities is old and isn’t going to be able to keep up with increased precipitation,” says Adams. “We’re looking for ways to store as much water as possible on the DPW site so we can hold it and release it more gradually into the river to avoid flooding.”

    The project along the 2.3-mile-long river continues to receive attention. Recently, the city of Malden was awarded a 2021 Accelerating Climate Resilience Grant of more than $50,000 from the state’s Metropolitan Area Planning Council and the Barr Foundation to support the project. Last fall, the project was awarded a $150,015 Municipal Vulnerability Preparedness Action Grant. Both awards are being directed to fund engineering work to refine the project’s design.

    “We — and in general, the planning profession — are striving to create more community empowerment in decision-making as to what happens to their community,” says Spetrini. “Putting the power in the community ensures that it’s actually responding to the needs of the community.”

    Contagious enthusiasm

    Manong says she’s happy she got involved with the project and believes the new governance structure is making a difference.

    “This project is definitely engaging with communities of color in a manner that is transformative and that is looking to build a long-lasting power dynamic built on trust,” she says. “It’s a new energized civic engagement and we’re making that happen. It’s very exciting.”

    Spetrini finds the challenge of creating an open space that’s publicly accessible and alongside an active work site professionally compelling.

    “There is a way to preserve the industrial employment base while also giving the public greater access to this natural resource,” he says. “It has real implications for other communities to follow this type of model.”

    Despite the pandemic this past year, enthusiasm for the project is palpable. For Spetrini, a Malden resident, it’s building “the first significant piece of what has been envisioned as the Malden River Greenway.” Adams sees the total project as a way to build social resilience as well as garnering community interest in climate resilience. For Vandiver, it’s the implications for improved community access.

    “From a health standpoint, everybody has learned from Covid-19 that the health aspects of walking in nature are really restorative,” says Vandiver. “Creating greater green space gives more attention to health issues. These are seemingly small side benefits, but they’re huge for mental health benefits.”

    Leventhal City Prize’s next cycle

    The Leventhal City Prize was established by the LCAU to catalyze innovative, interdisciplinary urban design and planning approaches worldwide to improve both the environment and the quality of life for residents. Support for the LCAU was provided by the Muriel and Norman B. Leventhal Family Foundation and the Sherry and Alan Leventhal Family Foundation.

    “We’re thrilled with the inaugural recipients of the award and the extensive work they’ve undertaken, which is being held up as an exemplary model for others to learn from,” says Sarah Williams, LCAU director and a professor in DUSP. “Their work reflects the prize’s intent. We look forward to catalyzing these types of collaborative partnerships in the next prize cycle.”

    Submissions for the next cycle of the Leventhal City Prize will open in early 2022.