More stories

  • Urbanization: No fast lane to transformation

    Accra, Ghana, “is a city I’ve come to know as well as any place in the U.S.,” says Associate Professor Noah Nathan, who has conducted research there over the past 15 years. The booming capital of 4 million is an ideal laboratory for investigating the rapid urbanization of nations in Africa and beyond, believes Nathan, who joined the MIT Department of Political Science in July.

    “Accra is vibrant and exciting, with gleaming glass office buildings, shopping centers, and an emerging middle class,” he says. “But at the same time there is enormous poverty, with slums and a mixing pot of ethnic groups.” Cities like Accra that have emerged in developing countries around the world are “hybrid spaces” that provoke a multitude of questions for Nathan.

    “Rich and poor are in incredibly close proximity and I want to know how this dramatic inequality can be sustainable, and what politics looks like with such ethnic and class diversity living side-by-side,” he says.

    With his singular approach to data collection and deep understanding of Accra, its neighborhoods, and increasingly, its built environment, Nathan is generating a body of scholarship on the political impacts of urbanization throughout the global South.

    A trap in the urban transition

    Nathan’s early studies of Accra challenged common expectations about how urbanization shifts political behavior.

    “Modernization theory states that as people become more ‘modern’ and move to cities, ethnicity fades and class becomes the dominant dynamic in political behavior,” explains Nathan. “It predicts that the process of urbanization transforms the relationship between politicians and voters, and that elections become more ideologically and policy oriented.”

    But in Accra, the heart of one of the fastest-growing economies in the developing world, Nathan found “a type of politics stuck in an old equilibrium, hard to dislodge, and not updated by newly wealthy voters,” he says. Using census data revealing the demographic composition of every neighborhood in Accra, Nathan determined that there were many enclaves in which forms of patronage politics and ethnic competition persist. He conducted sample surveys and collected polling-station-level results on residents’ voting across the city. “I was able to merge spatial data on where people lived and their answers to survey questions, and determine how different neighborhoods voted,” says Nathan.

    Among his findings: Ethnic politics were thriving in many parts of Accra, and many middle-class voters were withdrawing from politics entirely in reaction to the well-established practice of patronage rather than pressuring politicians to change their approach. “They decided it was better to look out for themselves,” he explains.

    In Nathan’s 2019 book, “Electoral Politics and Africa’s Urban Transition: Class and Ethnicity in Ghana,” he described this situation as a trap. “As the wealthy exit from the state, politicians double down on patronage politics with poor voters, which the middle class views as further evidence of corruption,” he explains. The wealthier citizens “want more public goods, and big policy reforms, such as changes in the health-care and tax systems, while poor voters focus on immediate needs such as jobs, homes, better schools in their communities.”

    In Ghana and other developing countries where the state’s capacity is limited, politicians can’t deliver on the broad-scale changes desired by the middle class. Motivated by their own political survival, they continue dealing with poor voters as clients, trading services for votes. “I connect urban politics in Ghana to the early 20th-century urban machines in the United States, run by party bosses,” says Nathan.

    This may prove sobering news for many engaged with the developing world. “There’s enormous enthusiasm among foreign aid organizations, in the popular press and policy circles, for the idea that urbanization will usher in big, radical political change,” notes Nathan. “But these kinds of transformations will only come about with structural change such as civil service reforms and nonpartisan welfare programs that can push politicians beyond just delivering targeted services to poor voters.”

    Falling in love with Ghana

    For most of his youth, Nathan was a committed jazz saxophonist, toying with going professional. But he had long cultivated another fascination as well. “I was a huge fan of ‘The West Wing’ in middle school, and got into American politics through that,” he says. He volunteered in Hillary Clinton’s 2008 primary campaign during college, but soon realized work in politics was “both more boring and not as idealistic” as he’d hoped.

    As an undergraduate at Harvard University, where he concentrated in government, he “signed up for African history on a lark — because American high schools didn’t teach anything on the subject — and I loved it,” Nathan says. He took another African history course, and then found his way to classes taught by Harvard political scientist Robert H. Bates PhD ’69 that focused on the political economy of development, ethnic conflict, and state failure in Africa. In the summer before his senior year, he served as a research assistant for one of his professors in Ghana, and then stayed longer, hoping to map out a senior thesis on ethnic conflict.

    “Once I got to Ghana, I was fascinated by the place — the dynamism of this rapidly transforming society,” he recalls. “Growing up in the U.S., there are a lot of stereotypes about the developing world, and I quickly realized how much more complicated everything is.”

    These initial experiences living in Ghana shaped Nathan’s ideas for what became his doctoral dissertation at Harvard and first book on the ethnic and class dynamics driving the nation’s politics. His frequent return visits to that country sparked a wealth of research that built on and branched out from this work.

    One set of studies examines the historical development of Ghana’s rural north — the center of ethnic conflict in the 1990s — across its colonial and post-colonial periods. These are communities “where the state delivers few resources, doesn’t seem to do much, yet figures as a central actor in people’s lives,” he says.

    Part of this region had been a German colony, while the other part was originally under British rule. Nathan compared the political trajectories of these two areas, focusing on differences in early state efforts to impose new forms of local political leadership and gradually build a formal education system.

    “The colonial legacy in the British areas was elite families who came to dominate, entrenching themselves and creating political dynasties and economic inequality,” says Nathan. But similar ethnic groups exposed to different state policies in the original German colony were not riven with the same class inequalities, and enjoy better access to government services today. “This research is changing how we think about state weakness in the developing world, how we tend to see the emergence of inequality where societal elites come into power,” he says. The results of Nathan’s research will be published in a forthcoming book, “The Scarce State: Inequality and Political Power in the Hinterland.”

    Politics of built spaces

    At MIT, Nathan is pivoting to a fresh framing for questions on urbanization. Working from a publicly available map of cities around the world, he is scrutinizing the geometry of street grids in 1,000 of sub-Saharan Africa’s largest cities “to think about urban order,” he says. By digitizing historical street maps of African cities from the Library of Congress’s map collection, he can examine how these cities were built and evolved physically. “When cities emerge based on grids, rather than tangles, they are more legible to governments,” he says. “This means that it’s easier to find people, easier to govern, tax, repress, and politically mobilize them.”
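
    A common way to quantify this notion of legibility is the entropy of street-segment orientations: a gridded city concentrates bearings around two perpendicular directions, while a tangled one spreads them out. Below is a minimal, illustrative sketch of that idea in Python — the toy bearing data and bin count are my assumptions, not Nathan’s actual pipeline.

    ```python
    import numpy as np

    def orientation_entropy(bearings_deg, n_bins=36):
        """Shannon entropy of street-segment bearings, folded to 0-180 degrees.
        Low entropy suggests a grid-like (legible) layout; high entropy, a tangle."""
        folded = np.asarray(bearings_deg) % 180.0          # opposite directions count as one
        counts, _ = np.histogram(folded, bins=n_bins, range=(0.0, 180.0))
        p = counts / counts.sum()
        p = p[p > 0]                                       # skip empty bins
        return float(-(p * np.log(p)).sum())

    # Toy data: a near-perfect grid versus an irregular tangle
    grid = [0, 90, 0, 90, 1, 89, 0, 90]
    tangle = np.random.default_rng(0).uniform(0, 180, size=200)
    print(orientation_entropy(grid))    # low: bearings cluster in two directions
    print(orientation_entropy(tangle))  # high: approaches log(n_bins)
    ```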

    Nathan has begun to demonstrate that in the post-colonial period, “cities that were built under authoritarian regimes tend to be most legible, with even low-capacity regimes trying to impose control and make them gridded.” Democratic governments, he says, “lead to more tangled and chaotic built environments, with people doing what they want.” He also draws comparisons to how state policies shaped urban growth in the United States, with local and federal governments exerting control over neighborhood development, leading to redlining and segregation in many cities.

    Nathan’s interests naturally pull him toward the MIT Governance Lab and Global Diversity Lab. “I’m hoping to dive into both,” he says. “One big attraction of the department is the really interesting research that’s being done on developing countries.” He also plans to use the stature he has built over many years of research in Africa to help “open doors” to African researchers and students, who may not always get the same kind of access to institutions and data that he has had. “I’m hoping to build connections to researchers in the global South,” he says.

  • Coordinating climate and air-quality policies to improve public health

    As America’s largest investment to fight climate change, the Inflation Reduction Act positions the country to reduce its greenhouse gas emissions by an estimated 40 percent below 2005 levels by 2030. But as it edges the United States closer to achieving its international climate commitment, the legislation is also expected to yield significant — and more immediate — improvements in the nation’s health. If successful in accelerating the transition from fossil fuels to clean energy alternatives, the IRA will sharply reduce atmospheric concentrations of fine particulates known to exacerbate respiratory and cardiovascular disease and cause premature deaths, along with other air pollutants that degrade human health. One recent study shows that eliminating air pollution from fossil fuels in the contiguous United States would prevent more than 50,000 premature deaths and avoid more than $600 billion in health costs each year.

    While national climate policies such as those advanced by the IRA can simultaneously help mitigate climate change and improve air quality, their results may vary widely when it comes to improving public health. That’s because the potential health benefits associated with air quality improvements are much greater in some regions and economic sectors than in others. Those benefits can be maximized, however, through a prudent combination of climate and air-quality policies.

    Several past studies have evaluated the likely health impacts of various policy combinations, but their usefulness has been limited due to a reliance on a small set of standard policy scenarios. More versatile tools are needed to model a wide range of climate and air-quality policy combinations and assess their collective effects on air quality and human health. Now researchers at the MIT Joint Program on the Science and Policy of Global Change and the MIT Institute for Data, Systems, and Society (IDSS) have developed a publicly available, flexible scenario tool that does just that.

    In a study published in the journal Geoscientific Model Development, the MIT team introduces its Tool for Air Pollution Scenarios (TAPS), which can be used to estimate the likely air-quality and health outcomes of a wide range of climate and air-quality policies at the regional, sectoral, and fuel-based level. 

    “This tool can help integrate the siloed sustainability issues of air pollution and climate action,” says the study’s lead author William Atkinson, who recently served as a Biogen Graduate Fellow and research assistant at the IDSS Technology and Policy Program’s (TPP) Research to Policy Engagement Initiative. “Climate action does not guarantee a clean air future, and vice versa — but the issues have similar sources that imply shared solutions if done right.”

    The study’s initial application of TAPS shows that with current air-quality policies and near-term Paris Agreement climate pledges alone, short-term pollution reductions give way to long-term increases — given the expected growth of emissions-intensive industrial and agricultural processes in developing regions. More ambitious climate and air-quality policies could be complementary, each reducing different pollutants substantially to give tremendous near- and long-term health benefits worldwide.

    “The significance of this work is that we can more confidently identify the long-term emission reduction strategies that also support air quality improvements,” says MIT Joint Program Deputy Director C. Adam Schlosser, a co-author of the study. “This is a win-win for setting climate targets that are also healthy targets.”

    TAPS projects air quality and health outcomes based on three integrated components: a recent global inventory of detailed emissions resulting from human activities (e.g., fossil fuel combustion, land-use change, industrial processes); multiple scenarios of emissions-generating human activities between now and the year 2100, produced by the MIT Economic Projection and Policy Analysis model; and emissions intensity (emissions per unit of activity) scenarios based on recent data from the Greenhouse Gas and Air Pollution Interactions and Synergies model.
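
    In outline, the three components combine multiplicatively: future emissions for a given region, sector, and fuel are the base-year inventory scaled by projected activity growth and by projected emissions intensity. The sketch below illustrates that accounting with pandas — the table, regions, and multipliers are invented for illustration, not TAPS’s actual data model.

    ```python
    import pandas as pd

    # Illustrative base-year inventory (kilotonnes of a pollutant)
    inventory = pd.DataFrame({
        "region": ["USA", "USA", "IND"],
        "sector": ["power", "transport", "power"],
        "fuel":   ["coal", "oil", "coal"],
        "base_emissions": [1200.0, 800.0, 1500.0],
    })

    # Hypothetical 2050 scenario multipliers relative to the base year
    activity_growth  = {"power": 1.4, "transport": 1.6}   # units of economic activity
    intensity_change = {"coal": 0.5, "oil": 0.8}          # emissions per unit of activity

    inventory["emissions_2050"] = (
        inventory["base_emissions"]
        * inventory["sector"].map(activity_growth)
        * inventory["fuel"].map(intensity_change)
    )
    print(inventory)
    ```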

    “We see the climate crisis as a health crisis, and believe that evidence-based approaches are key to making the most of this historic investment in the future, particularly for vulnerable communities,” says Johanna Jobin, global head of corporate reputation and responsibility at Biogen. “The scientific community has spoken with unanimity and alarm that not all climate-related actions deliver equal health benefits. We’re proud of our collaboration with the MIT Joint Program to develop this tool that can be used to bridge research-to-policy gaps, support policy decisions to promote health among vulnerable communities, and train the next generation of scientists and leaders for far-reaching impact.”

    The tool can inform decision-makers about a wide range of climate and air-quality policies. Policy scenarios can be applied to specific regions, sectors, or fuels to investigate policy combinations at a more granular level, or to target short-term actions with high-impact benefits.

    TAPS could be further developed to account for additional emissions sources and trends.

    “Our new tool could be used to examine a large range of both climate and air quality scenarios. As the framework is expanded, we can add detail for specific regions, as well as additional pollutants such as air toxics,” says study supervising co-author Noelle Selin, professor at IDSS and the MIT Department of Earth, Atmospheric and Planetary Sciences, and director of TPP.    

    This research was supported by the U.S. Environmental Protection Agency and its Science to Achieve Results (STAR) program; Biogen; TPP’s Leading Technology and Policy Initiative; and TPP’s Research to Policy Engagement Initiative.

  • Making each vote count

    Graduate student Jacob Jaffe wants to improve the administration of American elections. To do that, he is posing “questions in political science that we haven’t been asking enough,” he says, “and solving them with methods we haven’t been using enough.”

    Considerable research has been devoted to understanding “who votes, and what makes people vote or not vote,” says Jaffe. He is training his attention on questions of a different nature: Does providing practical information to voters about how to cast their ballots change how they will vote? Is it possible to increase the accuracy of vote-counting, on a state-by-state and even precinct-by-precinct basis? How do voters experience polling places? These problems form the core of his dissertation.

    Taking advantage of the resources at the MIT Election Data and Science Lab, where he serves as a researcher, Jaffe conducts novel field experiments to gather highly detailed information on local, state, and federal elections, and analyzes this trove with advanced statistical techniques. Whether investigating the probability of miscounts in voting, or the possibility of changing a voter’s mode of voting, Jaffe intends to strengthen the scaffolding that supports representative government. “Elections are both theoretically and normatively important; they’re the basis of our belief in the moral rightness of the state to do the things the state does,” he says.

    Click this link

    For one of his keystone projects, Jaffe seized a unique opportunity to run a big field experiment. In summer 2020, at the height of the Covid-19 pandemic, he emailed 80,000 Floridians instructions on how to vote in an upcoming primary by mail. His email contained a link enabling recipients to fill out two simple questions to receive a ballot. “I wanted to learn if this was an effective method for getting people to vote by mail, and I proved it is, statistically,” he says. “This is important to know because if elections are held in times when we might need people to vote nonlocally or vote using one method over another — if they’re displaced by a hurricane or another emergency, for instance — I learned that we can effect a new vote mode practically and quickly.”
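
    The standard way to establish that such an encouragement worked is to randomize who receives the email and compare vote-by-mail uptake across treatment and control groups. A minimal sketch of that comparison follows — the counts are hypothetical placeholders, not Jaffe’s data.

    ```python
    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical counts: mail-ballot requests among emailed vs. non-emailed voters
    requested  = [4100, 3500]       # treatment, control
    group_size = [40000, 40000]

    z, p_value = proportions_ztest(requested, group_size)
    print(f"z = {z:.2f}, p = {p_value:.4f}")  # a small p-value indicates the email shifted vote mode
    ```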

    One of Jaffe’s insights from this experiment is that “people do read their voting-related emails, but the content of the email has to be something they can act on proximately,” he says. “A message reminding them to vote two weeks from now is not so helpful.” The lower the burden on an individual to participate in voting, whether due to proximity to a polling site or instructions on how to receive and cast a ballot, the greater the likelihood of that person engaging in the election.

    “If we want people to vote by mail, we need to reduce the informational cost so it’s easier for voters to understand how the system works,” he says.

    Another significant research thrust for Jaffe involves scrutinizing accuracy in vote counting, using instances of recounts in presidential elections. Ensuring each vote counts, he says, “is one of the most fundamental questions in democracy.”

    With access to data from 20 elections in 2020, Jaffe is comparing the original vote totals for each candidate to the recounted, corrected tally on a precinct-by-precinct basis. “Using original combinatorial techniques, I can estimate the probability of miscounting ballots,” he says. The ultimate goal is to generate a granular picture of the efficacy of election administration across the country.
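
    To see the quantity being estimated, imagine each ballot having some small, unknown probability of being miscounted; precinct-level discrepancies between the original count and the recount then identify that rate. The toy sketch below uses a simple pooled binomial estimate — an illustration of the target quantity, not Jaffe’s combinatorial method, and the numbers are invented.

    ```python
    import pandas as pd

    precincts = pd.DataFrame({
        "ballots":     [1200, 950, 2100, 700],   # hypothetical ballots cast per precinct
        "discrepancy": [3, 1, 6, 0],             # |original tally - recount tally|
    })

    # Pooled maximum-likelihood estimate of the per-ballot miscount rate
    rate = precincts["discrepancy"].sum() / precincts["ballots"].sum()
    print(f"Estimated miscount probability per ballot: {rate:.5f}")
    ```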

    “It varies a lot by state, and most states do a good job,” he says. States that take their time in counting perform better. “There’s a phenomenon where some towns race to get results in as quickly as possible, and this affects their accuracy.”

    In spite of the bright spots, Jaffe sees chronic underfunding of American elections. “We need to give local administrators the resources, the time and money to fund employees to do their jobs,” he says. The worse the situation is, “the more likely that elections will be called wrong, with no one knowing.” Jaffe believes that his analysis can offer states useful information for improving election administration. “Determining how good a place is historically at counting ballots can help determine the likelihood of needing costly recounts in future elections,” he says.

    The ballot box and beyond

    It didn’t take Jaffe long to decide on a life dedicated to studying politics. Part of a Boston-area family who, he says, “liked discussing what was going on in the world,” he had his own subscriptions to Time magazine at age 9, and to The Economist in middle school. During high school, he volunteered for then-Massachusetts Representative Barney Frank and Senator John Kerry, working on constituent services. At Rice University, he interned all four years with political scientist Robert M. Stein, an expert on voting and elections. With Stein’s help, Jaffe landed a position the summer before his senior year with the Department of Justice (DOJ), researching voting rights cases.

    “The experience was fascinating, and the work felt super important,” says Jaffe. His portfolio involved determining whether legal challenges to particular elections met the statistical standard for racial gerrymandering. “I had to answer hard quantitative questions about the relationship between race and voting in an area, and whether minority candidates were systematically prevented from winning,” he says.

    But while Jaffe cared a lot about this work, he didn’t feel adequately challenged. “As a 21-year-old at DOJ, I learned that I could address problems in the world using statistics,” he says. “But I felt I could have a greater impact addressing tougher questions outside of voting rights.”

    Jaffe was drawn to political science at MIT, and specifically to the research of Charles Stewart III, the Kenan Sahin Distinguished Professor of Political Science, director of the MIT Election Lab, and head of Jaffe’s thesis committee. It wasn’t just the opportunity to plumb the lab’s singular repository of voting data that attracted Jaffe, but its commitment to making every vote count. For Jaffe, this was a call to arms to investigate the many, sometimes quotidian, obstacles between citizens and ballot boxes.

    To this end, he has been analyzing, with the help of mathematical methods from queuing theory, why some elections involve wait lines of six hours and longer at polling sites. “We know that simpler ballots mean people don’t get stuck in these lines, where they might potentially give up before voting,” he says. “Looking at the content of ballots and the interval between voter check-in and check-out, I learned that adding races, rather than candidates, to a ballot means that people take more time completing ballots, leading to interminable lines.”
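
    Queuing theory makes the mechanism concrete: if extra races lengthen the average time each voter occupies a booth, expected waits grow nonlinearly as utilization approaches 1. The illustration below uses the textbook M/M/1 waiting-time formula, Wq = ρ/(μ − λ), with hypothetical arrival and service rates; it is a simplification of polling-place dynamics, not Jaffe’s model.

    ```python
    def mm1_wait_minutes(arrivals_per_hour, service_minutes):
        """Average queue wait for an M/M/1 system: Wq = rho / (mu - lam)."""
        lam = arrivals_per_hour / 60.0   # arrival rate, voters per minute
        mu = 1.0 / service_minutes       # service rate, voters per minute
        rho = lam / mu                   # booth utilization
        if rho >= 1:
            return float("inf")          # the line grows without bound
        return rho / (mu - lam)

    # Same arrival rate; a longer ballot stretches service from 4 to 5.5 minutes
    for minutes in (4.0, 5.0, 5.5):
        print(minutes, round(mm1_wait_minutes(10, minutes), 1))  # waits: ~8, ~25, ~60 min
    ```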

    A key takeaway from his ensemble of studies is that “while it’s relatively rare that elections are bad, we shouldn’t think that we’re good to go,” he says. “Instead, we need to be asking under what conditions do things get bad, and how can we make them better.”

  • Q&A: Global challenges surrounding the deployment of AI

    The AI Policy Forum (AIPF) is an initiative of the MIT Schwarzman College of Computing to move the global conversation about the impact of artificial intelligence from principles to practical policy implementation. Formed in late 2020, AIPF brings together leaders in government, business, and academia to develop approaches to address the societal challenges posed by the rapid advances and increasing applicability of AI.

    The co-chairs of the AI Policy Forum are Aleksander Madry, the Cadence Design Systems Professor; Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science; and Luis Videgaray, senior lecturer at MIT Sloan School of Management and director of MIT AI Policy for the World Project. Here, they discuss some of the key issues facing the AI policy landscape today and the challenges surrounding the deployment of AI. The three are co-organizers of the upcoming AI Policy Forum Summit on Sept. 28, which will further explore the issues discussed here.

    Q: Can you talk about the ongoing work of the AI Policy Forum and the AI policy landscape generally?

    Ozdaglar: There is no shortage of discussion about AI at different venues, but conversations are often high-level, focused on questions of ethics and principles, or on policy problems alone. The approach the AIPF takes to its work is to target specific questions with actionable policy solutions and engage with the stakeholders working directly in these areas. We work “behind the scenes” with smaller focus groups to tackle these challenges and aim to bring visibility to some potential solutions alongside the players working directly on them through larger gatherings.

    Q: AI impacts many sectors, which makes us naturally worry about its trustworthiness. Are there any emerging best practices for development and deployment of trustworthy AI?

    Madry: The most important thing to understand regarding deploying trustworthy AI is that AI technology isn’t some natural, preordained phenomenon. It is something built by people. People who are making certain design decisions.

    We thus need to advance research that can guide these decisions as well as provide more desirable solutions. But we also need to be deliberate and think carefully about the incentives that drive these decisions. 

    Now, these incentives stem largely from business considerations, but not exclusively so. That is, we should also recognize that proper laws and regulations, as well as thoughtful industry standards, have a big role to play here too.

    Indeed, governments can put in place rules that prioritize the value of deploying AI while being keenly aware of the corresponding downsides, pitfalls, and impossibilities. The design of such rules will be an ongoing and evolving process as the technology continues to improve and change, and we need to adapt to socio-political realities as well.

    Q: Perhaps one of the most rapidly evolving domains in AI deployment is in the financial sector. From a policy perspective, how should governments, regulators, and lawmakers make AI work best for consumers in finance?

    Videgaray: The financial sector is seeing a number of trends that present policy challenges at the intersection of AI systems. For one, there is the issue of explainability. By law (in the U.S. and in many other countries), lenders need to provide explanations to customers when they take actions adverse to a customer’s interest, such as denying a loan. However, as financial services increasingly rely on automated systems and machine learning models, the capacity of banks to unpack the “black box” of machine learning to provide that level of mandated explanation becomes tenuous. So how should the finance industry and its regulators adapt to this advance in technology? Perhaps we need new standards and expectations, as well as tools to meet these legal requirements.

    Meanwhile, economies of scale and data network effects are leading to a proliferation of AI outsourcing, and more broadly, AI-as-a-service is becoming increasingly common in the finance industry. In particular, we are seeing fintech companies provide the tools for underwriting to other financial institutions — be it large banks or small, local credit unions. What does this segmentation of the supply chain mean for the industry? Who is accountable for the potential problems in AI systems deployed through several layers of outsourcing? How can regulators adapt to guarantee their mandates of financial stability, fairness, and other societal standards?

    Q: Social media is one of the most controversial sectors of the economy, resulting in many societal shifts and disruptions around the world. What policies or reforms might be needed to best ensure social media is a force for public good and not public harm?

    Ozdaglar: The role of social media in society is of growing concern to many, but the nature of these concerns can vary quite a bit — with some seeing social media as not doing enough to prevent, for example, misinformation and extremism, and others seeing it as unduly silencing certain viewpoints. This lack of a unified view on what the problem is impacts the capacity to enact any change. All of that is additionally coupled with the complexities of the legal framework in the U.S. spanning the First Amendment, Section 230 of the Communications Decency Act, and trade laws.

    However, these difficulties in regulating social media do not mean that there is nothing to be done. Indeed, regulators have begun to tighten their control over social media companies, both in the United States and abroad, be it through antitrust procedures or other means. In particular, Ofcom in the U.K. and the European Union are already introducing new layers of oversight to platforms. Additionally, some have proposed taxes on online advertising to address the negative externalities caused by the current social media business model. So, the policy tools are there, if the political will and proper guidance exist to implement them.

  • Empowering Cambridge youth through data activism

    For over 40 years, the Mayor’s Summer Youth Employment Program (MSYEP, or the Mayor’s Program) in Cambridge, Massachusetts, has been providing teenagers with their first work experience, but 2022 brought a new offering. Collaborating with MIT’s Personal Robots research group (PRG) and Responsible AI for Social Empowerment and Education (RAISE) this summer, MSYEP created a STEAM-focused learning site at the Institute. Eleven students joined the program to learn coding and programming skills through the lens of “Data Activism.”

    MSYEP’s partnership with MIT provides an opportunity for Cambridge high schoolers to gain exposure to more pathways for their future careers and education. The Mayor’s Program aims to respect students’ time and show the value of their work, so participants are compensated with an hourly wage as they learn workforce skills at MSYEP worksites. In conjunction with two ongoing research studies at MIT, PRG and RAISE developed the six-week Data Activism curriculum to equip students with critical-thinking skills so they feel prepared to utilize data science to challenge social injustice and empower their community.

    Rohan Kundargi, K-12 Community Outreach Administrator for MIT Office of Government and Community Relations (OGCR), says, “I see this as a model for a new type of partnership between MIT and Cambridge MSYEP. Specifically, an MIT research project that involves students from Cambridge getting paid to learn, research, and develop their own skills!”

    Cross-Cambridge collaboration

    Cambridge’s Office of Workforce Development initially contacted MIT OGCR about hosting a potential MSYEP worksite that taught Cambridge teens how to code. When Kundargi reached out to MIT pK-12 collaborators, MIT PRG’s graduate research assistant Raechel Walker proposed the Data Activism curriculum. Walker defines “data activism” as utilizing data, computing, and art to analyze how power operates in the world, challenge power, and empathize with people who are oppressed.

    Walker says, “I wanted students to feel empowered to incorporate their own expertise, talents, and interests into every activity. In order for students to fully embrace their academic abilities, they must remain comfortable with bringing their full selves into data activism.”

    As Kundargi and Walker recruited students for the Data Activism learning site, they wanted to make sure the cohort of students — the majority of whom are individuals of color — felt represented at MIT and felt they had the agency for their voice to be heard. “The pioneers in this field are people who look like them,” Walker says, speaking of well-known data activists Timnit Gebru, Rediet Abebe, and Joy Buolamwini.

    When the program began this summer, some of the students were not aware of the ways data science and artificial intelligence exacerbate systemic oppression in society, or some of the tools currently being used to mitigate those societal harms. As a result, Walker says, the students wanted to learn more about discriminatory design in every aspect of life. They were also interested in creating responsible machine learning algorithms and AI fairness metrics.

    A different side of STEAM

    The development and execution of the Data Activism curriculum contributed to Walker’s and postdoc Xiaoxue Du’s respective research at PRG. Walker is studying AI education, specifically creating and teaching data activism curricula for minoritized communities. Du’s research explores processes, assessments, and curriculum design that prepares educators to use, adapt, and integrate AI literacy curricula. Additionally, her research targets how to leverage more opportunities for students with diverse learning needs.

    The Data Activism curriculum utilizes a “liberatory computing” framework, a term Walker coined in her position paper with Professor Cynthia Breazeal, director of MIT RAISE, dean for digital learning, and head of PRG, and Eman Sherif, a then-undergraduate researcher from University of California at San Diego, titled “Liberty Computing for African American Students.” This framework ensures that students, especially minoritized students, acquire a sound racial identity, critical consciousness, collective obligation, and a liberation-centered academic/achievement identity, as well as the activism skills to use computing to transform a multilayered system of barriers in which racism persists. Walker says, “We encouraged students to demonstrate competency in every pillar because all of the pillars are interconnected and build upon each other.”

    Walker developed a series of interactive coding and project-based activities that focused on understanding systemic racism, utilizing data science to analyze systemic oppression, data drawing, responsible machine learning, how racism can be embedded into AI, and different AI fairness metrics.

    This was the students’ first time learning how to create data visualizations using the programming language Python and the data analysis tool Pandas. In one project meant to examine how different systems of oppression can affect different aspects of students’ own identities, students created datasets with data from their respective intersectional identities. Another activity highlighted African American achievements, where students analyzed two datasets about African American scientists, activists, artists, scholars, and athletes. Using the data visualizations, students then created zines about the African Americans who inspired them.
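
    In the spirit of that first exercise, here is a minimal pandas-and-matplotlib example of the kind of visualization involved — the dataset is an invented stand-in for the ones the students built.

    ```python
    import pandas as pd
    import matplotlib.pyplot as plt

    # Invented stand-in for a student-built dataset of notable African Americans
    people = pd.DataFrame({
        "name":  ["Scientist A", "Activist B", "Artist C", "Scholar D", "Athlete E"],
        "field": ["science", "activism", "art", "scholarship", "athletics"],
        "born":  [1931, 1942, 1958, 1970, 1985],
    })

    # Count of people per field, drawn as a simple bar chart
    people.groupby("field").size().plot(kind="bar", title="People per field")
    plt.tight_layout()
    plt.show()
    ```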

    RAISE hired Olivia Dias, Sophia Brady, Lina Henriquez, and Zeynep Yalcin through the MIT Undergraduate Research Opportunity Program (UROP) and PRG hired freelancer Matt Taylor to work with Walker on developing the curriculum and designing interdisciplinary experience projects. Walker and the four undergraduate researchers constructed an intersectional data analysis activity about different examples of systemic oppression. PRG also hired three high school students to test activities and offer insights about making the curriculum engaging for program participants. Throughout the program, the Data Activism team taught students in small groups, continually asked students how to improve each activity, and structured each lesson based on the students’ interests. Walker says Dias, Brady, Henriquez, and Yalcin were invaluable to cultivating a supportive classroom environment and helping students complete their projects.

    Cambridge Rindge and Latin School senior Nina works on her rubber block stamp that depicts the importance of representation in media and greater representation in the tech industry.

    Photo: Katherine Ouellette


    Student Nina says, “It’s opened my eyes to a different side of STEM. I didn’t know what ‘data’ meant before this program, or how intersectionality can affect AI and data.” Before MSYEP, Nina took Intro to Computer Science and AP Computer Science, but she has been coding since Girls Who Code first sparked her interest in middle school. “The community was really nice. I could talk with other girls. I saw there needs to be more women in STEM, especially in coding.” Now she’s interested in applying to colleges with strong computer science programs so she can pursue a coding-related career.

    From MSYEP to the mayor’s office

    Mayor Sumbul Siddiqui visited the Data Activism learning site on Aug. 9, accompanied by Breazeal. A graduate of MSYEP herself, Siddiqui says, “Through hands-on learning through computer programming, Cambridge high school students have the unique opportunity to see themselves as data scientists. Students were able to learn ways to combat discrimination that occurs through artificial intelligence.” In an Instagram post, Siddiqui also said, “I had a blast visiting the students and learning about their projects.”

    Students worked on an activity that asked them to envision how data science might be used to support marginalized communities. They transformed their answers into block-printed T-shirt designs, carving pictures of their hopes into rubber block stamps. Some students focused on the importance of data privacy, like Jacob T., who drew a birdcage to represent data stored and locked away by third-party apps. He says, “I want to open that cage and restore my data to myself and see what can be done with it.”

    The subject of Cambridge Community Charter School student Jacob T.’s project was the importance of data privacy. For his T-shirt design, he drew a birdcage to represent data stored and locked away by third-party apps. (From right to left:) Breazeal, Jacob T., Kiki, Raechel Walker, and Zeynep Yalcin.

    Photo: Katherine Ouellette


    Many students wanted to see more representation in both the media they consume and across various professional fields. Nina talked about the importance of representation in media and how that could contribute to greater representation in the tech industry, while Kiki talked about encouraging more women to pursue STEM fields. Jesmin said, “I wanted to show that data science is accessible to everyone, no matter their origin or language you speak. I wrote ‘hello’ in Bangla, Arabic, and English, because I speak all three languages and they all resonate with me.”

    Student Jesmin (left) explains the concept of her T-shirt design to Mayor Siddiqui. She wants data science to be accessible to everyone, no matter their origin or language, so she drew a globe and wrote ‘hello’ in the three languages she speaks: Bangla, Arabic, and English.

    Photo: Katherine Ouellette


    “Overall, I hope the students continue to use their data activism skills to re-envision a society that supports marginalized groups,” says Walker. “Moreover, I hope they are empowered to become data scientists and understand how their race can be a positive part of their identity.”

  • Exploring emerging topics in artificial intelligence policy

    Members of the public sector, private sector, and academia convened for the second AI Policy Forum Symposium last month to explore critical directions and questions posed by artificial intelligence in our economies and societies.

    The virtual event, hosted by the AI Policy Forum (AIPF) — an undertaking by the MIT Schwarzman College of Computing to bridge high-level principles of AI policy with the practices and trade-offs of governing — brought together an array of distinguished panelists to delve into four cross-cutting topics: law, auditing, health care, and mobility.

    In the last year there have been substantial changes in the regulatory and policy landscape around AI in several countries — most notably in Europe with the development of the European Union Artificial Intelligence Act, the first attempt by a major regulator to propose a law on artificial intelligence. In the United States, the National AI Initiative Act of 2020, which became law in January 2021, is providing a coordinated program across the federal government to accelerate AI research and application for economic prosperity and security gains. Finally, China recently advanced several new regulations of its own.

    Each of these developments represents a different approach to legislating AI, but what makes a good AI law? And when should AI legislation be based on binding rules with penalties versus establishing voluntary guidelines?

    Jonathan Zittrain, professor of international law at Harvard Law School and director of the Berkman Klein Center for Internet and Society, says the self-regulatory approach taken during the expansion of the internet had its limitations with companies struggling to balance their interests with those of their industry and the public.

    “One lesson might be that actually having representative government take an active role early on is a good idea,” he says. “It’s just that they’re challenged by the fact that there appears to be two phases in this environment of regulation. One, too early to tell, and two, too late to do anything about it. In AI I think a lot of people would say we’re still in the ‘too early to tell’ stage but given that there’s no middle zone before it’s too late, it might still call for some regulation.”

    A theme that came up repeatedly throughout the first panel on AI laws — a conversation moderated by Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and chair of the AI Policy Forum — was the notion of trust. “If you told me the truth consistently, I would say you are an honest person. If AI could provide something similar, something that I can say is consistent and is the same, then I would say it’s trusted AI,” says Bitange Ndemo, professor of entrepreneurship at the University of Nairobi and the former permanent secretary of Kenya’s Ministry of Information and Communication.

    Eva Kaili, vice president of the European Parliament, adds that “In Europe, whenever you use something, like any medication, you know that it has been checked. You know you can trust it. You know the controls are there. We have to achieve the same with AI.” Kaili further stresses that building trust in AI systems will not only lead to people using more applications in a safe manner, but that AI itself will reap benefits as greater amounts of data will be generated as a result.

    The rapidly increasing applicability of AI across fields has prompted the need to address both the opportunities and challenges of emerging technologies and the impact they have on social and ethical issues such as privacy, fairness, bias, transparency, and accountability. In health care, for example, new techniques in machine learning have shown enormous promise for improving quality and efficiency, but questions of equity, data access and privacy, safety and reliability, and immunology and global health surveillance remain open.

    MIT’s Marzyeh Ghassemi, an assistant professor in the Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science, and David Sontag, an associate professor of electrical engineering and computer science, collaborated with Ziad Obermeyer, an associate professor of health policy and management at the University of California Berkeley School of Public Health, to organize AIPF Health Wide Reach, a series of sessions to discuss issues of data sharing and privacy in clinical AI. The organizers assembled experts devoted to AI, policy, and health from around the world with the goal of understanding what can be done to decrease barriers to access to high-quality health data to advance more innovative, robust, and inclusive research results while being respectful of patient privacy.

    Over the course of the series, members of the group presented on a topic of expertise and were tasked with proposing concrete policy approaches to the challenge discussed. Drawing on these wide-ranging conversations, participants unveiled their findings during the symposium, covering nonprofit and government success stories and limited access models; upside demonstrations; legal frameworks, regulation, and funding; technical approaches to privacy; and infrastructure and data sharing. The group then discussed some of their recommendations that are summarized in a report that will be released soon.

    One of the findings calls for the need to make more data available for research use. Recommendations that stem from this finding include updating regulations to promote data sharing and enable easier access to safe harbors, such as the one the Health Insurance Portability and Accountability Act (HIPAA) provides for de-identification, as well as expanding funding for private health institutions to curate datasets, amongst others. Another finding, to remove barriers to data for researchers, supports a recommendation to decrease obstacles to research and development on federally created health data. “If this is data that should be accessible because it’s funded by some federal entity, we should easily establish the steps that are going to be part of gaining access to that so that it’s a more inclusive and equitable set of research opportunities for all,” says Ghassemi. The group also recommends taking a careful look at the ethical principles that govern data sharing. While there are already many principles proposed around this, Ghassemi says that “obviously you can’t satisfy all levers or buttons at once, but we think that this is a trade-off that’s very important to think through intelligently.”

    In addition to law and health care, other facets of AI policy explored during the event included auditing and monitoring AI systems at scale, and the role AI plays in mobility and the range of technical, business, and policy challenges for autonomous vehicles in particular.

    The AI Policy Forum Symposium was an effort to bring together communities of practice with the shared aim of designing the next chapter of AI. In his closing remarks, Aleksander Madry, the Cadence Design Systems Professor of Computing at MIT and faculty co-lead of the AI Policy Forum, emphasized the importance of collaboration and the need for different communities to communicate with each other in order to truly make an impact in the AI policy space.

    “The dream here is that we all can meet together — researchers, industry, policymakers, and other stakeholders — and really talk to each other, understand each other’s concerns, and think together about solutions,” Madry said. “This is the mission of the AI Policy Forum and this is what we want to enable.”

  • Zero-trust architecture may hold the answer to cybersecurity insider threats

    For years, organizations have taken a defensive “castle-and-moat” approach to cybersecurity, seeking to secure the perimeters of their networks to block out any malicious actors. Individuals with the right credentials were assumed to be trustworthy and allowed access to a network’s systems and data without having to reauthorize themselves at each access attempt. However, organizations today increasingly store data in the cloud and allow employees to connect to the network remotely, both of which create vulnerabilities to this traditional approach. A more secure future may require a “zero-trust architecture,” in which users must prove their authenticity each time they access a network application or data.

    In May 2021, President Joe Biden’s Executive Order on Improving the Nation’s Cybersecurity outlined a goal for federal agencies to implement zero-trust security. Since then, MIT Lincoln Laboratory has been performing a study on zero-trust architectures, with the goals of reviewing their implementation in government and industry, identifying technical gaps and opportunities, and developing a set of recommendations for the United States’ approach to a zero-trust system.

    The study team’s first step was to define the term “zero trust” and understand the misperceptions in the field surrounding the concept. Some of these misperceptions suggest that a zero-trust architecture requires entirely new equipment to implement, or that it makes systems so “locked down” they’re not usable. 

    “Part of the reason why there is a lot of confusion about what zero trust is, is because it takes what the cybersecurity world has known about for many years and applies it in a different way,” says Jeffrey Gottschalk, the assistant head of Lincoln Laboratory’s Cyber Security and Information Sciences Division and the study’s co-lead. “It is a paradigm shift in terms of how to think about security, but holistically it takes a lot of things that we already know how to do — such as multi-factor authentication, encryption, and software-defined networking — and combines them in different ways.”

    Presentation: Overview of Zero Trust Architectures

    Recent high-profile cybersecurity incidents — such as those involving the National Security Agency, the U.S. Office of Personnel Management, Colonial Pipeline, SolarWinds, and Sony Pictures — highlight the vulnerability of systems and the need to rethink cybersecurity approaches.

    The study team reviewed recent, impactful cybersecurity incidents to identify which security principles were most responsible for the scale and impact of the attack. “We noticed that while a number of these attacks exploited previously unknown implementation vulnerabilities (also known as ‘zero-days’), the vast majority actually were due to the exploitation of operational security principles,” says Christopher Roeser, study co-lead and the assistant head of the Homeland Protection and Air Traffic Control Division, “that is, the gaining of individuals’ credentials, and the movement within a well-connected network that allows users to gather a significant amount of information or have very widespread effects.”

    In other words, the malicious actor had “breached the moat” and effectively became an insider.

    Zero-trust security principles could protect against this type of insider threat by treating every component, service, and user of a system as continuously exposed to and potentially compromised by a malicious actor. A user’s identity is verified each time that they request to access a new resource, and every access is mediated, logged, and analyzed. It’s like putting trip wires all over the inside of a network system, says Gottschalk. “So, when an adversary trips over that trip wire, you’ll get a signal and can validate that signal and see what’s going on.”
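
    In code terms, “every access is mediated, logged, and analyzed” means no request is honored on the strength of a past session alone: each one is re-verified and recorded. The schematic sketch below illustrates the pattern; the verifier and policy functions are placeholders of my own, not any product’s API.

    ```python
    import logging
    from typing import Optional

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("zero-trust")

    def verify_identity(token: str) -> Optional[str]:
        """Placeholder: validate the caller's token against an identity provider."""
        return "alice" if token == "valid-token" else None

    def policy_allows(user: str, resource: str) -> bool:
        """Placeholder: least-privilege policy check for this user and resource."""
        return (user, resource) in {("alice", "payroll-db")}

    def access(token: str, resource: str) -> str:
        user = verify_identity(token)    # identity re-verified on EVERY request
        allowed = user is not None and policy_allows(user, resource)
        log.info("user=%s resource=%s allowed=%s", user, resource, allowed)  # audit trail / trip wire
        if not allowed:
            raise PermissionError(resource)
        return f"{resource}: ok"

    print(access("valid-token", "payroll-db"))
    ```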

    In practice, a zero-trust approach could look like replacing a single-sign-on system, which lets users sign in just once for access to multiple applications, with a cloud-based identity that is known and verified. “Today, a lot of organizations have different ways that people authenticate and log onto systems, and many of those have been aggregated for expediency into single-sign-on capabilities, just to make it easier for people to log onto their systems. But we envision a future state that embraces zero trust, where identity verification is enabled by cloud-based identity that’s portable and ubiquitous, and very secure itself.”

    While conducting their study, the team spoke to approximately 10 companies and government organizations that have adopted zero-trust implementations — either through cloud services, in-house management, or a combination of both. They found the hybrid approach to be a good model for government organizations to adopt. They also found that the implementation could take from three to five years. “We talked to organizations that have actually done implementations of zero trust, and all of them have indicated that significant organizational commitment and change was required to be able to implement them,” Gottschalk says.

    But a key takeaway from the study is that there isn’t a one-size-fits-all approach to zero trust. “It’s why we think that having test-bed and pilot efforts are going to be very important to balance out zero-trust security with the mission needs of those systems,” Gottschalk says. The team also recognizes the importance of conducting ongoing research and development beyond initial zero-trust implementations, to continue to address evolving threats.

    Lincoln Laboratory will present further findings from the study at its upcoming Cyber Technology for National Security conference, which will be held June 28-29. The conference will also offer a short course for attendees to learn more about the benefits and implementations of zero-trust architectures.

  • Frequent encounters build familiarity

    Do better spatial networks make for better neighbors? There is evidence that they do, according to Paige Bollen, a sixth-year political science graduate student at MIT. The networks Bollen works with are not virtual but physical, part of the built environment in which we are all embedded. Her research on urban spaces suggests that the routes bringing people together or keeping them apart factor significantly in whether individuals see each other as friend or foe.

    “We all live in networks of streets, and come across different types of people,” says Bollen. “Just passing by others provides information that informs our political and social views of the world.” In her doctoral research, Bollen is revealing how physical context matters in determining whether such ordinary encounters engender suspicion or even hostility, while others can lead to cooperation and tolerance.

    Through her in-depth studies mapping the movement of people in urban communities in Ghana and South Africa, Bollen is demonstrating that even in diverse communities, “when people repeatedly come into contact, even if that contact is casual, they can build understanding that can lead to cooperation and positive outcomes,” she says. “My argument is that frequent, casual contact, facilitated by street networks, can make people feel more comfortable with those unlike themselves.”

    Mapping urban networks

    Bollen’s case for the benefits of casual contact emerged from her pursuit of several related questions: Why do people in urban areas who regard other ethnic groups with prejudice and economic envy nevertheless manage to collaborate for a collective good? How do you reduce fears that arise from differences? How do the configuration of space and the built environment influence contact patterns among people?

    While other social science research suggests that there are weak ties in ethnically mixed urban communities, with casual contact exacerbating hostility, Bollen noted that there were plenty of examples of “cooperation across ethnic divisions in ethnically mixed communities.” She absorbed the work of psychologist Stanley Milgram, whose 1972 research showed that strangers seen frequently in certain places become familiar — less anonymous or threatening. So she set out to understand precisely how “the built environment of a neighborhood interacts with its demography to create distinct patterns of contact between social groups.”

    With the support of MIT Global Diversity Lab and MIT GOV/LAB, Bollen set out to develop measures of intergroup contact in cities in Ghana and South Africa. She uses street network data to predict contact patterns based on features of the built environment and then combines these measures with mobility data on peoples’ actual movement.

    “I created a huge dataset for every intersection in these cities, to determine the central nodes where many people are passing through,” she says. She combined these datasets with census data to determine which social groups were most likely to use specific intersections based on their position in a particular street network. She mapped these measures of casual contact to outcomes, such as inter-ethnic cooperation in Ghana and voting behavior in South Africa.
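
    The “central nodes where many people are passing through” can be formalized as high-betweenness intersections in the street graph. A small illustration with networkx follows — the toy graph stands in for Bollen’s far larger city-scale datasets.

    ```python
    import networkx as nx

    # Toy street network: nodes are intersections, edges are street segments
    G = nx.Graph()
    G.add_edges_from([
        ("A", "B"), ("B", "C"), ("C", "D"),
        ("B", "E"), ("E", "F"), ("F", "C"),
    ])

    # Betweenness: the share of shortest paths that pass through each intersection
    centrality = nx.betweenness_centrality(G)
    for node, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
        print(node, round(score, 3))   # high scores flag likely high-traffic nodes
    ```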

    “My analysis [in Ghana] showed that in areas that are more ethnically heterogeneous and where there are more people passing through intersections, we find more interconnections among people and more cooperation within communities in community development efforts,” she says.

    In a related survey experiment conducted on Facebook with 1,200 subjects, Bollen asked Accra residents if they would help an unknown non-co-ethnic in need with a financial gift. She found that the likelihood of offering such help was strongly linked to the frequency of interactions. “Helping behavior occurred when the subjects believed they would see this person again, even when they did not know the person in need well,” says Bollen. “They figured if they helped, they could count on this person’s reciprocity in the future.”

    For Bollen, this was “a powerful gut check” for her hypothesis that “frequency builds familiarity, because frequency provides information and drives expectations, which means it can reduce uncertainty and fear of the other.”

    In research underway in South Africa, a nation increasingly dealing with anti-immigrant violence, Bollen is investigating whether frequency of contact reduces prejudice against foreigners. Using her detailed street maps, 1.1 billion unique geolocated cellphone pings, and election data, she finds that frequent contact opportunities with immigrants are associated with lower support for anti-immigrant party voting.

    Passion for places and spaces

    Bollen never anticipated becoming a political scientist. The daughter of two academics, she was “bent on becoming a data scientist.” But she was also “always interested in why people behave in certain ways and how this influences macro trends.”

    As an undergraduate at Tufts University, she became interested in international affairs. But it was her 2013 fieldwork studying women-only carriages in Delhi’s metro system in India that proved formative. “I interviewed women for a month, talking to them about how these cars enabled them to participate in public life,” she recalls. Another project involving informal transportation routes in Cape Town, South Africa, immersed her more deeply in questions of people’s experience of public space. “I left college thinking about mobility and public space, and I discovered how much I love geographic information systems,” she says.

    A gig with the Commonwealth of Massachusetts to improve the 911 emergency service — updating and cleaning geolocations of addresses using Google Street View — further piqued her interest. “The job was tedious, but I realized you can really understand a place, and how people move around, from these images.” Bollen began thinking about a career in urban planning.

    Then a two-year stint as a researcher at MIT GOV/LAB brought Bollen firmly into the political science fold. Working with Lily Tsai, the Ford Professor of Political Science, on civil society partnerships in the developing world, Bollen realized that “political science wasn’t what I thought it was,” she says. “You could bring psychology, economics, and sociology into thinking about politics.” Her decision to join the doctoral program was simple: “I knew and loved the people I was with at MIT.”

    Bollen has not regretted that decision. “All the things I’ve been interested in are finally coming together in my dissertation,” she says. Due to the pandemic, questions involving space, mobility, and contact became sharper to her. “I shifted my research emphasis from asking people about inter-ethnic differences and inequality through surveys, to using contact and context information to measure these variables.”

    She sees a number of applications for her work, including working with civil society organizations in communities touched by ethnic or other frictions “to rethink what we know about contact, challenging some of the classic things we think we know.”

    As she moves into the final phases of her dissertation, which she hopes to publish as a book, Bollen also relishes teaching comparative politics to undergraduates. “There’s something so fun engaging with them, and making their arguments stronger,” she says. Amid the long process of earning a PhD, teaching helps her “enjoy what she is doing every single day.”