More stories

  • Q&A: Global challenges surrounding the deployment of AI

    The AI Policy Forum (AIPF) is an initiative of the MIT Schwarzman College of Computing to move the global conversation about the impact of artificial intelligence from principles to practical policy implementation. Formed in late 2020, AIPF brings together leaders in government, business, and academia to develop approaches to address the societal challenges posed by the rapid advances and increasing applicability of AI.

    The co-chairs of the AI Policy Forum are Aleksander Madry, the Cadence Design Systems Professor; Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science; and Luis Videgaray, senior lecturer at MIT Sloan School of Management and director of MIT AI Policy for the World Project. Here, they discuss some of the key issues facing the AI policy landscape today and the challenges surrounding the deployment of AI. The three are co-organizers of the upcoming AI Policy Forum Summit on Sept. 28, which will further explore the issues discussed here.

    Q: Can you talk about the ongoing work of the AI Policy Forum and the AI policy landscape generally?

    Ozdaglar: There is no shortage of discussion about AI at different venues, but conversations are often high-level, focused on questions of ethics and principles, or on policy problems alone. The approach the AIPF takes to its work is to target specific questions with actionable policy solutions and engage with the stakeholders working directly in these areas. We work “behind the scenes” with smaller focus groups to tackle these challenges and aim to bring visibility to some potential solutions alongside the players working directly on them through larger gatherings.

    Q: AI impacts many sectors, which makes us naturally worry about its trustworthiness. Are there any emerging best practices for development and deployment of trustworthy AI?

    Madry: The most important thing to understand regarding deploying trustworthy AI is that AI technology isn’t some natural, preordained phenomenon. It is something built by people. People who are making certain design decisions.

    We thus need to advance research that can guide these decisions as well as provide more desirable solutions. But we also need to be deliberate and think carefully about the incentives that drive these decisions. 

    Now, these incentives stem largely from business considerations, but not exclusively so. That is, we should also recognize that proper laws and regulations, as well as thoughtful industry standards, have a big role to play here too.

    Indeed, governments can put in place rules that prioritize the value of deploying AI while being keenly aware of the corresponding downsides, pitfalls, and impossibilities. The design of such rules will be an ongoing and evolving process as the technology continues to improve and change, and we need to adapt to socio-political realities as well.

    Q: Perhaps one of the most rapidly evolving domains in AI deployment is in the financial sector. From a policy perspective, how should governments, regulators, and lawmakers make AI work best for consumers in finance?

    Videgaray: The financial sector is seeing a number of trends that present policy challenges at the intersection of AI systems. For one, there is the issue of explainability. By law (in the U.S. and in many other countries), lenders need to provide explanations to customers when they take actions that are in any way deleterious to a customer’s interest, like denying a loan. However, as financial services increasingly rely on automated systems and machine learning models, the capacity of banks to unpack the “black box” of machine learning to provide that level of mandated explanation becomes tenuous. So how should the finance industry and its regulators adapt to this advance in technology? Perhaps we need new standards and expectations, as well as tools to meet these legal requirements.
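
    To make that explainability gap concrete, here is a minimal, hypothetical sketch (every feature name and weight below is invented for illustration, not drawn from any lender's model): with a transparent linear scoring model, the reasons behind a denial can be read directly from per-feature contributions, whereas an opaque machine learning model needs a separate post-hoc attribution step to produce anything comparable.

    # Hypothetical linear credit-scoring model; all names and weights are invented.
    weights = {"income": 0.8, "debt_ratio": -1.5, "late_payments": -0.9, "years_employed": 0.3}
    applicant = {"income": 0.4, "debt_ratio": 0.7, "late_payments": 2.0, "years_employed": 0.2}

    # With a transparent model, per-feature contributions double as the explanation.
    contributions = {f: weights[f] * applicant[f] for f in weights}
    score = sum(contributions.values())
    decision = "approve" if score > 0 else "deny"

    # Adverse-action-style reasons: the features that pulled the score down the most.
    reasons = [f for _, f in sorted((c, f) for f, c in contributions.items() if c < 0)[:2]]
    print(decision, reasons)  # -> deny ['late_payments', 'debt_ratio']

    # A black-box model offers no such direct read-out, which is the gap that lenders
    # and regulators now have to close with separate explanation tools.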

    Meanwhile, economies of scale and data network effects are leading to a proliferation of AI outsourcing, and more broadly, AI-as-a-service is becoming increasingly common in the finance industry. In particular, we are seeing fintech companies provide the tools for underwriting to other financial institutions — be it large banks or small, local credit unions. What does this segmentation of the supply chain mean for the industry? Who is accountable for the potential problems in AI systems deployed through several layers of outsourcing? How can regulators adapt to guarantee their mandates of financial stability, fairness, and other societal standards?

    Q: Social media is one of the most controversial sectors of the economy, resulting in many societal shifts and disruptions around the world. What policies or reforms might be needed to best ensure social media is a force for public good and not public harm?

    Ozdaglar: The role of social media in society is of growing concern to many, but the nature of these concerns can vary quite a bit — with some seeing social media as not doing enough to prevent, for example, misinformation and extremism, and others seeing it as unduly silencing certain viewpoints. This lack of a unified view on what the problem is impacts the capacity to enact any change. All of that is additionally coupled with the complexities of the legal framework in the U.S. spanning the First Amendment, Section 230 of the Communications Decency Act, and trade laws.

    However, these difficulties in regulating social media do not mean that there is nothing to be done. Indeed, regulators have begun to tighten their control over social media companies, both in the United States and abroad, be it through antitrust procedures or other means. In particular, Ofcom in the U.K. and the European Union are already introducing new layers of oversight for platforms. Additionally, some have proposed taxes on online advertising to address the negative externalities caused by the current social media business model. So, the policy tools are there, if the political will and proper guidance exist to implement them.

  • Visualizing migration stories

    On June 27, 2022, 53 people migrating to the United States were found dead in an overheated trailer in San Antonio, Texas. Understanding why migrants willingly take such risks is the topic of a recent exhibition and report, co-authored by researchers at MIT’s Civic Data Design Lab (CDDL). The research has been used by the U.S. Senate and the United Nations to develop new policies to address the challenges, dangers, and opportunities presented by migration in the Americas.

    To illustrate these motivations and risks, researchers at CDDL have designed an exhibition featuring digital and physical visualizations that encourage visitors to engage with migrants’ experiences more fully. “Distance Unknown” made its debut at the United Nations World Food Program (WFP) executive board meeting in Rome earlier this summer, with plans for additional exhibition stops over the next year.

    The exhibition is inspired by the 2021 report about migration, co-authored by CDDL, that highlighted economic distress as the main factor pushing migrants from Central America to the United States. The report’s findings were cited in a January 2022 letter from 35 U.S. senators to Homeland Security Secretary Alejandro Mayorkas and Secretary of State Antony Blinken (who leads the Biden administration’s migration task force) that advocated for addressing humanitarian needs in Central America. In June, the United States joined 20 countries in issuing the Los Angeles Declaration on Migration and Protection, which proposed expanded legal avenues to migration.

    “This exhibition takes a unique approach to visualizing migration stories by humanizing the data. Visitors to the exhibition can see the data in aggregate, but then they can dive deeper and learn migrants’ individual motivations,” says Sarah Williams, associate professor of technology and urban planning, director of the Civic Data Design Lab and the Norman B. Leventhal Center for Advanced Urbanism, and the lead designer of the exhibition.

    The data for the exhibition were taken from a survey of over 5,000 people in El Salvador, Guatemala, and Honduras conducted by the WFP and analyzed in the subsequent report. The report showed that approximately 43 percent of people surveyed in 2021 had considered migrating in the prior year, compared to 8 percent in 2019 — a change that comes after nearly two years of impacts from a global pandemic and as food insecurity dramatically increased in that region. Survey respondents cited low wages, unemployment, and minimal income levels as factors increasing their desire to migrate — ahead of reasons such as violence or natural disasters.

    On the wall of the exhibition is a vibrant tapestry made of paper currency woven by 13 Latin American immigrants. Approximately 15-by-8 feet, this physical data visualization explains the root causes of migration from Central America documented by CDDL research. Each bill in the tapestry represents one migrant; visitors are invited to take a piece of the tapestry and scan it at a touch-screen station, where the story of that migrant appears. This allows visitors to dive deeper into the causes of migration by learning more about why an individual migrant family in the study left home, their household circumstances, and their personal stories.

    Another feature of the exhibition is an interactive map that allows visitors to explore the journeys and barriers that migrants face along the way. Created from a unique dataset collected by researchers from internet hotspots along the migration trail, the data showed that migrants from 43 countries (some as distant as China and Afghanistan) used this Latin American trail. The map highlights the Darien Gap region of Central America, one of the most dangerous and costly migration routes. The area is remote, without roads, and consists of swamps and dense jungle.

    The “Distance Unknown” exhibition represented data taken from internet hotspots on the migration pathway from the Darien Gap in Colombia to the Mexican border. This image shows migrant routes from 43 countries.

    Image courtesy of the Civic Data Design Lab.

    The intense multimedia exhibition demonstrates the approach that Williams takes with her research. “One of the exciting features of the exhibition is that it shows that artistic forms of data visualization start new conversations, which create the dialogue necessary for policy change. We couldn’t be more thrilled with the way the exhibition helped influence the hearts and minds of people who have the political will to impact policy,” says Williams.

    In his opening remarks to the exhibition, David Beasley, executive director of WFP, explained that “when people have to migrate because they have no choice, it creates political problems on all sides,” and emphasized the importance of proposing solutions. Citing the 2021 report, Beasley noted that migrants from El Salvador, Guatemala, and Honduras collectively spent $2.2 billion to migrate to the United States in 2021, which is comparable to what their respective governments spend on primary education.

    The WFP hopes to bring the exhibition to other locations, including Washington, Geneva, New York, Madrid, Buenos Aires, and Panama.

  • Emma Gibson: Optimizing health care logistics in Africa

    Growing up in South Africa at the turn of the century, Emma Gibson saw the rise of the HIV/AIDS epidemic and its devastating impact on her home country, where many people lacked life-saving health care. At the time, Gibson was too young to understand what a sexually transmitted infection was, but she knew that HIV was infecting millions of South Africans and AIDS was taking hundreds of thousands of lives. “As a child, I was terrified by this monster that was HIV and felt so powerless to do anything about it,” she says.

    Now, as an adult, her childhood fear of the HIV epidemic has evolved into a desire to fight it. Gibson seeks to improve health care for HIV and other diseases in regions with limited resources, including South Africa. She wants to help health care facilities in these areas to use their resources more effectively so that patients can more easily obtain care.

    To help reach her goal, Gibson sought mathematics and logistics training through higher education in South Africa. She first earned her bachelor’s degree in mathematical sciences at the University of the Witwatersrand, and then her master’s degree in operations research at Stellenbosch University. There, she learned to tackle complex decision-making problems using math, statistics, and computer simulations.

    During her master’s, Gibson studied the operational challenges faced in rural South African health care facilities by working with staff at Zithulele Hospital in the Eastern Cape, one of the country’s poorest provinces. Her research focused on ways to reduce hours-long wait times for patients seeking same-day care. In the end, she developed a software tool to model patient congestion throughout the day and optimize staff schedules accordingly, enabling the hospital to care for its patients more efficiently.

    After completing her master’s, Gibson wanted to further her education outside of South Africa and left to pursue a PhD in operations research at MIT. Upon arrival, she branched out in her research and worked on a project to improve breast cancer treatment in U.S. health care, a very different environment from what she was used to.

    Two years later, Gibson had the opportunity to return to researching health care in resource-limited settings and began working with Jónas Jónasson, an associate professor at the MIT Sloan School of Management, on a new project to improve diagnostic services in sub-Saharan Africa. For the past four years, she has been working diligently on this project in collaboration with researchers at the Indian School of Business and Northwestern University. “My love language is time,” she says. “If I’m investing a lot of time in something, I really value it.”

    Scheduling sample transport

    Diagnostic testing is an essential tool that allows medical professionals to identify new diagnoses in patients and monitor patients’ conditions as they undergo treatment. For example, people living with HIV require regular blood tests to ensure that their prescribed treatments are working effectively and provide an early warning of potential treatment failures.

    For Gibson’s current project, she’s trying to improve diagnostic services in Malawi, a landlocked country in southeast Africa. “We have the tools” to diagnose and treat diseases like HIV, she says. “But in resource-limited settings, we often lack the money, the staff, and the infrastructure to reach every patient that needs them.”

    When diagnostic testing is needed, clinicians collect samples from patients and send the samples to be tested at a laboratory, which then returns the results to the facility where the patient is treated. To move these items between facilities and laboratories, Malawi has developed a national sample transportation network. The transportation system plays an important role in linking remote, rural facilities to laboratory services and ensuring that patients in these areas can access diagnostic testing through community clinics. Samples collected at these clinics are first transported to nearby district hubs, and then forwarded to laboratories located in urban areas. Since most facilities do not have computers or communications infrastructure, laboratories print copies of test results and send them back to facilities through the same transportation process.

    The sample transportation cycle is onerous, but it’s a practical solution to a difficult problem. “During the Covid pandemic, we saw how hard it was to scale up diagnostic infrastructure,” Gibson says. Diagnostic services in sub-Saharan Africa face “similar challenges, but in a much poorer setting.”

    In Malawi, sample transportation is managed by a nongovernmental organization called Riders 4 Health. The organization has around 80 couriers on motorcycles who transport samples and test results between facilities. “When we started working with [Riders], the couriers operated on fixed weekly schedules, visiting each site once or twice a week,” Gibson says. But that led to “a lot of unnecessary trips and delays.”

    To make sample transportation more efficient, Gibson developed a dynamic scheduling system that adapts to the current demand for diagnostic testing. The system consists of two main parts: an information sharing platform that aggregates sample transportation data, and an algorithm that uses the data to generate optimized routes and schedules for sample transport couriers.
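
    The deployed algorithm is more sophisticated than can be shown here, but a minimal sketch of the demand-driven idea might look like the following (the site names, distances, daily distance cap, and urgency rule are all invented for illustration; the real system plans multi-stop routes across the network):

    from dataclasses import dataclass

    @dataclass
    class Site:
        name: str
        pending_samples: int   # backlog reported through the information-sharing platform
        days_waiting: int      # age of the oldest pending sample, in days
        km_from_hub: float     # one-way distance from the district hub

    def plan_route(sites, max_km=150.0):
        """Greedy daily plan: visit the most urgent backlogs first, within a distance budget."""
        ranked = sorted(sites, key=lambda s: s.days_waiting * s.pending_samples, reverse=True)
        route, km_used = [], 0.0
        for site in ranked:
            round_trip = 2 * site.km_from_hub  # crude cost model; a real router chains stops
            if site.pending_samples and km_used + round_trip <= max_km:
                route.append(site.name)
                km_used += round_trip
        return route, km_used

    sites = [Site("Clinic A", 12, 3, 18.0), Site("Clinic B", 0, 0, 9.0),
             Site("Clinic C", 5, 6, 40.0), Site("Clinic D", 2, 1, 25.0)]
    print(plan_route(sites))  # -> (['Clinic A', 'Clinic C'], 116.0); Clinic B has no samples, so no trip

    The point of the sketch is only the contrast with a fixed weekly timetable: sites with nothing to pick up are skipped, and sites with old or large backlogs are served first.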

    In 2019, Gibson ran a four-month-long pilot test for this system in three out of the 27 districts in Malawi. During the pilot study, six couriers transported over 20,000 samples and results across 51 health care facilities, and 150 health care workers participated in data sharing.

    The pilot was a success. Gibson’s dynamic scheduling system eliminated about half the unnecessary trips and reduced transportation delays by 25 percent — a delay that used to be four days was reduced to three. Now, Riders 4 Health is developing their own version of Gibson’s system to operate nationally in Malawi. Throughout this project, “we focused on making sure this was something that could grow with the organization,” she says. “It’s gratifying to see that actually happening.”

    Leveraging patient data

    Gibson is completing her MIT degree this September but will continue working to improve health care in Africa. After graduation, she will join the technology and analytics health care practice of an established company in South Africa. Her initial focus will be on public health care institutions, including Chris Hani Baragwanath Academic Hospital in Johannesburg, the third-largest hospital in the world.

    In this role, Gibson will work to fill in gaps in African patient data for medical operational research and develop ways to use this data more effectively to improve health care in resource-limited areas. For example, better data systems can help to monitor the prevalence and impact of different diseases, guiding where health care workers and researchers put their efforts to help the most people. “You can’t make good decisions if you don’t have all the information,” Gibson says.

    To best leverage patient data for improving health care, Gibson plans to reevaluate how data systems are structured and used in the hospital. For ideas on upgrading the current system, she’ll look to existing data systems in other countries to see what works and what doesn’t, while also drawing upon her past research experience in U.S. health care. Ultimately, she’ll tailor the new hospital data system to South African needs to accurately inform future directions in health care.

    Gibson’s new job — her “dream job” — will be based in the United Kingdom, but she anticipates spending a significant amount of time in Johannesburg. “I have so many opportunities in the wider world, but the ones that appeal to me are always back in the place I came from,” she says.

  • Exploring emerging topics in artificial intelligence policy

    Members of the public sector, private sector, and academia convened for the second AI Policy Forum Symposium last month to explore critical directions and questions posed by artificial intelligence in our economies and societies.

    The virtual event, hosted by the AI Policy Forum (AIPF) — an undertaking by the MIT Schwarzman College of Computing to bridge high-level principles of AI policy with the practices and trade-offs of governing — brought together an array of distinguished panelists to delve into four cross-cutting topics: law, auditing, health care, and mobility.

    In the last year there have been substantial changes in the regulatory and policy landscape around AI in several countries — most notably in Europe with the development of the European Union Artificial Intelligence Act, the first attempt by a major regulator to propose a law on artificial intelligence. In the United States, the National AI Initiative Act of 2020, which became law in January 2021, is providing a coordinated program across the federal government to accelerate AI research and application for economic prosperity and security gains. Finally, China recently advanced several new regulations of its own.

    Each of these developments represents a different approach to legislating AI, but what makes a good AI law? And when should AI legislation be based on binding rules with penalties versus establishing voluntary guidelines?

    Jonathan Zittrain, professor of international law at Harvard Law School and director of the Berkman Klein Center for Internet and Society, says the self-regulatory approach taken during the expansion of the internet had its limitations, with companies struggling to balance their interests with those of their industry and the public.

    “One lesson might be that actually having representative government take an active role early on is a good idea,” he says. “It’s just that they’re challenged by the fact that there appears to be two phases in this environment of regulation. One, too early to tell, and two, too late to do anything about it. In AI I think a lot of people would say we’re still in the ‘too early to tell’ stage but given that there’s no middle zone before it’s too late, it might still call for some regulation.”

    A theme that came up repeatedly throughout the first panel on AI laws — a conversation moderated by Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and chair of the AI Policy Forum — was the notion of trust. “If you told me the truth consistently, I would say you are an honest person. If AI could provide something similar, something that I can say is consistent and is the same, then I would say it’s trusted AI,” says Bitange Ndemo, professor of entrepreneurship at the University of Nairobi and the former permanent secretary of Kenya’s Ministry of Information and Communication.

    Eva Kaili, vice president of the European Parliament, adds that “In Europe, whenever you use something, like any medication, you know that it has been checked. You know you can trust it. You know the controls are there. We have to achieve the same with AI.” Kaili further stresses that building trust in AI systems will not only lead to people using more applications in a safe manner, but that AI itself will reap benefits as greater amounts of data will be generated as a result.

    The rapidly increasing applicability of AI across fields has prompted the need to address both the opportunities and challenges of emerging technologies and the impact they have on social and ethical issues such as privacy, fairness, bias, transparency, and accountability. In health care, for example, new techniques in machine learning have shown enormous promise for improving quality and efficiency, but questions of equity, data access and privacy, safety and reliability, and immunology and global health surveillance remain unresolved.

    MIT’s Marzyeh Ghassemi, an assistant professor in the Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science, and David Sontag, an associate professor of electrical engineering and computer science, collaborated with Ziad Obermeyer, an associate professor of health policy and management at the University of California Berkeley School of Public Health, to organize AIPF Health Wide Reach, a series of sessions to discuss issues of data sharing and privacy in clinical AI. The organizers assembled experts devoted to AI, policy, and health from around the world with the goal of understanding what can be done to decrease barriers to access to high-quality health data to advance more innovative, robust, and inclusive research results while being respectful of patient privacy.

    Over the course of the series, members of the group presented on a topic of expertise and were tasked with proposing concrete policy approaches to the challenge discussed. Drawing on these wide-ranging conversations, participants unveiled their findings during the symposium, covering nonprofit and government success stories and limited access models; upside demonstrations; legal frameworks, regulation, and funding; technical approaches to privacy; and infrastructure and data sharing. The group then discussed some of their recommendations that are summarized in a report that will be released soon.

    One of the findings calls for the need to make more data available for research use. Recommendations that stem from this finding include updating regulations to promote data sharing and to enable easier access to safe harbors, such as the one the Health Insurance Portability and Accountability Act (HIPAA) provides for de-identification, as well as expanding funding for private health institutions to curate datasets, among others. Another finding, to remove barriers to data for researchers, supports a recommendation to decrease obstacles to research and development on federally created health data. “If this is data that should be accessible because it’s funded by some federal entity, we should easily establish the steps that are going to be part of gaining access to that so that it’s a more inclusive and equitable set of research opportunities for all,” says Ghassemi. The group also recommends taking a careful look at the ethical principles that govern data sharing. While there are already many principles proposed around this, Ghassemi says that “obviously you can’t satisfy all levers or buttons at once, but we think that this is a trade-off that’s very important to think through intelligently.”

    In addition to law and health care, other facets of AI policy explored during the event included auditing and monitoring AI systems at scale, and the role AI plays in mobility and the range of technical, business, and policy challenges for autonomous vehicles in particular.

    The AI Policy Forum Symposium was an effort to bring together communities of practice with the shared aim of designing the next chapter of AI. In his closing remarks, Aleksander Madry, the Cadence Design Systems Professor of Computing at MIT and faculty co-lead of the AI Policy Forum, emphasized the importance of collaboration and the need for different communities to communicate with each other in order to truly make an impact in the AI policy space.

    “The dream here is that we all can meet together — researchers, industry, policymakers, and other stakeholders — and really talk to each other, understand each other’s concerns, and think together about solutions,” Madry said. “This is the mission of the AI Policy Forum and this is what we want to enable.”

  • Mining social media data for social good

    For Erin Walk, who has loved school since she was a little girl, pursuing a graduate degree always seemed like a given. As a mechanical engineering major at Harvard University with a minor in government, she figured that going to graduate school in engineering would be the next logical step. However, during her senior year, a class on the “Technology of War” changed her trajectory, sparking her interest in technology and policy.

    “[Warfare] seems like a very dark reason for this interest to blossom … but I was so interested in how these technological developments including cyberwar had such a large impact on the entire course of world history,” Walk says. The class took a starkly different perspective from her engineering classes, which often focused on how a revolutionary technology was built. Instead, Walk was challenged to think about “the implications of what this [technology] could do.” 

    Now, Walk is studying the intersection between data science, policy, and technology as a graduate student in the Social and Engineering Systems program (SES), part of the Institute for Data, Systems, and Society (IDSS). Her research has demonstrated the value and bias inherent in social media data, with a focus on how to mine social media data to better understand the conflict in Syria. 

    Using data for social good

    With a newfound interest in policy developing just as college was drawing to a close, Walk says, “I realized I did not know what I wanted to do research on for five whole years, and the idea of getting a PhD started to feel very daunting.” Instead, she decided to work for a web security company in Washington, as a member of the policy team. “Being in school can be this fast process where you feel like you are being pushed through a tube and all of a sudden you come out the other end. Work gave me a lot more mental time to think about what I enjoyed and what was important to me,” she says.

    Walk served as a liaison between think tanks and nonprofits in Washington that worked to provide services and encourage policies that enable equitable technology distribution. The role helped her identify what held her interest: corporate social responsibility projects that addressed access to technology, in this case, by donating free web security services to nonprofit organizations and to election websites. She became curious about how access to data and to the Internet can be beneficial for education, and how such access can be leveraged to establish connections to populations that are otherwise hard to reach, such as refugees, marginalized groups, or activist communities that rely on anonymity for safety.

    Walk knew she wanted to pursue this kind of tech activism work, but she also recognized that staying with a company driven by profits would not be the best avenue to fulfill her personal career aspirations. Graduate school seemed like the best option to both learn the data science skills she needed, and pursue full-time research focusing on technology and policy.

    Finding new ways to tap social media data

    With these goals in mind, Walk joined the SES graduate program in IDSS. “This program for me had the most balance,” she says. “I have a lot of leeway to explore whatever kind of research I want, provided it has an impact component and a data component.”

    During her first year, she intended to explore a variety of research advisors to find the right fit. Instead, during her first few months on MIT’s campus, she sat down for an introductory meeting with her now-research advisor, Fotini Christia, the Ford International Professor in the Social Sciences, and walked out with a project. Her new task: analyzing “how different social media sources are used differently by groups within the conflict, and how those different narratives present themselves online. So much social science research tends to use just Twitter, or just Facebook, to draw conclusions. It is important to understand how your data set might be skewed,” she says.

    Walk’s current research focuses on another novel way to tap social media. Scholars traditionally use geographic data to understand population movements, but her research has demonstrated that social media can also be a ripe data source. She is analyzing how social media discussions differ in places with and without refugees, with a particular focus on places where refugees have returned to their homelands, including Syria.

    “Now that the [Syrian] civil war has been going on for so long, there is a lot of discussion on how to bring refugees back in [to their homelands],” Walk says. Her research adds to this discussion by using social media sources to understand and predict the factors that encourage refugees to return, such as economic opportunities and decreases in local violence. Her goal is to harness some of the social media data to provide policymakers and nonprofits with information on how to address repatriation and related issues.

    Walk attributes much of her growth as a graduate student to the influence of collaborators, especially Professor Kiran Garimella at Rutgers’ Department of Library and Information Science. “So much of being a graduate student is feeling like you have a stupid question and figuring out who you can be vulnerable with in asking that stupid question,” she says. “I am very lucky to have a lot of those people in my life.”

    Encouraging the next generation

    Now, as a third-year student, Walk is the one whom others go to with their “stupid questions.” This desire to mentor and share her knowledge extends beyond the laboratory. “Something I discovered is that I really like talking to and advising people who are in a similar position to where I was. It is fulfilling to work with smart people close to my age who are just trying to figure out the answers to these meaty life issues that I have also struggled with,” she says.

    This realization led Walk to a position as a resident advisor at Harvard University’s Mather House, an undergraduate dormitory and community center. Walk became a faculty dean aide during her first year at MIT, and since then has served as a full-time Mather House resident tutor. “Every year I advise a new class of students, and I just become invested in their process. I get to talk to people about their lives, about their classes, about what is making them excited and about what is making them sad,” she says.

    After she graduates, Walk plans to explore issues that have a positive, tangible impact on policy outcomes and people, perhaps in an academic lab or in a nonprofit organization. Two such issues that particularly intrigue her are internet access and privacy for underserved populations. Regardless of the issues, she will continue to draw from both political science and data science. “One of my favorite things about being a part of interdisciplinary research is that [experts in] political science and computer science approach these issues so differently, and it is very grounding to have both of those perspectives. Political science thinks so carefully about measurement, population selection, and research design … [while] computer science has so many interesting methods that should be used in other disciplines,” she says.

    No matter what the future holds, Walk already has a sense of contentment. She admits that “my path was much less linear than I expected. I don’t think I even realized that a field like this existed.” Nevertheless, she says with a laugh, “I think that little-girl me would be very proud of present-day me.”

  • Hallucinating to better text translation

    As babies, we babble and imitate our way to learning languages. We don’t start off reading raw text, which requires fundamental knowledge and understanding about the world, as well as the advanced ability to interpret and infer descriptions and relationships. Rather, humans begin our language journey slowly, by pointing and interacting with our environment, grounding our words and perceiving their meaning in the context of the physical and social world. Eventually, we can craft full sentences to communicate complex ideas.

    Similarly, when humans begin learning and translating into another language, the incorporation of other sensory information, like multimedia, paired with the new and unfamiliar words, like flashcards with images, improves language acquisition and retention. Then, with enough practice, humans can accurately translate new, unseen sentences in context without the accompanying media; however, imagining a picture based on the original text helps.

    This is the basis of a new machine learning model, called VALHALLA, by researchers from MIT, IBM, and the University of California at San Diego, in which a trained neural network sees a source sentence in one language, hallucinates an image of what it looks like, and then uses both to translate into a target language. The team found that their method demonstrates improved accuracy of machine translation over text-only translation. Further, it provided an additional boost for cases with long sentences, under-resourced languages, and instances where part of the source sentence is inaccessible to the machine translator.

    As a core task within the AI field of natural language processing (NLP), machine translation is an “eminently practical technology that’s being used by millions of people every day,” says study co-author Yoon Kim, assistant professor in MIT’s Department of Electrical Engineering and Computer Science with affiliations in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT-IBM Watson AI Lab. With recent, significant advances in deep learning, “there’s been an interesting development in how one might use non-text information — for example, images, audio, or other grounding information — to tackle practical tasks involving language,” says Kim, because “when humans are performing language processing tasks, we’re doing so within a grounded, situated world.” The pairing of hallucinated images and text during inference, the team postulated, imitates that process, providing context for improved performance over current state-of-the-art techniques, which utilize text-only data.

    This research will be presented at the IEEE / CVF Computer Vision and Pattern Recognition Conference this month. Kim’s co-authors are UC San Diego graduate student Yi Li and Professor Nuno Vasconcelos, along with research staff members Rameswar Panda, Chun-fu “Richard” Chen, Rogerio Feris, and IBM Director David Cox of IBM Research and the MIT-IBM Watson AI Lab.

    Learning to hallucinate from images

    When we learn new languages and to translate, we’re often provided with examples and practice before venturing out on our own. The same is true for machine-translation systems; however, if images are used during training, these AI methods also require visual aids for testing, limiting their applicability, says Panda.

    “In real-world scenarios, you might not have an image with respect to the source sentence. So, our motivation was basically: Instead of using an external image during inference as input, can we use visual hallucination — the ability to imagine visual scenes — to improve machine translation systems?” says Panda.

    To do this, the team used an encoder-decoder architecture with two transformers, a type of neural network model that’s suited for sequence-dependent data, like language, and that can pay attention to the key words and semantics of a sentence. One transformer generates a visual hallucination, and the other performs multimodal translation using outputs from the first transformer.

    During training, there are two streams of translation: a source sentence and a ground-truth image that is paired with it, and the same source sentence that is visually hallucinated to make a text-image pair. First the ground-truth image and sentence are tokenized into representations that can be handled by transformers; for the case of the sentence, each word is a token. The source sentence is tokenized again, but this time passed through the visual hallucination transformer, outputting a hallucination, a discrete image representation of the sentence. The researchers incorporated an autoregression that compares the ground-truth and hallucinated representations for congruency — e.g., homonyms: a reference to an animal “bat” isn’t hallucinated as a baseball bat. The hallucination transformer then uses the difference between them to optimize its predictions and visual output, making sure the context is consistent.

    The two sets of tokens are then simultaneously passed through the multimodal translation transformer, each containing the sentence representation and either the hallucinated or ground-truth image. The tokenized text translation outputs are compared with the goal of being similar to each other and to the target sentence in another language. Any differences are then relayed back to the translation transformer for further optimization.

    For testing, the ground-truth image stream drops off, since images likely wouldn’t be available in everyday scenarios.
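
    A highly simplified sketch of this two-stream setup, written here against PyTorch, is shown below. The module sizes, the pooling of source tokens into a fixed number of discrete image tokens, and the consistency term are illustrative stand-ins rather than the published implementation, which, per the description above, uses an autoregressive comparison between the ground-truth and hallucinated representations rather than the simple losses in this sketch.

    # Illustrative two-stream training sketch (not the authors' released code).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class HallucinationTransformer(nn.Module):
        """Maps source-text tokens to a sequence of discrete 'image' tokens."""
        def __init__(self, vocab_size, codebook_size, d_model=256, n_img_tokens=16):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            self.to_image = nn.Linear(d_model, codebook_size)
            self.n_img_tokens = n_img_tokens

        def forward(self, src):
            h = self.encoder(self.embed(src))
            # Toy pooling: assumes the source is padded to at least n_img_tokens positions.
            return self.to_image(h[:, : self.n_img_tokens, :])  # (batch, n_img_tokens, codebook)

    class MultimodalTranslator(nn.Module):
        """Translates source tokens plus (real or hallucinated) image tokens."""
        def __init__(self, vocab_size, codebook_size, d_model=256):
            super().__init__()
            self.text_embed = nn.Embedding(vocab_size, d_model)
            self.img_embed = nn.Embedding(codebook_size, d_model)
            self.seq2seq = nn.Transformer(d_model, nhead=8, batch_first=True)
            self.out = nn.Linear(d_model, vocab_size)

        def forward(self, src, img_tokens, tgt_in):
            memory_in = torch.cat([self.text_embed(src), self.img_embed(img_tokens)], dim=1)
            return self.out(self.seq2seq(memory_in, self.text_embed(tgt_in)))

    def training_step(halluc, translator, src, gt_img_tokens, tgt):
        img_logits = halluc(src)
        halluc_tokens = img_logits.argmax(-1)  # simplification: a differentiable scheme is needed for end-to-end training
        # Keep the hallucinated representation consistent with the ground-truth image tokens.
        loss_img = F.cross_entropy(img_logits.transpose(1, 2), gt_img_tokens)
        # Stream 1: ground-truth image; stream 2: hallucinated image (teacher forcing, no causal mask here).
        logits_gt = translator(src, gt_img_tokens, tgt[:, :-1])
        logits_hal = translator(src, halluc_tokens, tgt[:, :-1])
        loss_trans = (F.cross_entropy(logits_gt.transpose(1, 2), tgt[:, 1:]) +
                      F.cross_entropy(logits_hal.transpose(1, 2), tgt[:, 1:]))
        # Encourage the two translation streams to agree with each other.
        loss_consist = F.kl_div(logits_hal.log_softmax(-1), logits_gt.softmax(-1),
                                reduction="batchmean")
        return loss_img + loss_trans + loss_consist

    # At test time only the hallucinated stream is used:
    #   img_tokens = halluc(src).argmax(-1); logits = translator(src, img_tokens, tgt_in)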

    “To the best of our knowledge, we haven’t seen any work which actually uses a hallucination transformer jointly with a multimodal translation system to improve machine translation performance,” says Panda.

    Visualizing the target text

    To test their method, the team put VALHALLA up against other state-of-the-art multimodal and text-only translation methods. They used public benchmark datasets containing ground-truth images with source sentences, and a dataset for translating text-only news articles. The researchers measured its performance over 13 tasks, ranging from translation for well-resourced languages (like English, German, and French), to under-resourced languages (like English to Romanian), to non-English pairs (like Spanish to French). The group also tested varying transformer model sizes, how accuracy changes with the sentence length, and translation under limited textual context, where portions of the text were hidden from the machine translators.

    The team observed significant improvements over text-only translation methods, along with improved data efficiency, and found that smaller models performed better than the larger base model. As sentences became longer, VALHALLA’s performance over other methods grew, which the researchers attributed to the addition of more ambiguous words. In cases where part of the sentence was masked, VALHALLA could recover and translate the original text, which the team found surprising.

    Further unexpected findings arose: “Where there weren’t as many training [image and] text pairs, [like for under-resourced languages], improvements were more significant, which indicates that grounding in images helps in low-data regimes,” says Kim. “Another thing that was quite surprising to me was this improved performance, even on types of text that aren’t necessarily easily connectable to images. For example, maybe it’s not so surprising if this helps in translating visually salient sentences, like the ‘there is a red car in front of the house.’ [However], even in text-only [news article] domains, the approach was able to improve upon text-only systems.”

    While VALHALLA performs well, the researchers note that it does have limitations, requiring pairs of sentences to be annotated with an image, which could make it more expensive to obtain. It also performs better in its grounded domain than on text-only news articles. Moreover, Kim and Panda note, a technique like VALHALLA is still a black box, with the assumption that hallucinated images are providing helpful information, and the team plans to investigate what and how the model is learning in order to validate their methods.

    In the future, the team plans to explore other means of improving translation. “Here, we only focus on images, but there are other types of multimodal information — for example, speech, video, or touch, or other sensory modalities,” says Panda. “We believe such multimodal grounding can lead to even more efficient machine translation models, potentially benefiting translation across many low-resource languages spoken in the world.”

    This research was supported, in part, by the MIT-IBM Watson AI Lab and the National Science Foundation.

  • Frequent encounters build familiarity

    Do better spatial networks make for better neighbors? There is evidence that they do, according to Paige Bollen, a sixth-year political science graduate student at MIT. The networks Bollen works with are not virtual but physical, part of the built environment in which we are all embedded. Her research on urban spaces suggests that the routes bringing people together or keeping them apart factor significantly in whether individuals see each other as friend or foe.

    “We all live in networks of streets, and come across different types of people,” says Bollen. “Just passing by others provides information that informs our political and social views of the world.” In her doctoral research, Bollen is revealing how physical context matters in determining whether such ordinary encounters engender suspicion or even hostility, or instead lead to cooperation and tolerance.

    Through her in-depth studies mapping the movement of people in urban communities in Ghana and South Africa, Bollen is demonstrating that even in diverse communities, “when people repeatedly come into contact, even if that contact is casual, they can build understanding that can lead to cooperation and positive outcomes,” she says. “My argument is that frequent, casual contact, facilitated by street networks, can make people feel more comfortable with those unlike themselves,” she says.

    Mapping urban networks

    Bollen’s case for the benefits of casual contact emerged from her pursuit of several related questions: Why do people in urban areas who regard other ethnic groups with prejudice and economic envy nevertheless manage to collaborate for a collective good? How do you reduce fears that arise from differences? How do the configuration of space and the built environment influence contact patterns among people?

    While other social science research suggests that there are weak ties in ethnically mixed urban communities, with casual contact exacerbating hostility, Bollen noted that there were plenty of examples of “cooperation across ethnic divisions in ethnically mixed communities.” She absorbed the work of psychologist Stanley Milgram, whose 1972 research showed that strangers seen frequently in certain places become familiar — less anonymous or threatening. So she set out to understand precisely how “the built environment of a neighborhood interacts with its demography to create distinct patterns of contact between social groups.”

    With the support of MIT Global Diversity Lab and MIT GOV/LAB, Bollen set out to develop measures of intergroup contact in cities in Ghana and South Africa. She uses street network data to predict contact patterns based on features of the built environment and then combines these measures with mobility data on peoples’ actual movement.

    “I created a huge dataset for every intersection in these cities, to determine the central nodes where many people are passing through,” she says. She combined these datasets with census data to determine which social groups were most likely to use specific intersections based on their position in a particular street network. She mapped these measures of casual contact to outcomes, such as inter-ethnic cooperation in Ghana and voting behavior in South Africa.
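
    As a rough illustration of this kind of analysis (the toy street graph, group shares, and exposure formula below are invented for illustration, not Bollen's actual measures), intersections can be scored by betweenness centrality and combined with neighborhood demographics to flag likely sites of frequent casual contact between groups:

    # Toy example: score intersections by how many shortest routes pass through them.
    import networkx as nx

    # Nodes are intersections, edges are street segments with lengths in meters.
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("A", "B", 120), ("B", "C", 80), ("C", "D", 150),
        ("B", "E", 60), ("E", "D", 90), ("A", "E", 200),
    ], weight="length")

    # Betweenness centrality: intersections lying on many shortest paths are "pass-through" nodes.
    centrality = nx.betweenness_centrality(G, weight="length", normalized=True)

    # Hypothetical census attachment: share of one social group living nearest each intersection.
    group_share = {"A": 0.9, "B": 0.6, "C": 0.4, "D": 0.2, "E": 0.5}

    # Crude contact measure: high-centrality nodes in demographically mixed areas
    # are where frequent, casual intergroup contact is most likely.
    exposure = {n: centrality[n] * (1 - abs(2 * group_share[n] - 1)) for n in G.nodes}
    for node, score in sorted(exposure.items(), key=lambda kv: -kv[1]):
        print(f"intersection {node}: centrality={centrality[node]:.2f}, contact score={score:.2f}")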

    “My analysis [in Ghana] showed that in areas that are more ethnically heterogeneous and where there are more people passing through intersections, we find more interconnections among people and more cooperation within communities in community development efforts,” she says.

    In a related survey experiment conducted on Facebook with 1,200 subjects, Bollen asked Accra residents if they would help an unknown non-co-ethnic in need with a financial gift. She found that the likelihood of offering such help was strongly linked to the frequency of interactions. “Helping behavior occurred when the subjects believed they would see this person again, even when they did not know the person in need well,” says Bollen. “They figured if they helped, they could count on this person’s reciprocity in the future.”

    For Bollen, this was “a powerful gut check” for her hypothesis that “frequency builds familiarity, because frequency provides information and drives expectations, which means it can reduce uncertainty and fear of the other.”

    In research underway in South Africa, a nation increasingly dealing with anti-immigrant violence, Bollen is investigating whether frequency of contact reduces prejudice against foreigners. Using her detailed street maps, 1.1 billion unique geolocated cellphone pings, and election data, she finds that frequent contact opportunities with immigrants are associated with lower support for anti-immigrant party voting.

    Passion for places and spaces

    Bollen never anticipated becoming a political scientist. The daughter of two academics, she was “bent on becoming a data scientist.” But she was also “always interested in why people behave in certain ways and how this influences macro trends.”

    As an undergraduate at Tufts University, she became interested in international affairs. But it was her 2013 fieldwork studying the women-only carriages of the metro system in Delhi, India, that proved formative. “I interviewed women for a month, talking to them about how these cars enabled them to participate in public life,” she recalls. Another project involving informal transportation routes in Cape Town, South Africa, immersed her more deeply in the questions of people’s experience of public space. “I left college thinking about mobility and public space, and I discovered how much I love geographic information systems,” she says.

    A gig with the Commonwealth of Massachusetts to improve the 911 emergency service — updating and cleaning geolocations of addresses using Google Street View — further piqued her interest. “The job was tedious, but I realized you can really understand a place, and how people move around, from these images.” Bollen began thinking about a career in urban planning.

    Then a two-year stint as a researcher at MIT GOV/LAB brought Bollen firmly into the political science fold. Working with Lily Tsai, the Ford Professor of Political Science, on civil society partnerships in the developing world, Bollen realized that “political science wasn’t what I thought it was,” she says. “You could bring psychology, economics, and sociology into thinking about politics.” Her decision to join the doctoral program was simple: “I knew and loved the people I was with at MIT.”

    Bollen has not regretted that decision. “All the things I’ve been interested in are finally coming together in my dissertation,” she says. Due to the pandemic, questions involving space, mobility, and contact became sharper to her. “I shifted my research emphasis from asking people about inter-ethnic differences and inequality through surveys, to using contact and context information to measure these variables.”

    She sees a number of applications for her work, including working with civil society organizations in communities touched by ethnic or other frictions “to rethink what we know about contact, challenging some of the classic things we think we know.”

    As she moves into the final phases of her dissertation, which she hopes to publish as a book, Bollen also relishes teaching comparative politics to undergraduates. “There’s something so fun engaging with them, and making their arguments stronger,” she says. With the long process of earning a PhD, this helps her “enjoy what she is doing every single day.”

  • MIT announces five flagship projects in first-ever Climate Grand Challenges competition

    MIT today announced the five flagship projects selected in its first-ever Climate Grand Challenges competition. These multiyear projects will define a dynamic research agenda focused on unraveling some of the toughest unsolved climate problems and bringing high-impact, science-based solutions to the world on an accelerated basis.

    Representing the most promising concepts to emerge from the two-year competition, the five flagship projects will receive additional funding and resources from MIT and others to develop their ideas and swiftly transform them into practical solutions at scale.

    “Climate Grand Challenges represents a whole-of-MIT drive to develop game-changing advances to confront the escalating climate crisis, in time to make a difference,” says MIT President L. Rafael Reif. “We are inspired by the creativity and boldness of the flagship ideas and by their potential to make a significant contribution to the global climate response. But given the planet-wide scale of the challenge, success depends on partnership. We are eager to work with visionary leaders in every sector to accelerate this impact-oriented research, implement serious solutions at scale, and inspire others to join us in confronting this urgent challenge for humankind.”

    Brief descriptions of the five Climate Grand Challenges flagship projects are provided below.

    Bringing Computation to the Climate Challenge

    This project leverages advances in artificial intelligence, machine learning, and data sciences to improve the accuracy of climate models and make them more useful to a variety of stakeholders — from communities to industry. The team is developing a digital twin of the Earth that harnesses more data than ever before to reduce and quantify uncertainties in climate projections.

    Research leads: Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in the Department of Earth, Atmospheric and Planetary Sciences, and director of the Program in Atmospheres, Oceans, and Climate; and Noelle Eckley Selin, director of the Technology and Policy Program and professor with a joint appointment in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences

    Center for Electrification and Decarbonization of Industry

    This project seeks to reinvent and electrify the processes and materials behind hard-to-decarbonize industries like steel, cement, ammonia, and ethylene production. A new innovation hub will perform targeted fundamental research and engineering with urgency, pushing the technological envelope on electricity-driven chemical transformations.

    Research leads: Yet-Ming Chiang, the Kyocera Professor of Materials Science and Engineering, and Bilge Yıldız, the Breene M. Kerr Professor in the Department of Nuclear Science and Engineering and professor in the Department of Materials Science and Engineering

    Preparing for a new world of weather and climate extremes

    This project addresses key gaps in knowledge about intensifying extreme events such as floods, hurricanes, and heat waves, and quantifies their long-term risk in a changing climate. The team is developing a scalable climate-change adaptation toolkit to help vulnerable communities and low-carbon energy providers prepare for these extreme weather events.

    Research leads: Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science in the Department of Earth, Atmospheric and Planetary Sciences and co-director of the MIT Lorenz Center; Miho Mazereeuw, associate professor of architecture and urbanism in the Department of Architecture and director of the Urban Risk Lab; and Paul O’Gorman, professor in the Program in Atmospheres, Oceans, and Climate in the Department of Earth, Atmospheric and Planetary Sciences

    The Climate Resilience Early Warning System

    The CREWSnet project seeks to reinvent climate change adaptation with a novel forecasting system that empowers underserved communities to interpret local climate risk, proactively plan for their futures incorporating resilience strategies, and minimize losses. CREWSnet will initially be demonstrated in southwestern Bangladesh, serving as a model for similarly threatened regions around the world.

    Research leads: John Aldridge, assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory, and Elfatih Eltahir, the H.M. King Bhumibol Professor of Hydrology and Climate in the Department of Civil and Environmental Engineering

    Revolutionizing agriculture with low-emissions, resilient crops

    This project works to revolutionize the agricultural sector with climate-resilient crops and fertilizers that have the ability to dramatically reduce greenhouse gas emissions from food production.

    Research lead: Christopher Voigt, the Daniel I.C. Wang Professor in the Department of Biological Engineering

    “As one of the world’s leading institutions of research and innovation, it is incumbent upon MIT to draw on our depth of knowledge, ingenuity, and ambition to tackle the hard climate problems now confronting the world,” says Richard Lester, MIT associate provost for international activities. “Together with collaborators across industry, finance, community, and government, the Climate Grand Challenges teams are looking to develop and implement high-impact, path-breaking climate solutions rapidly and at a grand scale.”

    The initial call for ideas in 2020 yielded nearly 100 letters of interest from almost 400 faculty members and senior researchers, representing 90 percent of MIT departments. After an extensive evaluation, 27 finalist teams received a total of $2.7 million to develop comprehensive research and innovation plans. The projects address four broad research themes.

    To select the winning projects, research plans were reviewed by panels of international experts representing relevant scientific and technical domains as well as experts in processes and policies for innovation and scalability.

    “In response to climate change, the world really needs to do two things quickly: deploy the solutions we already have much more widely, and develop new solutions that are urgently needed to tackle this intensifying threat,” says Maria Zuber, MIT vice president for research. “These five flagship projects exemplify MIT’s strong determination to bring its knowledge and expertise to bear in generating new ideas and solutions that will help solve the climate problem.”

    “The Climate Grand Challenges flagship projects set a new standard for inclusive climate solutions that can be adapted and implemented across the globe,” says MIT Chancellor Melissa Nobles. “This competition propels the entire MIT research community — faculty, students, postdocs, and staff — to act with urgency around a worsening climate crisis, and I look forward to seeing the difference these projects can make.”

    “MIT’s efforts on climate research amid the climate crisis was a primary reason that I chose to attend MIT, and remains a reason that I view the Institute favorably. MIT has a clear opportunity to be a thought leader in the climate space in our own MIT way, which is why CGC fits in so well,” says senior Megan Xu, who served on the Climate Grand Challenges student committee and is studying ways to make the food system more sustainable.

    The Climate Grand Challenges competition is a key initiative of “Fast Forward: MIT’s Climate Action Plan for the Decade,” which the Institute published in May 2021. Fast Forward outlines MIT’s comprehensive plan for helping the world address the climate crisis. It consists of five broad areas of action: sparking innovation, educating future generations, informing and leveraging government action, reducing MIT’s own climate impact, and uniting and coordinating all of MIT’s climate efforts.