More stories

  • Frequent encounters build familiarity

    Do better spatial networks make for better neighbors? There is evidence that they do, according to Paige Bollen, a sixth-year political science graduate student at MIT. The networks Bollen works with are not virtual but physical, part of the built environment in which we are all embedded. Her research on urban spaces suggests that the routes bringing people together or keeping them apart factor significantly in whether individuals see each other as friend or foe.

    “We all live in networks of streets, and come across different types of people,” says Bollen. “Just passing by others provides information that informs our political and social views of the world.” In her doctoral research, Bollen is revealing how physical context helps determine whether such ordinary encounters engender suspicion or even hostility, or instead lead to cooperation and tolerance.

    Through her in-depth studies mapping the movement of people in urban communities in Ghana and South Africa, Bollen is demonstrating that even in diverse communities, “when people repeatedly come into contact, even if that contact is casual, they can build understanding that can lead to cooperation and positive outcomes,” she says. “My argument is that frequent, casual contact, facilitated by street networks, can make people feel more comfortable with those unlike themselves.”

    Mapping urban networks

    Bollen’s case for the benefits of casual contact emerged from her pursuit of several related questions: Why do people in urban areas who regard other ethnic groups with prejudice and economic envy nevertheless manage to collaborate for a collective good? How do you reduce fears that arise from differences? How do the configuration of space and the built environment influence contact patterns among people?

    While other social science research suggests that ties in ethnically mixed urban communities tend to be weak, and that casual contact exacerbates hostility, Bollen noted that there were plenty of examples of “cooperation across ethnic divisions in ethnically mixed communities.” She absorbed the work of psychologist Stanley Milgram, whose 1972 research showed that strangers seen frequently in certain places become familiar — less anonymous or threatening. So she set out to understand precisely how “the built environment of a neighborhood interacts with its demography to create distinct patterns of contact between social groups.”

    With the support of MIT Global Diversity Lab and MIT GOV/LAB, Bollen set out to develop measures of intergroup contact in cities in Ghana and South Africa. She uses street network data to predict contact patterns based on features of the built environment and then combines these measures with mobility data on peoples’ actual movement.

    “I created a huge dataset for every intersection in these cities, to determine the central nodes where many people are passing through,” she says. She combined these datasets with census data to determine which social groups were most likely to use specific intersections based on their position in a particular street network. She mapped these measures of casual contact to outcomes, such as inter-ethnic cooperation in Ghana and voting behavior in South Africa.
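
    How such intersection-level measures can be built is easy to picture with standard network tools. The snippet below is a minimal, hypothetical sketch (not Bollen’s actual pipeline), assuming the open-source OSMnx and NetworkX Python libraries and a placeholder city query; it ranks intersections by betweenness centrality as a rough proxy for how many routes pass through them.

    ```python
    # Minimal sketch, not Bollen's pipeline: rank street intersections by how
    # often they lie on shortest paths between other locations.
    import osmnx as ox
    import networkx as nx

    # Placeholder query; the real study works with specific neighborhoods.
    G = ox.graph_from_place("Accra, Ghana", network_type="walk")

    # Approximate betweenness centrality (sampled over k source nodes) on an
    # undirected copy of the graph, weighting paths by street length.
    centrality = nx.betweenness_centrality(nx.Graph(G), k=500, weight="length")

    # The most central intersections are candidate sites of frequent casual
    # contact; these scores could then be joined with census or mobility data.
    top_intersections = sorted(centrality, key=centrality.get, reverse=True)[:100]
    ```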

    “My analysis [in Ghana] showed that in areas that are more ethnically heterogeneous and where there are more people passing through intersections, we find more interconnections among people and more cooperation within communities in community development efforts,” she says.

    In a related survey experiment conducted on Facebook with 1,200 subjects, Bollen asked Accra residents if they would help an unknown non-co-ethnic in need with a financial gift. She found that the likelihood of offering such help was strongly linked to the frequency of interactions. “Helping behavior occurred when the subjects believed they would see this person again, even when they did not know the person in need well,” says Bollen. “They figured if they helped, they could count on this person’s reciprocity in the future.”

    For Bollen, this was “a powerful gut check” for her hypothesis that “frequency builds familiarity, because frequency provides information and drives expectations, which means it can reduce uncertainty and fear of the other.”

    In research underway in South Africa, a nation increasingly dealing with anti-immigrant violence, Bollen is investigating whether frequency of contact reduces prejudice against foreigners. Using her detailed street maps, 1.1 billion unique geolocated cellphone pings, and election data, she finds that frequent contact opportunities with immigrants are associated with lower support for anti-immigrant parties at the polls.

    Passion for places and spaces

    Bollen never anticipated becoming a political scientist. The daughter of two academics, she was “bent on becoming a data scientist.” But she was also “always interested in why people behave in certain ways and how this influences macro trends.”

    As an undergraduate at Tufts University, she became interested in international affairs. But it was her 2013 fieldwork studying women-only carriages on the metro system in Delhi, India, that proved formative. “I interviewed women for a month, talking to them about how these cars enabled them to participate in public life,” she recalls. Another project involving informal transportation routes in Cape Town, South Africa, immersed her more deeply in the questions of people’s experience of public space. “I left college thinking about mobility and public space, and I discovered how much I love geographic information systems,” she says.

    A gig with the Commonwealth of Massachusetts to improve the 911 emergency service — updating and cleaning geolocations of addresses using Google Street View — further piqued her interest. “The job was tedious, but I realized you can really understand a place, and how people move around, from these images.” Bollen began thinking about a career in urban planning.

    Then a two-year stint as a researcher at MIT GOV/LAB brought Bollen firmly into the political science fold. Working with Lily Tsai, the Ford Professor of Political Science, on civil society partnerships in the developing world, Bollen realized that “political science wasn’t what I thought it was,” she says. “You could bring psychology, economics, and sociology into thinking about politics.” Her decision to join the doctoral program was simple: “I knew and loved the people I was with at MIT.”

    Bollen has not regretted that decision. “All the things I’ve been interested in are finally coming together in my dissertation,” she says. During the pandemic, questions involving space, mobility, and contact came into sharper focus for her. “I shifted my research emphasis from asking people about inter-ethnic differences and inequality through surveys, to using contact and context information to measure these variables.”

    She sees a number of applications for her research, including working with civil society organizations in communities touched by ethnic or other frictions “to rethink what we know about contact, challenging some of the classic things we think we know.”

    As she moves into the final phases of her dissertation, which she hopes to publish as a book, Bollen also relishes teaching comparative politics to undergraduates. “There’s something so fun engaging with them, and making their arguments stronger,” she says. Amid the long process of earning a PhD, teaching helps her enjoy what she is doing every single day.

  • Systems scientists find clues to why false news snowballs on social media

    The spread of misinformation on social media is a pressing societal problem that tech companies and policymakers continue to grapple with, yet those who study this issue still don’t have a deep understanding of why and how false news spreads.

    To shed some light on this murky topic, researchers at MIT developed a theoretical model of a Twitter-like social network to study how news is shared and explore situations where a non-credible news item will spread more widely than the truth. Agents in the model are driven by a desire to persuade others to take on their point of view: The key assumption in the model is that people bother to share something with their followers if they think it is persuasive and likely to move others closer to their mindset. Otherwise they won’t share.

    The researchers found that in such a setting, when a network is highly connected or the views of its members are sharply polarized, news that is likely to be false will spread more widely and travel deeper into the network than news with higher credibility.

    This theoretical work could inform empirical studies of the relationship between news credibility and the size of its spread, which might help social media companies adapt networks to limit the spread of false information.

    “We show that, even if people are rational in how they decide to share the news, this could still lead to the amplification of information with low credibility. With this persuasion motive, no matter how extreme my beliefs are — given that the more extreme they are the more I gain by moving others’ opinions — there is always someone who would amplify [the information],” says senior author Ali Jadbabaie, professor and head of the Department of Civil and Environmental Engineering and a core faculty member of the Institute for Data, Systems, and Society (IDSS) and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).

    Joining Jadbabaie on the paper are first author Chin-Chia Hsu, a graduate student in the Social and Engineering Systems program in IDSS, and Amir Ajorlou, a LIDS research scientist. The research will be presented this week at the IEEE Conference on Decision and Control.

    Pondering persuasion

    This research draws on a 2018 study by Sinan Aral, the David Austin Professor of Management at the MIT Sloan School of Management; Deb Roy, an associate professor of media arts and sciences at the Media Lab; and former postdoc Soroush Vosoughi (now an assistant professor of computer science at Dartmouth College). Their empirical study of data from Twitter found that false news spreads wider, faster, and deeper than real news.

    Jadbabaie and his collaborators wanted to drill down on why this occurs.

    They hypothesized that persuasion might be a strong motive for sharing news — perhaps agents in the network want to persuade others to take on their point of view — and decided to build a theoretical model that would let them explore this possibility.

    In their model, agents have some prior belief about a policy, and their goal is to persuade followers to move their beliefs closer to the agent’s side of the spectrum.

    A news item is initially released to a small, random subgroup of agents, which must decide whether to share this news with their followers. An agent weighs the newsworthiness of the item and its credibility, and updates its belief based on how surprising or convincing the news is. 

    “They will make a cost-benefit analysis to see if, on average, this piece of news will move people closer to what they think or move them away. And we include a nominal cost for sharing. For instance, taking some action, if you are scrolling on social media, you have to stop to do that. Think of that as a cost. Or a reputation cost might come if I share something that is embarrassing. Everyone has this cost, so the more extreme and the more interesting the news is, the more you want to share it,” Jadbabaie says.

    If the news affirms the agent’s perspective and has persuasive power that outweighs the nominal cost, the agent will always share the news. But if an agent thinks the news item is something others may have already seen, the agent is disincentivized to share it.

    Since an agent’s willingness to share news is a product of its perspective and how persuasive the news is, the more extreme an agent’s perspective or the more surprising the news, the more likely the agent will share it.
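
    As a rough illustration of this decision rule (a sketch under our own simplifying assumptions, not the paper’s formal model), an agent can be modeled as sharing an item only when the expected pull on followers’ beliefs toward its own position exceeds a nominal sharing cost.

    ```python
    # Illustrative sketch only, not the authors' model: an agent shares when the
    # expected movement of followers toward its own position beats a fixed cost.
    from dataclasses import dataclass

    @dataclass
    class NewsItem:
        claim: float        # position the item argues for, on a [-1, 1] spectrum
        credibility: float  # perceived probability the item is true, in [0, 1]

    def expected_shift(follower_belief: float, item: NewsItem) -> float:
        # Followers move toward the claim in proportion to its credibility and to
        # how far it sits from what they already believe (its surprise).
        return item.credibility * (item.claim - follower_belief)

    def will_share(agent_belief: float, follower_belief: float,
                   item: NewsItem, cost: float = 0.05) -> bool:
        shifted = follower_belief + expected_shift(follower_belief, item)
        # Benefit: how much closer the follower ends up to the agent's position.
        benefit = abs(agent_belief - follower_belief) - abs(agent_belief - shifted)
        return benefit > cost

    # A surprising, low-credibility item is worth sharing to an extreme agent but
    # not to a moderate one, matching the amplification argument above.
    item = NewsItem(claim=0.8, credibility=0.3)
    print(will_share(agent_belief=0.9, follower_belief=0.0, item=item))  # True
    print(will_share(agent_belief=0.1, follower_belief=0.0, item=item))  # False
    ```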

    The researchers used this model to study how information spreads during a news cascade, which is an unbroken sharing chain that rapidly permeates the network.

    Connectivity and polarization

    The team found that when a network has high connectivity and the news is surprising, the credibility threshold for starting a news cascade is lower. High connectivity means that there are multiple connections between many users in the network.

    Likewise, when the network is largely polarized, there are plenty of agents with extreme views who want to share the news item, starting a news cascade. In both these instances, news with low credibility creates the largest cascades.

    “For any piece of news, there is a natural network speed limit, a range of connectivity, that facilitates good transmission of information where the size of the cascade is maximized by true news. But if you exceed that speed limit, you will get into situations where inaccurate news or news with low credibility has a larger cascade size,” Jadbabaie says.

    If the views of users in the network become more diverse, it is less likely that a poorly credible piece of news will spread more widely than the truth.

    Jadbabaie and his colleagues designed the agents in the network to behave rationally, so the model would better capture actions real humans might take if they want to persuade others.

    “Someone might say that is not why people share, and that is valid. Why people do certain things is a subject of intense debate in cognitive science, social psychology, neuroscience, economics, and political science,” he says. “Depending on your assumptions, you end up getting different results. But I feel like this assumption of persuasion being the motive is a natural assumption.”

    Their model also shows how costs can be manipulated to reduce the spread of false information. Agents make a cost-benefit analysis and won’t share news if the cost to do so outweighs the benefit of sharing.

    “We don’t make any policy prescriptions, but one thing this work suggests is that, perhaps, having some cost associated with sharing news is not a bad idea. The reason you get lots of these cascades is because the cost of sharing the news is actually very low,” he says.

    This work was supported by an Army Research Office Multidisciplinary University Research Initiative grant and a Vannevar Bush Fellowship from the Office of the Secretary of Defense.

  • Lincoln Laboratory convenes top network scientists for Graph Exploitation Symposium

    As the Covid-19 pandemic has shown, we live in a richly connected world that facilitates the efficient spread not only of a virus but also of information and influence. What can we learn by analyzing these connections? This is a core question of network science, a field of research that models interactions across physical, biological, social, and information systems to solve problems.

    The 2021 Graph Exploitation Symposium (GraphEx), hosted by MIT Lincoln Laboratory, brought together top network science researchers to share the latest advances and applications in the field.

    “We explore and identify how exploitation of graph data can offer key technology enablers to solve the most pressing problems our nation faces today,” says Edward Kao, a symposium organizer and member of the technical staff in Lincoln Laboratory’s AI Software Architectures and Algorithms Group.

    The themes of the virtual event revolved around some of the year’s most relevant issues, such as analyzing disinformation on social media, modeling the pandemic’s spread, and using graph-based machine learning models to speed drug design.

    “The special sessions on influence operations and Covid-19 at GraphEx reflect the relevance of network and graph-based analysis for understanding the phenomenology of these complicated and impactful aspects of modern-day life, and also may suggest paths forward as we learn more and more about graph manipulation,” says William Streilein, who co-chaired the event with Rajmonda Caceres, both of Lincoln Laboratory.

    Social networks

    Several presentations at the symposium focused on the role of network science in analyzing influence operations (IO), or organized attempts by state and/or non-state actors to spread disinformation narratives.  

    Lincoln Laboratory researchers have been developing tools to classify and quantify the influence of social media accounts that are likely IO accounts, such as those willfully spreading false Covid-19 treatments to vulnerable populations.

    “A cluster of IO accounts acts as an echo chamber to amplify the narrative. The vulnerable population is then engaging in these narratives,” says Erika Mackin, a researcher developing the tool, called RIO or Reconnaissance of Influence Operations.

    To classify IO accounts, Mackin and her team trained an algorithm to detect probable IO accounts in Twitter networks based on a specific hashtag or narrative. One example they studied was #MacronLeaks, a disinformation campaign targeting Emmanuel Macron during the 2017 French presidential election. The algorithm is trained to label accounts within this network as IO accounts on the basis of several factors, such as the number of interactions with foreign news accounts, the number of links tweeted, or the number of languages used. Their model then uses a statistical approach to score an account’s level of influence in spreading the narrative within that network.
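
    A stripped-down version of this kind of feature-based account scoring (a hypothetical sketch with made-up features and data, not the Laboratory’s RIO implementation) might look like the following, using scikit-learn:

    ```python
    # Hypothetical sketch, not RIO: score accounts with a simple classifier over
    # per-account features of the kind described above.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier

    # Made-up features for a handful of accounts in one hashtag network.
    accounts = pd.DataFrame({
        "foreign_news_interactions": [42, 3, 0, 57],
        "links_tweeted":             [310, 12, 5, 280],
        "languages_used":            [4, 1, 1, 3],
    })
    labels = [1, 0, 0, 1]  # 1 = labeled as a likely influence-operation account

    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(accounts, labels)

    # Estimated probability that a previously unseen account is an IO account.
    new_account = pd.DataFrame({"foreign_news_interactions": [20],
                                "links_tweeted": [150],
                                "languages_used": [2]})
    print(clf.predict_proba(new_account)[0, 1])
    ```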

    The team has found that their classifier outperforms existing detectors of IO accounts, because it can identify both bot accounts and human-operated ones. They’ve also discovered that the IO accounts that pushed the 2017 French election disinformation narrative largely overlap with accounts that are influential in spreading Covid-19 disinformation today. “This suggests that these accounts will continue to transition to disinformation narratives,” Mackin says.

    Pandemic modeling

    Throughout the Covid-19 pandemic, leaders have been looking to epidemiological models, which predict how disease will spread, to make sound decisions. Alessandro Vespignani, director of the Network Science Institute at Northeastern University, has been leading Covid-19 modeling efforts in the United States, and shared a keynote on this work at the symposium.

    Besides taking into account the biological facts of the disease, such as its incubation period, Vespignani’s model is especially powerful in its inclusion of community behavior. To run realistic simulations of disease spread, he develops “synthetic populations” that are built by using publicly available, highly detailed datasets about U.S. households. “We create a population that is not real, but is statistically real, and generate a map of the interactions of those individuals,” he says. This information feeds back into the model to predict the spread of the disease. 
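
    A toy version of the idea (a deliberately simplified sketch, nowhere near the fidelity of Vespignani’s synthetic populations) is to run a stochastic infection process over a synthetic contact network:

    ```python
    # Toy sketch: SIR-style spread over a synthetic contact network that stands in
    # for the household and community interactions of a synthetic population.
    import random
    import networkx as nx

    random.seed(0)
    G = nx.watts_strogatz_graph(n=10_000, k=8, p=0.1)  # stand-in contact network

    status = {node: "S" for node in G}   # S = susceptible, I = infected, R = recovered
    day_infected = {}
    seed = random.randrange(10_000)
    status[seed], day_infected[seed] = "I", 0

    beta, infectious_days = 0.03, 7      # per-contact transmission prob., illness length
    for day in range(1, 121):
        for node in [n for n, s in status.items() if s == "I"]:
            if day - day_infected[node] > infectious_days:
                status[node] = "R"
                continue
            for neighbor in G[node]:
                if status[neighbor] == "S" and random.random() < beta:
                    status[neighbor], day_infected[neighbor] = "I", day

    print(sum(s != "S" for s in status.values()), "individuals ever infected")
    ```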

    Today, Vespignani is considering how to integrate genomic analysis of the virus into this kind of population modeling in order to understand how variants are spreading. “It’s still a work in progress that is extremely interesting,” he says, adding that this approach has been useful in modeling the dispersal of the Delta variant of SARS-CoV-2. 

    As researchers model the virus’ spread, Lucas Laird at Lincoln Laboratory is considering how network science can be used to design effective control strategies. He and his team are developing a model for customizing strategies for different geographic regions. The effort was spurred by the differences in Covid-19 spread across U.S. communities, and what the researchers found to be a gap in intervention modeling to address those differences.

    As examples, they applied their planning algorithm to three counties in Florida, Massachusetts, and California. Taking into account the characteristics of a specific geographic center, such as the number of susceptible individuals and number of infections there, their planner institutes different strategies in those communities throughout the outbreak duration.

    “Our approach eradicates disease in 100 days, but it also is able to do it with much more targeted interventions than any of the global interventions. In other words, you don’t have to shut down a full country,” Laird says. He adds that their planner offers a “sandbox environment” for exploring intervention strategies in the future.

    Machine learning with graphs

    Graph-based machine learning is receiving increasing attention for its potential to “learn” the complex relationships within graph-structured data, and thus extract new insights or predictions about these relationships. This interest has given rise to a new class of algorithms called graph neural networks. Today, graph neural networks are being applied in areas such as drug discovery and material design, with promising results.

    “We can now apply deep learning much more broadly, not only to medical images and biological sequences. This creates new opportunities in data-rich biology and medicine,” says Marinka Zitnik, an assistant professor at Harvard University who presented her research at GraphEx.

    Zitnik’s research focuses on the rich networks of interactions between proteins, drugs, disease, and patients, at the scale of billions of interactions. One application of this research is discovering drugs to treat diseases with no or few approved drug treatments, such as for Covid-19. In April, Zitnik’s team published a paper on their research that used graph neural networks to rank 6,340 drugs for their expected efficacy against SARS-CoV-2, identifying four that could be repurposed to treat Covid-19.

    At Lincoln Laboratory, researchers are similarly applying graph neural networks to the challenge of designing advanced materials, such as those that can withstand extreme radiation or capture carbon dioxide. Like the process of designing drugs, the trial-and-error approach to materials design is time-consuming and costly. The laboratory’s team is developing graph neural networks that can learn relationships between a material’s crystalline structure and its properties. This network can then be used to predict a variety of properties from any new crystal structure, greatly speeding up the process of screening materials with desired properties for specific applications.
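
    For readers curious what such a model looks like in code, here is a minimal message-passing sketch in PyTorch (a toy example of the general technique, not the Laboratory’s or Zitnik’s models): atoms are nodes, bonds are edges, and the network pools node states into a single predicted property.

    ```python
    # Toy graph neural network: predict one scalar property of a crystal from a
    # graph whose nodes are atoms and whose edges are bonds.
    import torch
    import torch.nn as nn

    class TinyGNN(nn.Module):
        def __init__(self, node_dim=16, hidden=32):
            super().__init__()
            self.msg = nn.Linear(2 * node_dim, node_dim)
            self.readout = nn.Sequential(nn.Linear(node_dim, hidden),
                                         nn.ReLU(), nn.Linear(hidden, 1))

        def forward(self, x, edge_index):
            # x: [num_atoms, node_dim] features; edge_index: [2, num_edges] bonds
            src, dst = edge_index
            messages = torch.relu(self.msg(torch.cat([x[src], x[dst]], dim=-1)))
            # Sum incoming messages onto each atom, then average over the crystal.
            agg = torch.zeros_like(x).index_add_(0, dst, messages)
            return self.readout((x + agg).mean(dim=0))

    # Hypothetical crystal: 5 atoms with random features, 6 directed bond entries.
    x = torch.randn(5, 16)
    edge_index = torch.tensor([[0, 1, 1, 2, 3, 4],
                               [1, 0, 2, 1, 4, 3]])
    print(TinyGNN()(x, edge_index))  # untrained prediction of the target property
    ```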

    “Graph representation learning has emerged as a rich and thriving research area for incorporating inductive bias and structured priors during the machine learning process, with broad applications such as drug design, accelerated scientific discovery, and personalized recommendation systems,” Caceres says. 

    A vibrant community

    Lincoln Laboratory has hosted the GraphEx Symposium annually since 2010, with the exception of last year’s cancellation due to Covid-19. “One key takeaway is that despite the postponement from last year and the need to be virtual, the GraphEx community is as vibrant and active as it’s ever been,” Streilein says. “Network-based analysis continues to expand its reach and is applied to ever-more important areas of science, society, and defense with increasing impact.”

    In addition to those from Lincoln Laboratory, technical committee members and co-chairs of the GraphEx Symposium included researchers from Harvard University, Arizona State University, Stanford University, Smith College, Duke University, the U.S. Department of Defense, and Sandia National Laboratories.