More stories

  • Exploring new methods for increasing safety and reliability of autonomous vehicles

    When we think of getting on the road in our cars, our first thoughts may not be that fellow drivers are particularly safe or careful — but human drivers are more reliable than one may expect. For each fatal car crash in the United States, motor vehicles log a whopping hundred million miles on the road.

    Human reliability also plays a role in how autonomous vehicles are integrated in the traffic system, especially around safety considerations. Human drivers continue to surpass autonomous vehicles in their ability to make quick decisions and perceive complex environments: Autonomous vehicles are known to struggle with seemingly common tasks, such as taking on- or off-ramps, or turning left in the face of oncoming traffic. Despite these enormous challenges, embracing autonomous vehicles in the future could yield great benefits, like clearing congested highways; enhancing freedom and mobility for non-drivers; and boosting driving efficiency, an important piece in fighting climate change.

    MIT engineer Cathy Wu envisions ways that autonomous vehicles could be deployed with their current shortcomings, without experiencing a dip in safety. “I started thinking more about the bottlenecks. It’s very clear that the main barrier to deployment of autonomous vehicles is safety and reliability,” Wu says.

    One path forward may be to introduce a hybrid system, in which autonomous vehicles handle easier scenarios on their own, like cruising on the highway, while transferring more complicated maneuvers to remote human operators. Wu, who is a member of the Laboratory for Information and Decision Systems (LIDS), a Gilbert W. Winslow Assistant Professor of Civil and Environmental Engineering (CEE) and a member of the MIT Institute for Data, Systems, and Society (IDSS), likens this approach to air traffic controllers on the ground directing commercial aircraft.

    In a paper published April 12 in IEEE Transactions on Robotics, Wu and co-authors Cameron Hickert and Sirui Li (both graduate students at LIDS) introduced a framework for how remote human supervision could be scaled to make a hybrid system efficient without compromising passenger safety. They noted that if autonomous vehicles were able to coordinate with each other on the road, they could reduce the number of moments in which humans needed to intervene.

    Humans and cars: finding a balance that’s just right

    For the project, Wu, Hickert, and Li sought to tackle a maneuver that autonomous vehicles often struggle to complete. They decided to focus on merging, specifically when vehicles use an on-ramp to enter a highway. In real life, merging cars must accelerate or slow down in order to avoid crashing into cars already on the road. In this scenario, if an autonomous vehicle was about to merge into traffic, remote human supervisors could momentarily take control of the vehicle to ensure a safe merge. In order to evaluate the efficiency of such a system, particularly while guaranteeing safety, the team specified the maximum amount of time each human supervisor would be expected to spend on a single merge. They were interested in understanding whether a small number of remote human supervisors could successfully manage a larger group of autonomous vehicles, and the extent to which this human-to-car ratio could be improved while still safely covering every merge.

    With more autonomous vehicles in use, one might assume a need for more remote supervisors. But in scenarios where autonomous vehicles coordinated with each other, the team found that cars could significantly reduce the number of times humans needed to step in. For example, a coordinating autonomous vehicle already on a highway could adjust its speed to make room for a merging car, eliminating a risky merging situation altogether.

    The team substantiated the potential to safely scale remote supervision in two theorems. First, using a mathematical framework known as queuing theory, the researchers formulated an expression to capture the probability of a given number of supervisors failing to handle all merges pooled together from multiple cars. This way, the researchers were able to assess how many remote supervisors would be needed in order to cover every potential merge conflict, depending on the number of autonomous vehicles in use. The researchers derived a second theorem to quantify how cooperative autonomous vehicles in the surrounding traffic, by assisting cars attempting to merge, boost the reliability of the overall system.

    When the team modeled a scenario in which 30 percent of cars on the road were cooperative autonomous vehicles, they estimated that a ratio of one human supervisor to every 47 autonomous vehicles could cover 99.9999 percent of merging cases. But this level of coverage dropped below 99 percent, an unacceptable range, in scenarios where the autonomous vehicles did not cooperate with each other.
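    The paper’s exact theorems are not reproduced here, but a rough sense of the queuing-theory calculation can be conveyed with a standard M/M/c model: merge requests arrive at random, each occupies one remote supervisor for a bounded time, and the Erlang-C formula gives the probability that an arriving request finds every supervisor busy. The Python sketch below uses assumed arrival and service rates, not figures from the study.

        import math

        def prob_all_busy(arrival_rate, service_rate, num_supervisors):
            """Erlang-C probability that an arriving merge request finds every
            remote supervisor busy (M/M/c queue; an assumed model, not the paper's)."""
            load = arrival_rate / service_rate          # offered load in erlangs
            c = num_supervisors
            if load >= c:
                return 1.0                              # overloaded: requests pile up without bound
            poisson_terms = sum(load**k / math.factorial(k) for k in range(c))
            queue_term = (load**c / math.factorial(c)) * (c / (c - load))
            return queue_term / (poisson_terms + queue_term)

        # Hypothetical numbers: 47 autonomous vehicles, each needing one supervised
        # merge per hour on average, and 30 seconds of supervisor attention per merge.
        arrivals_per_sec = 47 / 3600
        service_per_sec = 1 / 30
        print(prob_all_busy(arrivals_per_sec, service_per_sec, num_supervisors=1))

    When cooperative vehicles smooth out risky merges before they happen, the arrival rate of supervision requests itself drops, which is why coordination improves the supervisor-to-vehicle ratio so sharply.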

    “If vehicles were to coordinate and basically prevent the need for supervision, that’s actually the best way to improve reliability,” Wu says.

    Cruising toward the future

    The team decided to focus on merging not only because it’s a challenge for autonomous vehicles, but also because it’s a well-defined task associated with a less-daunting scenario: driving on the highway. About half of the total miles traveled in the United States occur on interstates and other freeways. Since highways allow higher speeds than city roads, Wu says, “If you can fully automate highway driving … you give people back about a third of their driving time.”

    If it became feasible for autonomous vehicles to cruise unsupervised for most highway driving, the challenge of safely navigating complex or unexpected moments would remain. For instance, “you [would] need to be able to handle the start and end of the highway driving,” Wu says. You would also need to be able to manage times when passengers zone out or fall asleep, making them unable to quickly take over the controls should they be needed. But if remote human supervisors could guide autonomous vehicles at key moments, passengers may never have to touch the wheel. Besides merging, other challenging situations on the highway include changing lanes and overtaking slower cars on the road.

    Although remote supervision and coordinated autonomous vehicles are hypotheticals for high-speed operations, and not currently in use, Wu hopes that thinking about these topics can encourage growth in the field.

    “This gives us some more confidence that the autonomous driving experience can happen,” Wu says. “I think we need to be more creative about what we mean by ‘autonomous vehicles.’ We want to give people back their time — safely. We want the benefits, we don’t strictly want something that drives autonomously.”

  • Architectural heritage like you haven’t seen it before

    The shrine of Khwaja Abu Nasr Parsa is a spectacular mosque in Balkh, Afghanistan. Also known as the “Green Mosque” due to the brilliant color of its tiled and painted dome, the intricately decorated building dates to the 16th century.

    If it were more accessible, the Green Mosque would attract many visitors. But Balkh is located in northern Afghanistan, roughly 50 miles from the border with Uzbekistan, and few outsiders will ever reach it. Still, anyone can now get a vivid sense of the mosque thanks to MIT’s new “Ways of Seeing” project, an innovative form of historic preservation.

    PhD student Nikolaos Vlavianos created extended reality sequences for the “Ways of Seeing” project.

    “Ways of Seeing” uses multiple modes of imagery to produce a rich visual record of four historic building sites in Afghanistan — including colorful 3D still images, virtual reality imagery that takes viewers around and in some cases inside the structures, and exquisite hand-drawn architectural renderings of the buildings. The project’s imagery will be made available for viewing through the MIT Libraries by the end of June, with open access for the public. A subset of curated project materials will also be available through Archnet, an open access resource on the built environment of Muslim societies, which is a collaboration between the Aga Khan Documentation Center of the MIT Libraries and the Aga Khan Trust for Culture.

    “After the U.S. withdrawal from Afghanistan in August 2021, Associate Provost Richard Lester convened a set of MIT faculty in a working group to think of what we as a community of scholars could be doing that would be meaningful to people in Afghanistan at this point in time,” says Fotini Christia, an MIT political science professor who led the project. “‘Ways of Seeing’ is a project that I conceived after discussions with that group of colleagues and which is truly in the MIT tradition: It combines field data, technology, and art to protect heritage and serve the world.”

    Christia, the Ford International Professor of the Social Sciences and director of the Sociotechnical Systems Research Center at the MIT Schwarzman College of Computing, has worked extensively in Afghanistan conducting field research about civil society. She viewed this project as a unique opportunity to construct a detailed, accessible record of remarkable heritage sites — through sophisticated digital elements as well as finely wrought ink drawings.

    “The idea is these drawings would inspire interest and pride in this heritage, a kind of amazement and motivation to preserve this for as long as humanly possible,” says Jelena Pejkovic MArch ’06, a practicing architect who made the large-scale renderings by hand over a period of months.

    Pejkovic adds: “These drawings are extremely time-consuming, and for me this is part of the motivation. They ask you to slow down and pay attention. What can you take in from all this material that we have collected? How do you take time to look, to interpret, to understand what is in front of you?”

    The project’s “digital transformation strategy” was led by Nikolaos Vlavianos, a PhD candidate in the Department of Architecture’s Design and Computation group. The group uses cutting-edge technologies and drones to make three-dimensional digital reconstructions of large-scale architectural sites and create immersive experiences in extended reality (XR). Vlavianos also conducts studies of the psychological and physiological responses of humans experiencing such spaces in XR and in person. 

    “I regard this project as an effort toward a broader architectural metaverse consisting of immersive experiences in XR of physical spaces around the world that are difficult or impossible to access due to political, social, and even cultural constraints,” says Vlavianos. “These spaces in the metaverse are information hubs promoting an embodied experiential approach of living, sensing, seeing, hearing, and touching.”

    Nasser Rabbat, the Aga Khan Professor and director of the Aga Khan Program for Islamic Architecture at MIT, also offered advice and guidance on the early stages of the project.

    The project — formally titled “Ways of Seeing: Documenting Endangered Built Heritage in Afghanistan” — encompasses imaging of four quite varied historical sites in Afghanistan.

    These are the Green Mosque in Balkh; the Parwan Stupa, a Buddhist dome south of Kabul; the tomb of Gawhar Saad in Herat, built in honor of the queen of the Timurid emperor, who was herself a highly influential figure in the 14th and 15th centuries; and the Minaret of Jam, a remarkable 200-foot-tall tower dating to the 12th century, next to the Hari River in a remote spot in western Afghanistan.

    The sites thus encompass multiple religions and a diversity of building types. Many are in remote locations within Afghanistan that cannot readily be accessed by visitors — including scholars.

    “Ways of Seeing” is supported by a Mellon Faculty Grant from the MIT Center for Art, Science, and Technology (CAST), and by faculty funding from the MIT School of Humanities, Arts, and Social Sciences (SHASS). It is co-presented with the Institute for Data, Systems, and Society (IDSS), the Sociotechnical Systems Research Center (SSRC) at the MIT Schwarzman College of Computing, the MIT Department of Political Science, and SHASS.

    Two students from Wellesley College participating in MIT’s Undergraduate Research Opportunities Program (UROP), juniors Meng Lu and Muzi Fang, also worked on the project under the guidance of Vlavianos to create a video game for children involving the Gawhar Saad heritage site. 

    To generate the imagery, the MIT team worked with an Afghan digital production team that was on the ground in the country; they went to the four sites and took thousands of pictures, having been trained remotely by Vlavianos to perform a 3D scanning aerial operation. They were led by Shafic Gawhari, the managing director for Afghanistan at the Moby Group, an international media enterprise; others involved were Mohammad Jan Kamal, Nazifullah Benaam, Warekzai Ghayoor, Rahm Ali Mohebzada, Mohammad Harif Ghobar, and Abdul Musawer Anwari.

    The journalists documented the sites by collecting 15,000 to 30,000 images, while Vlavianos computationally generated point clouds and mesh geometry with detailed texture mapping. The resulting models yielded still images, immersive experiences in XR, and reference data for Pejkovic’s renderings.
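    The article does not specify the team’s reconstruction software, but the general photogrammetry step it describes, turning a dense point cloud into a mesh, can be sketched with the open-source Open3D library; the file names and parameters below are hypothetical.

        import open3d as o3d

        # Load a photogrammetry-derived point cloud (hypothetical file name).
        pcd = o3d.io.read_point_cloud("green_mosque_points.ply")

        # Estimate surface normals, which Poisson reconstruction requires.
        pcd.estimate_normals(
            search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.1, max_nn=30))

        # Poisson surface reconstruction converts the point cloud into a mesh.
        mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
            pcd, depth=9)
        mesh.compute_vertex_normals()
        o3d.io.write_triangle_mesh("green_mosque_mesh.ply", mesh)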

    “‘Ways of Seeing’ proposes a hybrid model of remote data collection,” says Vlavianos, who in his time at MIT has also led similar projects at Machu Picchu in Peru and the Simonos Petra monastery at Mount Athos, Greece. To produce similar imagery even more easily, he says, “The next step — which I am working on — is to utilize autonomous drones deployed simultaneously in various locations around the world for rapid production, and advanced neural network algorithms to generate models from a smaller number of images.”

    In the future, Vlavianos envisions documenting and reconstructing other sites around the world using crowdsourcing data, historical images, satellite imagery, or even by having local communities learn XR techniques. 

    Pejkovic produced her drawings based on the digital models assembled by Vlavianos, carefully using a traditional rendering technique in which she would first outline the measurements of each structure, at scale, and then gradually ink in the drawings to give the buildings texture. The inking technique she used is based on VERNADOC, a method of documenting vernacular architecture developed by the Finnish architect Markku Mattila.

    “I wanted to rediscover the most traditional possible kind of documentation — measuring directly by hand, and drawing by hand,” says Pejkovic. She has been active in conservation of cultural heritage for over 10 years.

    The first time Pejkovic ever saw this type of hand-drawn rendering in person, she recalls thinking, “This is not possible, a human being cannot make drawings like this.” However, she wryly adds, “You know the motto at MIT is ‘mens et manus,’ mind and hand.” And so she embarked on hand drawing these renderings herself, at a large scale — her image of the Minaret of Jam has been printed in a crisp 8-foot version by the MIT team.

    “The ultimate intent of this project has been to make all these outputs, which are co-owned with the Afghans who carried out the data collection on the ground, available to Afghan refugees displaced around the world but also accessible to anyone keen to witness them,” Christia says. “The digital twins [representations] of these sites are also meant to work as repositories of information for any future preservation efforts. This model can be replicated and scaled for other heritage sites at risk from wars, environmental disaster, or cultural appropriation.”

  • A better way to study ocean currents

    To study ocean currents, scientists release GPS-tagged buoys in the ocean and record their velocities to reconstruct the currents that transport them. These buoy data are also used to identify “divergences,” which are areas where water rises up from below the surface or sinks beneath it.

    By accurately predicting currents and pinpointing divergences, scientists can more precisely forecast the weather, approximate how oil will spread after a spill, or measure energy transfer in the ocean. A new model that incorporates machine learning makes more accurate predictions than conventional models do, a new study reports.

    A multidisciplinary research team including computer scientists at MIT and oceanographers has found that a standard statistical model typically used on buoy data can struggle to accurately reconstruct currents or identify divergences because it makes unrealistic assumptions about the behavior of water.

    The researchers developed a new model that incorporates knowledge from fluid dynamics to better reflect the physics at work in ocean currents. They show that their method, which only requires a small amount of additional computational expense, is more accurate at predicting currents and identifying divergences than the traditional model.

    This new model could help oceanographers make more accurate estimates from buoy data, which would enable them to more effectively monitor the transportation of biomass (such as Sargassum seaweed), carbon, plastics, oil, and nutrients in the ocean. This information is also important for understanding and tracking climate change.

    “Our method captures the physical assumptions more appropriately and more accurately. In this case, we know a lot of the physics already. We are giving the model a little bit of that information so it can focus on learning the things that are important to us, like what are the currents away from the buoys, or what is this divergence and where is it happening?” says senior author Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society.

    Broderick’s co-authors include lead author Renato Berlinghieri, an electrical engineering and computer science graduate student; Brian L. Trippe, a postdoc at Columbia University; David R. Burt and Ryan Giordano, MIT postdocs; Kaushik Srinivasan, an assistant researcher in atmospheric and ocean sciences at the University of California at Los Angeles; Tamay Özgökmen, professor in the Department of Ocean Sciences at the University of Miami; and Junfei Xia, a graduate student at the University of Miami. The research will be presented at the International Conference on Machine Learning.

    Diving into the data

    To estimate currents and find divergences, oceanographers have used a machine-learning technique known as a Gaussian process, which can make predictions even when data are sparse. To work well in this case, the Gaussian process must make assumptions about the data to generate a prediction.

    A standard way of applying a Gaussian process to ocean data assumes the latitude and longitude components of the current are unrelated. But this assumption isn’t physically accurate. For instance, this existing model implies that a current’s divergence and its vorticity (a whirling motion of fluid) operate on the same magnitude and length scales. Ocean scientists know this is not true, Broderick says. The previous model also assumes the frame of reference matters, which means fluid would behave differently in the latitude versus the longitude direction.
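    For reference, a minimal sketch of that standard approach (not the authors’ Helmholtz-based model): each velocity component is fit with its own independent Gaussian process, here using scikit-learn on made-up buoy positions and velocities.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Toy buoy observations: positions (longitude, latitude) and velocities (u, v).
        rng = np.random.default_rng(0)
        positions = rng.uniform(0, 10, size=(50, 2))
        u_obs = np.sin(positions[:, 0] / 3) + 0.05 * rng.standard_normal(50)
        v_obs = np.cos(positions[:, 1] / 3) + 0.05 * rng.standard_normal(50)

        kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)

        # "Standard" model: the two velocity components are treated as unrelated,
        # each with its own independent Gaussian process.
        gp_u = GaussianProcessRegressor(kernel=kernel).fit(positions, u_obs)
        gp_v = GaussianProcessRegressor(kernel=kernel).fit(positions, v_obs)

        # Predict the current (with uncertainty) away from the buoys.
        grid = np.column_stack([g.ravel() for g in np.meshgrid(
            np.linspace(0, 10, 5), np.linspace(0, 10, 5))])
        u_pred, u_std = gp_u.predict(grid, return_std=True)
        v_pred, v_std = gp_v.predict(grid, return_std=True)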

    “We were thinking we could address these problems with a model that incorporates the physics,” she says.

    They built a new model that uses what is known as a Helmholtz decomposition to accurately represent the principles of fluid dynamics. This method models an ocean current by breaking it down into a vorticity component (which captures the whirling motion) and a divergence component (which captures water rising or sinking).

    In this way, they give the model some basic physics knowledge that it uses to make more accurate predictions.
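    To make the decomposition concrete, here is a small numpy illustration with an invented velocity field (not the paper’s model): the current is written as the gradient of a potential, which carries the divergence, plus the rotated gradient of a stream function, which carries the vorticity.

        import numpy as np

        # Grid over a small patch of ocean (arbitrary units).
        x = np.linspace(0, 2 * np.pi, 100)
        y = np.linspace(0, 2 * np.pi, 100)
        X, Y = np.meshgrid(x, y, indexing="ij")
        dx = x[1] - x[0]

        phi = np.sin(X) * np.sin(Y)      # potential: carries the divergence
        psi = np.cos(X) * np.cos(Y)      # stream function: carries the vorticity

        # Helmholtz decomposition of the velocity field (u, v):
        #   (u, v) = grad(phi) + rot(psi) = (dphi/dx - dpsi/dy, dphi/dy + dpsi/dx)
        dphi_dx, dphi_dy = np.gradient(phi, dx, dx)
        dpsi_dx, dpsi_dy = np.gradient(psi, dx, dx)
        u = dphi_dx - dpsi_dy
        v = dphi_dy + dpsi_dx

        # Divergence and vorticity recovered by finite differences.
        du_dx, du_dy = np.gradient(u, dx, dx)
        dv_dx, dv_dy = np.gradient(v, dx, dx)
        divergence = du_dx + dv_dy       # approximately the Laplacian of phi
        vorticity = dv_dx - du_dy        # approximately the Laplacian of psi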

    This new model utilizes the same data as the old model. And while their method can be more computationally intensive, the researchers show that the additional cost is relatively small.

    Buoyant performance

    They evaluated the new model using synthetic and real ocean buoy data. Because the synthetic data were fabricated by the researchers, they could compare the model’s predictions to ground-truth currents and divergences. But simulation involves assumptions that may not reflect real life, so the researchers also tested their model using data captured by real buoys released in the Gulf of Mexico.

    [Image: Trajectories of approximately 300 buoys released during the Grand LAgrangian Deployment (GLAD) in the Gulf of Mexico in the summer of 2013, used to learn about ocean surface currents around the Deepwater Horizon oil spill site. The small, regular clockwise rotations are due to Earth’s rotation. Credit: Consortium of Advanced Research for Transport of Hydrocarbons in the Environment]

    In each case, their method demonstrated superior performance for both tasks, predicting currents and identifying divergences, when compared to the standard Gaussian process and another machine-learning approach that used a neural network. For example, in one simulation that included a vortex adjacent to an ocean current, the new method correctly predicted no divergence while the previous Gaussian process method and the neural network method both predicted a divergence with very high confidence.

    The technique is also good at identifying vortices from a small set of buoys, Broderick adds.

    Now that they have demonstrated the effectiveness of using a Helmholtz decomposition, the researchers want to incorporate a time element into their model, since currents can vary over time as well as space. In addition, they want to better capture how noise impacts the data, such as winds that sometimes affect buoy velocity. Separating that noise from the data could make their approach more accurate.

    “Our hope is to take this noisily observed field of velocities from the buoys, and then say what is the actual divergence and actual vorticity, and predict away from those buoys, and we think that our new technique will be helpful for this,” she says.

    “The authors cleverly integrate known behaviors from fluid dynamics to model ocean currents in a flexible model,” says Massimiliano Russo, an associate biostatistician at Brigham and Women’s Hospital and instructor at Harvard Medical School, who was not involved with this work. “The resulting approach retains the flexibility to model the nonlinearity in the currents but can also characterize phenomena such as vortices and connected currents that would only be noticed if the fluid dynamic structure is integrated into the model. This is an excellent example of where a flexible model can be substantially improved with a well thought and scientifically sound specification.”

    This research is supported, in part, by the Office of Naval Research, a National Science Foundation (NSF) CAREER Award, and the Rosenstiel School of Marine, Atmospheric, and Earth Science at the University of Miami.

  • Researchers create a tool for accurately simulating complex systems

    Researchers often use simulations when designing new algorithms, since testing ideas in the real world can be both costly and risky. But since it’s impossible to capture every detail of a complex system in a simulation, they typically collect a small amount of real data that they replay while simulating the components they want to study.

    Known as trace-driven simulation (the small pieces of real data are called traces), this method sometimes results in biased outcomes. This means researchers might unknowingly choose an algorithm that is not the best one they evaluated, and one that will perform worse on real data than the simulation predicted.

    MIT researchers have developed a new method that eliminates this source of bias in trace-driven simulation. By enabling unbiased trace-driven simulations, the new technique could help researchers design better algorithms for a variety of applications, including improving video quality on the internet and increasing the performance of data processing systems.

    The researchers’ machine-learning algorithm draws on the principles of causality to learn how the data traces were affected by the behavior of the system. In this way, they can replay the correct, unbiased version of the trace during the simulation.

    When compared to a previously developed trace-driven simulator, the researchers’ simulation method correctly predicted which newly designed algorithm would be best for video streaming — meaning the one that led to less rebuffering and higher visual quality. Existing simulators that do not account for bias would have pointed researchers to a worse-performing algorithm.

    “Data are not the only thing that matter. The story behind how the data are generated and collected is also important. If you want to answer a counterfactual question, you need to know the underlying data generation story so you only intervene on those things that you really want to simulate,” says Arash Nasr-Esfahany, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper on this new technique.

    He is joined on the paper by co-lead authors and fellow EECS graduate students Abdullah Alomar and Pouya Hamadanian; recent graduate student Anish Agarwal PhD ’21; and senior authors Mohammad Alizadeh, an associate professor of electrical engineering and computer science, and Devavrat Shah, the Andrew and Erna Viterbi Professor in EECS and a member of the Institute for Data, Systems, and Society and of the Laboratory for Information and Decision Systems. The research was recently presented at the USENIX Symposium on Networked Systems Design and Implementation.

    Specious simulations

    The MIT researchers studied trace-driven simulation in the context of video streaming applications.

    In video streaming, an adaptive bitrate algorithm continually decides the video quality, or bitrate, to transfer to a device based on real-time data on the user’s bandwidth. To test how different adaptive bitrate algorithms impact network performance, researchers can collect real data from users during a video stream for a trace-driven simulation.

    They use these traces to simulate what would have happened to network performance had the platform used a different adaptive bitrate algorithm in the same underlying conditions.

    Researchers have traditionally assumed that trace data are exogenous, meaning they aren’t affected by factors that are changed during the simulation. They would assume that, during the period when they collected the network performance data, the choices the bitrate adaptation algorithm made did not affect those data.

    But this is often a false assumption that results in biases about the behavior of new algorithms, making the simulation invalid, Alizadeh explains.
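    CausalSim’s algorithm is not reproduced here, but the exogeneity problem itself is easy to demonstrate. In the toy numpy sketch below, which uses an invented throughput model in which achieved throughput depends on the chosen bitrate, replaying a trace collected under one bitrate policy as if it were exogenous misestimates how a more aggressive policy would perform.

        import numpy as np

        rng = np.random.default_rng(1)
        n_chunks = 10_000
        capacity = rng.gamma(shape=4.0, scale=1.5, size=n_chunks)   # intrinsic link capacity (Mbps)

        def effective_throughput(bitrate, capacity):
            # Toy model: small chunks underuse the link (e.g., slow start), so
            # achieved throughput depends on the chosen bitrate, not just capacity.
            return capacity * bitrate / (bitrate + 1.0)

        def download_time(bitrate, capacity):
            # Each chunk holds 1 s of video, so its size in Mbit equals the bitrate.
            return bitrate / effective_throughput(bitrate, capacity)

        old_bitrate, new_bitrate = 2.0, 4.0                          # Mbps, fixed for simplicity

        # Trace collected while the OLD policy was running.
        throughput_trace = effective_throughput(old_bitrate, capacity)

        # Biased trace-driven simulation: treat the throughput trace as exogenous
        # and replay it under the NEW policy.
        biased_times = new_bitrate / throughput_trace

        # Ground truth for the NEW policy: throughput actually changes with the bitrate.
        true_times = download_time(new_bitrate, capacity)

        print("biased mean download time:", biased_times.mean())
        print("true   mean download time:", true_times.mean())

    In this toy, the biased replay makes the aggressive policy look slower than it really would be, the same kind of reversal the researchers observed with the expert-designed simulator.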

    “We recognized, and others have recognized, that this way of doing simulation can induce errors. But I don’t think people necessarily knew how significant those errors could be,” he says.

    To develop a solution, Alizadeh and his collaborators framed the issue as a causal inference problem. To collect an unbiased trace, one must understand the different causes that affect the observed data. Some causes are intrinsic to a system, while others are affected by the actions being taken.

    In the video streaming example, network performance is affected by the choices the bitrate adaptation algorithm made — but it’s also affected by intrinsic elements, like network capacity.

    “Our task is to disentangle these two effects, to try to understand what aspects of the behavior we are seeing are intrinsic to the system and how much of what we are observing is based on the actions that were taken. If we can disentangle these two effects, then we can do unbiased simulations,” he says.

    Learning from data

    But researchers often cannot directly observe intrinsic properties. This is where the new tool, called CausalSim, comes in. The algorithm can learn the underlying characteristics of a system using only the trace data.

    CausalSim takes trace data that were collected through a randomized control trial, and estimates the underlying functions that produced those data. The model tells the researchers, under the exact same underlying conditions that a user experienced, how a new algorithm would change the outcome.

    Using a typical trace-driven simulator, bias might lead a researcher to select a worse-performing algorithm, even though the simulation indicates it should be better. CausalSim helps researchers select the best algorithm that was tested.

    The MIT researchers observed this in practice. When they used CausalSim to design an improved bitrate adaptation algorithm, it led them to select a new variant that had a stall rate that was nearly 1.4 times lower than a well-accepted competing algorithm, while achieving the same video quality. The stall rate is the amount of time a user spent rebuffering the video.

    By contrast, an expert-designed trace-driven simulator predicted the opposite. It indicated that this new variant should cause a stall rate that was nearly 1.3 times higher. The researchers tested the algorithm on real-world video streaming and confirmed that CausalSim was correct.

    “The gains we were getting in the new variant were very close to CausalSim’s prediction, while the expert simulator was way off. This is really exciting because this expert-designed simulator has been used in research for the past decade. If CausalSim can so clearly be better than this, who knows what we can do with it?” says Hamadanian.

    During a 10-month experiment, CausalSim consistently improved simulation accuracy, resulting in algorithms that made about half as many errors as those designed using baseline methods.

    In the future, the researchers want to apply CausalSim to situations where randomized control trial data are not available or where it is especially difficult to recover the causal dynamics of the system. They also want to explore how to design and monitor systems to make them more amenable to causal analysis.

  • Study: Covid-19 has reduced diverse urban interactions

    The Covid-19 pandemic has reduced how often urban residents intersect with people from different income brackets, according to a new study led by MIT researchers.

    Examining the movement of people in four U.S. cities before and after the onset of the pandemic, the study found a 15 to 30 percent decrease in the number of visits residents were making to areas that are socioeconomically different than their own. In turn, this has reduced people’s opportunities to interact with others from varied social and economic spheres.

    “Income diversity of urban encounters decreased during the pandemic, and not just in the lockdown stages,” says Takahiro Yabe, a postdoc at the Media Lab and co-author of a newly published paper detailing the study’s results. “It decreased in the long term as well, after mobility patterns recovered.”

    Indeed, the study found a large immediate dropoff in urban movement in the spring of 2020, when new policies temporarily shuttered many types of institutions and businesses in the U.S. and much of the world due to the emergence of the deadly Covid-19 virus. But even after such restrictions were lifted and the overall amount of urban movement approached prepandemic levels, movement patterns within cities have narrowed; people now visit fewer places.

    “We see that changes like working from home, less exploration, more online shopping, all these behaviors add up,” says Esteban Moro, a research scientist at MIT’s Sociotechnical Systems Research Center (SSRC) and another of the paper’s co-authors. “Working from home is amazing and shopping online is great, but we are not seeing each other at the rates we were before.”

    The paper, “Behavioral changes during the Covid-19 pandemic decreased income diversity of urban encounters,” appears in Nature Communications. The co-authors are Yabe; Bernardo García Bulle Bueno, a doctoral candidate at MIT’s Institute for Data, Systems, and Society (IDSS); Xiaowen Dong, an associate professor at Oxford University; Alex Pentland, professor of media arts and sciences at MIT and the Toshiba Professor at the Media Lab; and Moro, who is also an associate professor at the University Carlos III of Madrid.

    A decline in exploration

    To conduct the study, the researchers examined anonymized cellphone data from 1 million users over a three-year period, starting in early 2019, with data focused on four U.S. cities: Boston, Dallas, Los Angeles, and Seattle. The researchers recorded visits to 433,000 specific “point of interest” locations in those cities, corroborated in part with records from Infogroup’s U.S. Business Database, an annual census of company information.  

    The researchers used U.S. Census Bureau data to categorize the socioeconomic status of the people in the study, placing everyone into one of four income quartiles, based on the average income of the census block (a small area) in which they live. The scholars made the same income-level assessment for every census block in the four cities, then recorded instances in which someone spent from 10 minutes to four hours in a census block other than their own, to see how often people visited areas in different income quartiles. 
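    As a simplified illustration of this kind of analysis (a toy proxy, not the paper’s actual diversity metric), the pandas sketch below takes hypothetical stay records and computes, per user, the share of visits that fall in a census block from a different income quartile than the user’s home block.

        import pandas as pd

        # Hypothetical stay records: one row per visit of 10 minutes to 4 hours.
        stays = pd.DataFrame({
            "user_id":        [1, 1, 2, 2, 3],
            "home_quartile":  [1, 1, 4, 4, 2],   # income quartile of the user's home block
            "visit_quartile": [1, 3, 4, 2, 2],   # income quartile of the visited block
        })

        # Share of visits that cross into a different income quartile, per user.
        stays["cross_quartile"] = stays["home_quartile"] != stays["visit_quartile"]
        exposure = stays.groupby("user_id")["cross_quartile"].mean()
        print(exposure)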

    Ultimately, the researchers found that by late 2021, the amount of urban movement overall was returning to prepandemic levels, but the scope of places residents were visiting had become more restricted.

    Among other things, people made many fewer visits to museums, leisure venues, transport sites, and coffee shops. Visits to grocery stores remained fairly constant — but people tend not to leave their socioeconomic circles for grocery shopping.

    “Early in the pandemic, people reduced their mobility radius significantly,” Yabe says. “By late 2021, that decrease flattened out, and the average dwell time people spent at places other than work and home recovered to prepandemic levels. What’s different is that exploration substantially decreased, around 5 to 10 percent. We also see less visitation to fun places.” He adds: “Museums are the most diverse places you can find, parks — they took the biggest hit during the pandemic. Places that are [more] segregated, like grocery stores, did not.”

    Overall, Moro notes, “When we explore less, we go to places that are less diverse.”

    Different cities, same pattern

    Because the study encompassed four cities with different types of policies about reopening public sites and businesses during the pandemic, the researchers could also evaluate what impact public health policies had on urban movement. But even in these different settings, the same phenomenon emerged, with a narrower range of mobility occurring by late 2021.

    “Despite the substantial differences in how cities dealt with Covid-19, the decrease in diversity and the behavioral changes were surprisingly similar across the four cities,” Yabe observes.

    The researchers emphasize that these changes in urban movement can have long-term societal effects. Prior research has shown a significant association between a diversity of social connections and greater economic success for people in lower-income groups. And while some interactions between people in different income quartiles might be brief and transactional, the evidence suggests that, on aggregate, other more substantial connections have also been reduced. Additionally, the scholars note, the narrowing of experience can also weaken civic ties and valuable political connections.

    “It’s creating an urban fabric that is actually more brittle, in the sense that we are less exposed to other people,” Moro says. “We don’t get to know other people in the city, and that is very important for policies and public opinion. We need to convince people that new policies and laws would be fair. And the only way to do that is to know other people’s needs. If we don’t see them around the city, that will be impossible.”

    At the same time, Yabe adds, “I think there is a lot we can do from a policy standpoint to bring people back to places that used to be a lot more diverse.” The researchers are currently developing further studies related to cultural and public institutions, as well as transportation issues, to try to evaluate urban connectivity in additional detail.

    “The quantity of our mobility has recovered,” Yabe says. “The quality has really changed, and we’re more segregated as a result.”

  • Martin Wainwright named director of the Institute for Data, Systems, and Society

    Martin Wainwright, the Cecil H. Green Professor in MIT’s departments of Electrical Engineering and Computer Science (EECS) and Mathematics, has been named the new director of the Institute for Data, Systems, and Society (IDSS), effective July 1.

    “Martin is a widely recognized leader in statistics and machine learning — both in research and in education. In taking on this leadership role in the college, Martin will work to build up the human and institutional behavior component of IDSS, while strengthening initiatives in both policy and statistics, and collaborations within the institute, across MIT, and beyond,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I look forward to working with him and supporting his efforts in this next chapter for IDSS.”

    “Martin holds a strong belief in the value of theoretical, experimental, and computational approaches to research and in facilitating connections between them. He also places much importance in having practical, as well as academic, impact,” says Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing, department head of EECS, and the MathWorks Professor of Electrical Engineering and Computer Science. “As the new director of IDSS, he will undoubtedly bring these tenets to the role in advancing the mission of IDSS and helping to shape its future.”

    A principal investigator in the Laboratory for Information and Decision Systems and the Statistics and Data Science Center, Wainwright joined the MIT faculty in July 2022 from the University of California at Berkeley, where he held the Howard Friesen Chair with a joint appointment between the departments of Electrical Engineering and Computer Science and Statistics.

    Wainwright received his bachelor’s degree in mathematics from the University of Waterloo, Canada, and doctoral degree in electrical engineering and computer science from MIT. He has received a number of awards and recognition, including an Alfred P. Sloan Foundation Fellowship, and best paper awards from the IEEE Signal Processing Society, IEEE Communications Society, and IEEE Information Theory and Communication Societies. He has also been honored with the Medallion Lectureship and Award from the Institute of Mathematical Statistics, and the COPSS Presidents’ Award from the Joint Statistical Societies. He was a section lecturer with the International Congress of Mathematicians in 2014 and received the Blackwell Award from the Institute of Mathematical Statistics in 2017.

    He is the author of “High-dimensional Statistics: A Non-Asymptotic Viewpoint” (Cambridge University Press, 2019), and is a coauthor of several other books, including works on graphical models and on sparse statistical modeling.

    Wainwright succeeds Munther Dahleh, the William A. Coolidge Professor in EECS, who has helmed IDSS since its founding in 2015.

    “I am grateful to Munther and thank him for his leadership of IDSS. As the founding director, he has led the creation of a remarkable new part of MIT,” says Huttenlocher.

  • Illuminating the money trail

    You may not know this, but the U.S. imposes a 12.5 percent tariff on imported flashlights. However, for a product category the federal government describes as “portable electric lamps designed to function by their own source of energy, other than flashlights,” the import tariff is just 3.5 percent.

    At a glance, this seems inexplicable. Why is one kind of self-powered portable light taxed more heavily than another? According to MIT political science professor In Song Kim, a policy discrepancy like this often stems from the difference in firms’ political power, as well as the extent to which firms are empowered by global production networks. This is a subject Kim has spent years examining in detail, producing original scholarly results while opening up a wealth of big data about politics to the public.

    “We all understand companies as being important economic agents,” Kim says. “But companies are political agents, too. They are very important political actors.”

    In particular, Kim’s work has illuminated the effects of lobbying upon U.S. trade policy. International trade is often presented as an unalloyed good, opening up markets and fueling growth. Beyond that, trade issues are usually described at the industry level; we hear about what the agriculture lobby or auto industry wants. But in reality, different firms want different things, even within the same industry.

    As Kim’s work shows, most firms lobby for policies pertaining to specific components of their products, and trade policy consists heavily of carve-outs for companies, not industry-wide standards. Firms making non-flashlight portable lights, it would seem, are good at lobbying, but the benefits clearly do not carry over to all portable light makers, as long as products are not perfect substitutes for each other. Meanwhile, as Kim’s research also shows, lobbying helps firms grow faster in size, even as lobbying-influenced policies may slow down the economy as a whole.

    “All our existing theories suggest that trade policy is a public good, in the sense that the benefits of open trade, the gains from trade, will be enjoyed by the public and will benefit the country as a whole,” Kim says. “But what I’ve learned is that trade policies are very, very granular. It’s become obvious to me that trade is no longer a public good. It’s actually a private good for individual companies.”

    Kim’s work includes over a dozen published journal articles over the last several years, several other forthcoming research papers, and a book he is currently writing. At the same time, Kim has created a public database, LobbyView, which tracks money in U.S. politics extending back to 1999. LobbyView, as an important collection of political information, has research, educational, and public-interest applications, enabling others, in academia or outside it, to further delve into the topic.

    “I want to contribute to the scholarly community, and I also want to create a public [resource] for our MIT community [and beyond], so we can all study politics through it,” Kim says.

    Keeping the public good in sight

    Kim grew up in South Korea, in a setting where politics was central to daily life. Kim’s grandfather, Kim Jae-soon, was the Speaker of the National Assembly in South Korea from 1988 through 1990 and an important figure in the country’s government.

    “I’ve always been fascinated by politics,” says Kim, who remembers prominent political figures dropping by the family home when he was young. One of the principal lessons Kim learned about politics from his grandfather, however, was not about proximity to power, but the importance of public service. The enduring lesson of his family’s engagement with politics, Kim says, is that “I truly believe in contributing to the public good.”

    Kim found his own way of contributing to the public good not as a politician but as a scholar of politics. Kim received his BA in political science from Yonsei University in Seoul but decided he wanted to pursue graduate studies in the U.S. He earned an MA in law and diplomacy from the Fletcher School at Tufts University, then an MA in political science at George Washington University. By this time, Kim had become focused on the quantitative analysis of trade policy; for his PhD work, he attended Princeton University and was awarded his doctorate in 2014, joining the MIT faculty that year.

    Among the key pieces of research Kim has published, one paper, “Political Cleavages within Industry: Firm-level Lobbying for Trade Liberalization,” published in the American Political Science Review and growing out of his dissertation research, helped show how remarkably specialized many trade policies are. As of 2017, the U.S. had almost 17,000 types of products it made tariff decisions about. Many of these are the component parts of a product; about two-thirds of international trade consists of manufactured components that get shipped around during the production process, rather than raw goods or finished products. That paper won the 2018 Michael Wallerstein Award for the best published article in political economy in the previous year.

    Another 2017 paper Kim co-authored, “The Charmed Life of Superstar Exporters,” from the Journal of Politics, provides more empirical evidence of the differences among firms within an industry. The “superstar” firms that are the largest exporters tend to lobby the most about trade politics; a firm’s characteristics reveal more about its preferences for open trade than the possibility that its industry as a whole will gain a comparative advantage internationally.

    Kim often uses large-scale data and computational methods to study international trade and trade politics. Still another paper he has co-authored, “Measuring Trade Profile with Granular Product-level Trade Data,” published in the American Journal of Political Science in 2020, traces trade relationships in highly specific terms. Looking at over 2 billion observations of international trade data, Kim developed an algorithm to group countries based on which products they import and export. The methodology helps researchers to learn about the highly different developmental paths that countries follow, and about the deepening international competition between countries such as the U.S. and China.
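    Kim’s actual method is more sophisticated than this, but the basic idea of grouping countries by granular trade profiles can be sketched generically: represent each country as a vector of product-level export shares and cluster the vectors. The data and the scikit-learn k-means choice below are assumptions for illustration only.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import normalize

        # Hypothetical product-level trade matrix: rows are countries, columns are
        # products, entries are export values.
        rng = np.random.default_rng(7)
        trade_values = rng.gamma(shape=0.5, scale=100.0, size=(40, 200))

        # Convert to trade "profiles": each country's share of exports by product.
        profiles = normalize(trade_values, norm="l1", axis=1)

        # Group countries with similar trade profiles.
        labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(profiles)
        print(labels)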

    At other times, Kim has analyzed who is influencing trade policy. His paper “Mapping Political Communities,” from the journal Political Analysis in 2021, looks at the U.S. Congress and uses mandatory reports filed by lobbyists to build a picture of which interest groups are most closely connected to which politicians.

    Kim has published all his papers while balancing both his scholarly research and the public launch of LobbyView, which occurred in 2018. He was awarded tenure by MIT in the spring of 2022. Currently he is an associate professor in the Department of Political Science and a faculty affiliate of the Institute for Data, Systems, and Society.

    By the book

    Kim has continued to explore firm-level lobbying dynamics, although his recent research runs in a few directions. In a 2021 working paper, Kim and co-author Federico Huneeus of the Central Bank of Chile built a model estimating that eliminating lobbying in the U.S. could increase productivity by as much as 6 percent.

    “Political rents [favorable policies] given to particular companies might introduce inefficiencies or a misallocation of resources in the economy,” Kim says. “You could allocate those resources to more productive although politically inactive firms, but now they’re given to less productive and yet politically active big companies, increasing market concentration and monopolies.”

    Kim is on sabbatical during the 2022-23 academic year, working on a book about the importance of firms’ political activities in trade policymaking. The book will have an expansive timeframe, dating back to ancient times, which underscores the salience of trade policy across eras. At the same time, the book will analyze the distinctive features of modern trade politics with deepening global production networks.

    “I’m trying to allow people to learn about the history of trade politics, to show how the politics have changed over time,” Kim says. “In doing that, I’m also highlighting the importance of firm-to-firm trade and the emergence of new trade coalitions among firms in different countries and industries that are linked through the global production chain.”

    While continuing his own scholarly research, Kim still leads LobbyView, which he views both as a big data resource for any scholars interested in money in politics and an excellent teaching resource for his MIT classes, as students can tap into it for projects and papers. LobbyView contains so much data, in fact, that part of the challenge is finding ways to mine it effectively.

    “It really offers me an opportunity to work with MIT students,” Kim says of LobbyView. “What I think I can contribute is to bring those technologies to our understanding of politics. Having this unique data set can really allow students here to use technology to learn about politics, and I believe that fits the MIT identity.”

  • MIT PhD students honored for their work to solve critical issues in water and food

    In 2017, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) initiated the J-WAFS Fellowship Program for outstanding MIT PhD students working to solve humankind’s water-related challenges. Since then, J-WAFS has awarded 18 fellowships to students who have gone on to create innovations like a pump that can maximize energy efficiency even with changing flow rates, and a low-cost water filter made out of sapwood xylem that has seen real-world use in rural India. Last year, J-WAFS expanded eligibility to students with food-related research. The 2022 fellows included students working on micronutrient deficiency and plastic waste from traditional food packaging materials. 

    Today, J-WAFS has announced the award of the 2023-24 fellowships to Gokul Sampath and Jie Yun. A doctoral student in the Department of Urban Studies and Planning, Sampath has been awarded the Rasikbhai L. Meswani Fellowship for Water Solutions, which is supported through a generous gift from Elina and Nikhil Meswani and family. Yun, who is in the Department of Civil and Environmental Engineering, received a J-WAFS Fellowship for Water and Food Solutions, which is funded by the J-WAFS Research Affiliate Program. Currently, Xylem, Inc. and GoAigua are J-WAFS’ Research Affiliate companies. A review committee composed of MIT faculty and staff selected Sampath and Yun from a competitive field of outstanding graduate students working in water and food who were nominated by their faculty advisors. Sampath and Yun will receive one academic semester of funding, along with opportunities for networking and mentoring to advance their research.

    “Both Yun and Sampath have demonstrated excellence in their research,” says J-WAFS executive director Renee J. Robins. “They also stood out in their communication skills and their passion to work on issues of agricultural sustainability and resilience and access to safe water. We are so pleased to have them join our inspiring group of J-WAFS fellows,” she adds.

    Using behavioral health strategies to address the arsenic crisis in India and Bangladesh

    Gokul Sampath’s research centers on ways to improve access to safe drinking water in developing countries. A PhD candidate in the International Development Group in the Department of Urban Studies and Planning, he currently examines the issue of arsenic in drinking water sources in India and Bangladesh. In Eastern India, millions of shallow tube wells provide rural households with a personal water source that is convenient, free, and mostly safe from cholera. Unfortunately, it is now known that one in four of these wells is contaminated with naturally occurring arsenic at levels dangerous to human health. As a result, approximately 40 million people across the region are at elevated risk of cancer, stroke, and heart disease from arsenic consumed through drinking water and cooked food.

    Since the discovery of arsenic in wells in the late 1980s, governments and nongovernmental organizations have sought to address the problem in rural villages by providing safe community water sources. Yet despite access to safe alternatives, many households still consume water from their contaminated home wells. Sampath’s research seeks to understand the constraints and trade-offs that account for why many villagers don’t collect water from arsenic-safe government wells in the village, even when they know their own wells at home could be contaminated.

    Before coming to MIT, Sampath received a master’s degree in Middle East, South Asian, and African studies from Columbia University, as well as a bachelor’s degree in microbiology and history from the University of California at Davis. He has long worked on water management in India, beginning in 2015 as a Fulbright scholar studying households’ water source choices in arsenic-affected areas of the state of West Bengal. He also served as a senior research associate with the Abdul Latif Jameel Poverty Action Lab, where he conducted randomized evaluations of market incentives for groundwater conservation in Gujarat, India. Sampath’s advisor, Bishwapriya Sanyal, the Ford International Professor of Urban Development and Planning at MIT, says Sampath has shown “remarkable hard work and dedication.” In addition to his classes and research, Sampath taught the department’s undergraduate Introduction to International Development course, for which he received standout evaluations from students.

    This summer, Sampath will travel to India to conduct field work in four arsenic-affected villages in West Bengal to understand how social influence shapes villagers’ choices between arsenic-safe and unsafe water sources. Through longitudinal surveys, he hopes to connect data on the social ties between families in villages and the daily water source choices they make. Exclusionary practices in Indian village communities, especially the segregation of water sources on the basis of caste and religion, has long been suspected to be a barrier to equitable drinking water access in Indian villages. Yet despite this, planners seeking to expand safe water access in diverse Indian villages have rarely considered the way social divisions within communities might be working against their efforts. Sampath hopes to test whether the injunctive norms enabled by caste ties constrain villagers’ ability to choose the safest water source among those shared within the village. When he returns to MIT in the fall, he plans to dive into analyzing his survey data and start work on a publication.

    Understanding plant responses to stress to improve crop drought resistance and yield

    Plants, including crops, play a fundamental role in Earth’s ecosystems through their effects on climate, air quality, and water availability. At the same time, plants grown for agriculture put a burden on the environment, as they require energy, irrigation, and chemical inputs. Understanding plant-environment interactions is becoming more and more important as intensifying drought strains agricultural systems. Jie Yun, a PhD student in the Department of Civil and Environmental Engineering, is studying plant response to drought stress in the hopes of improving agricultural sustainability and yield under climate change.

    Yun’s research focuses on genotype-by-environment interaction (GxE), the observation that plant varieties respond to environmental changes differently. The effects of GxE can be exploited in crop breeding because differing environmental responses among varieties enable breeders to select for plants with highly stress-tolerant genotypes under particular growing conditions. Yun bases her studies on Brachypodium, a model grass species related to wheat, oat, barley, rye, and perennial forage grasses, so findings from her experiments can be directly applied to cereal and forage crop improvement.

    For the first part of her thesis, Yun collaborated with Professor Caroline Uhler’s group in the Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society. Uhler’s computational tools helped Yun evaluate gene regulatory networks and how they relate to plant resilience and environmental adaptation. This work will help identify the types of genes and pathways that drive differences in drought stress response among plant varieties.

    David Des Marais, the Cecil and Ida Green Career Development Professor in the Department of Civil and Environmental Engineering, is Yun’s advisor. He notes, “Throughout Jie’s time [at MIT] I have been struck by her intellectual curiosity, verging on fearlessness.” When she’s not mentoring undergraduate students in Des Marais’ lab, Yun is working on the second part of her project: how carbon allocation and growth in plants are affected by soil drying. One result of this work will be to understand which populations of plants harbor the genetic diversity needed to adapt or acclimate to climate change. Another likely impact is identifying targets for the genetic improvement of crop species, to increase crop yields with less water.

    Growing up in China, Yun witnessed environmental issues springing from the development of the steel industry, which contaminated rivers in her hometown. On one visit to her aunt’s house in rural China, she learned that water pollution was widespread after noticing that wastewater was piped out of the house into nearby farmland without being treated. These experiences led Yun to study water supply and sewage engineering for her undergraduate degree at Shenyang Jianzhu University. She then went on to complete a master’s program in civil and environmental engineering at Carnegie Mellon University. It was there that Yun discovered a passion for plant-environment interactions; during an independent study on perfluorooctane sulfonate, she realized the remarkable ability of plants to adapt to environmental changes, toxins, and stresses. Her goal is to continue researching plant-environment interactions and to translate the latest scientific findings into applications that can improve food security.