More stories

  • Making machine learning more useful to high-stakes decision makers

    The U.S. Centers for Disease Control and Prevention estimates that one in seven children in the United States experienced abuse or neglect in the past year. Child protective services agencies around the nation receive a high number of reports each year (about 4.4 million in 2019) of alleged neglect or abuse. With so many cases, some agencies are implementing machine learning models to help child welfare specialists screen cases and determine which to recommend for further investigation.

    But these models don’t do any good if the humans they are intended to help don’t understand or trust their outputs.

    Researchers at MIT and elsewhere launched a research project to identify and tackle machine learning usability challenges in child welfare screening. In collaboration with a child welfare department in Colorado, the researchers studied how call screeners assess cases, with and without the help of machine learning predictions. Based on feedback from the call screeners, they designed a visual analytics tool that uses bar graphs to show how specific factors of a case contribute to the predicted risk that a child will be removed from their home within two years.

    The researchers found that screeners are more interested in seeing how each factor, like the child’s age, influences a prediction, rather than understanding the computational basis of how the model works. Their results also show that even a simple model can cause confusion if its features are not described with straightforward language.

    These findings could be applied to other high-risk fields where humans use machine learning models to help them make decisions but lack data science experience, says Kalyan Veeramachaneni, principal research scientist in the Laboratory for Information and Decision Systems (LIDS) and senior author of the paper.

    “Researchers who study explainable AI, they often try to dig deeper into the model itself to explain what the model did. But a big takeaway from this project is that these domain experts don’t necessarily want to learn what machine learning actually does. They are more interested in understanding why the model is making a different prediction than what their intuition is saying, or what factors it is using to make this prediction. They want information that helps them reconcile their agreements or disagreements with the model, or confirms their intuition,” he says.

    Co-authors include electrical engineering and computer science PhD student Alexandra Zytek, who is the lead author; postdoc Dongyu Liu; and Rhema Vaithianathan, professor of economics and director of the Center for Social Data Analytics at the Auckland University of Technology and professor of social data analytics at the University of Queensland. The research will be presented later this month at the IEEE Visualization Conference.

    Real-world research

    The researchers began the study more than two years ago by identifying seven factors that make a machine learning model less usable, including lack of trust in where predictions come from and disagreements between user opinions and the model’s output.

    With these factors in mind, Zytek and Liu flew to Colorado in the winter of 2019 to learn firsthand from call screeners in a child welfare department. This department is implementing a machine learning system developed by Vaithianathan that generates a risk score for each report, predicting the likelihood the child will be removed from their home. That risk score is based on more than 100 demographic and historic factors, such as the parents’ ages and past court involvements.

    “As you can imagine, just getting a number between one and 20 and being told to integrate this into your workflow can be a bit challenging,” Zytek says.

    They observed how teams of screeners process cases in about 10 minutes and spend most of that time discussing the risk factors associated with the case. That inspired the researchers to develop a case-specific details interface, which shows how each factor influenced the overall risk score using color-coded, horizontal bar graphs that indicate the magnitude of the contribution in a positive or negative direction.
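
    A display like this can be driven by signed per-factor contribution scores plotted as horizontal bars. The snippet below is a minimal matplotlib sketch of that kind of chart; the factor names and contribution values are invented for illustration and are not taken from the Sibyl tool.

```python
# Minimal sketch of a case-specific contribution chart (illustrative only;
# factor names and values are hypothetical, not from the Sibyl tool).
import matplotlib.pyplot as plt

# Hypothetical signed contributions of case factors to a predicted risk score.
contributions = {
    "Child's age": -0.8,
    "Number of prior referrals": 1.6,
    "Past court involvement": 0.9,
    "Parent's age at first child": 0.4,
    "Time since last report": -0.5,
}

features = list(contributions.keys())
values = list(contributions.values())
colors = ["tab:red" if v > 0 else "tab:blue" for v in values]  # red raises risk, blue lowers it

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(features, values, color=colors)
ax.axvline(0, color="black", linewidth=0.8)
ax.set_xlabel("Contribution to predicted risk score")
ax.set_title("Case-specific factor contributions (illustrative)")
fig.tight_layout()
plt.show()
```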

    Based on observations and detailed interviews, the researchers built four additional interfaces that provide explanations of the model, including one that compares a current case to past cases with similar risk scores. Then they ran a series of user studies.

    The studies revealed that more than 90 percent of the screeners found the case-specific details interface to be useful, and it generally increased their trust in the model’s predictions. On the other hand, the screeners did not like the case comparison interface. While the researchers thought this interface would increase trust in the model, screeners were concerned it could lead to decisions based on past cases rather than the current report.   

    “The most interesting result to me was that, the features we showed them — the information that the model uses — had to be really interpretable to start. The model uses more than 100 different features in order to make its prediction, and a lot of those were a bit confusing,” Zytek says.

    Keeping the screeners in the loop throughout the iterative process helped the researchers make decisions about what elements to include in the machine learning explanation tool, called Sibyl.

    As they refined the Sibyl interfaces, the researchers were careful to consider how providing explanations could contribute to some cognitive biases, and even undermine screeners’ trust in the model.

    For instance, since explanations are based on averages in a database of child abuse and neglect cases, having three past abuse referrals may actually decrease the risk score of a child, since averages in this database may be far higher. A screener may see that explanation and decide not to trust the model, even though it is working correctly, Zytek explains. And because humans tend to put more emphasis on recent information, the order in which the factors are listed could also influence decisions.

    Improving interpretability

    Based on feedback from call screeners, the researchers are working to tweak the explanation model so the features that it uses are easier to explain.

    Moving forward, they plan to enhance the interfaces they’ve created based on additional feedback and then run a quantitative user study to track the effects on decision making with real cases. Once those evaluations are complete, they can prepare to deploy Sibyl, Zytek says.

    “It was especially valuable to be able to work so actively with these screeners. We got to really understand the problems they faced. While we saw some reservations on their part, what we saw more of was excitement about how useful these explanations were in certain cases. That was really rewarding,” she says.

    This work is supported, in part, by the National Science Foundation.

  • Deep learning helps predict traffic crashes before they happen

    Today’s world is one big maze, connected by layers of concrete and asphalt that afford us the luxury of navigation by vehicle. But despite many of our road-related advancements — GPS lets us fire fewer neurons thanks to map apps, cameras alert us to potentially costly scrapes and scratches, and electric autonomous cars have lower fuel costs — our safety measures haven’t quite caught up. We still rely on a steady diet of traffic signals, trust, and the steel surrounding us to safely get from point A to point B.

    To get ahead of the uncertainty inherent to crashes, scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Center for Artificial Intelligence developed a deep learning model that predicts very high-resolution crash risk maps. Fed on a combination of historical crash data, road maps, satellite imagery, and GPS traces, the risk maps describe the expected number of crashes over a period of time in the future, to identify high-risk areas and predict future crashes. 

    Typically, these types of risk maps are captured at much lower resolutions that hover around hundreds of meters, which means glossing over crucial details since the roads become blurred together. The team’s maps, though, are built on 5×5-meter grid cells, and the higher resolution brings newfound clarity: The scientists found that a highway road, for example, has a higher risk than nearby residential roads, and ramps merging and exiting the highway have an even higher risk than other roads.

    “By capturing the underlying risk distribution that determines the probability of future crashes at all places, and without any historical data, we can find safer routes, enable auto insurance companies to provide customized insurance plans based on driving trajectories of customers, help city planners design safer roads, and even predict future crashes,” says MIT CSAIL PhD student Songtao He, a lead author on a new paper about the research. 

    Even though car crashes are sparse, they cost about 3 percent of the world’s GDP and are the leading cause of death in children and young adults. This sparsity makes inferring maps at such a high resolution a tricky task. Crashes at this level are thinly scattered — the average annual odds of a crash in a 5×5-meter grid cell are about one in 1,000 — and they rarely happen at the same location twice. Previous attempts to predict crash risk have been largely “historical,” as an area would only be considered high-risk if there was a previous nearby crash.

    The team’s approach casts a wider net to capture critical data. It identifies high-risk locations using GPS trajectory patterns, which give information about density, speed, and direction of traffic, and satellite imagery that describes road structures, such as the number of lanes, whether there’s a shoulder, or if there’s a large number of pedestrians. Then, even if a high-risk area has no recorded crashes, it can still be identified as high-risk, based on its traffic patterns and topology alone. 
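
    As a rough illustration of this kind of per-cell risk estimation (not the paper’s deep learning model), the sketch below fits a Poisson regression to synthetic per-cell features of the sort described above, so that a cell with risky traffic patterns can receive a high expected crash count even if it has no recorded crashes.

```python
# Highly simplified sketch of per-cell crash-risk estimation. This is not the
# paper's deep learning model; the features and data below are synthetic
# placeholders standing in for quantities derived from GPS traces and imagery.
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_cells = 10_000  # pretend grid cells

# Hypothetical per-cell features: traffic volume, mean speed, lane count, ramp flag.
X = np.column_stack([
    rng.gamma(2.0, 50.0, n_cells),     # vehicles per day passing through the cell
    rng.normal(50.0, 15.0, n_cells),   # mean speed (km/h)
    rng.integers(1, 5, n_cells),       # lane count
    rng.binomial(1, 0.05, n_cells),    # highway-ramp indicator
])

# Synthetic "observed" crash counts: rare events driven by the features.
true_rate = np.exp(-7.0 + 0.004 * X[:, 0] + 0.02 * X[:, 1] + 0.3 * X[:, 3])
y = rng.poisson(true_rate)

# Poisson regression predicts the expected number of crashes per cell, so a
# cell with risky traffic patterns can score high even with zero past crashes.
model = make_pipeline(StandardScaler(), PoissonRegressor(alpha=1e-3, max_iter=300))
model.fit(X, y)
risk_map = model.predict(X)
print("Highest-risk cells:", np.argsort(risk_map)[-5:])
```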

    To evaluate the model, the scientists used crashes and data from 2017 and 2018, and tested its performance at predicting crashes in 2019 and 2020. Many locations were identified as high-risk even though they had no recorded crashes, and they did indeed experience crashes during the follow-up years.

    “Our model can generalize from one city to another by combining multiple clues from seemingly unrelated data sources. This is a step toward general AI, because our model can predict crash maps in uncharted territories,” says Amin Sadeghi, a lead scientist at Qatar Computing Research Institute (QCRI) and an author on the paper. “The model can be used to infer a useful crash map even in the absence of historical crash data, which could translate to positive use for city planning and policymaking by comparing imaginary scenarios.” 

    The dataset covered 7,500 square kilometers from Los Angeles, New York City, Chicago, and Boston. Among the four cities, L.A. was the most unsafe, since it had the highest crash density, followed by New York City, Chicago, and Boston.

    “If people can use the risk map to identify potentially high-risk road segments, they can take action in advance to reduce the risk of trips they take. Apps like Waze and Apple Maps have incident feature tools, but we’re trying to get ahead of the crashes — before they happen,” says He. 

    He and Sadeghi wrote the paper alongside Sanjay Chawla, research director at QCRI, and MIT professors of electrical engineering and computer science Mohammad Alizadeh, Hari Balakrishnan, and Sam Madden. They will present the paper at the 2021 International Conference on Computer Vision.

  • Making data visualizations more accessible

    In the early days of the Covid-19 pandemic, the Centers for Disease Control and Prevention produced a simple chart to illustrate how measures like mask wearing and social distancing could “flatten the curve” and reduce the peak of infections.

    The chart was amplified by news sites and shared on social media platforms, but it often lacked a corresponding text description to make it accessible for blind individuals who use a screen reader to navigate the web, shutting out many of the 253 million people worldwide who have visual disabilities.

    This alternative text is often missing from online charts, and even when it is included, it is frequently uninformative or even incorrect, according to qualitative data gathered by scientists at MIT.

    These researchers conducted a study with blind and sighted readers to determine which text is useful to include in a chart description, which text is not, and why. Ultimately, they found that captions for blind readers should focus on the overall trends and statistics in the chart, not its design elements or higher-level insights.

    They also created a conceptual model that can be used to evaluate a chart description, whether the text was generated automatically by software or manually by a human author. Their work could help journalists, academics, and communicators create descriptions that are more effective for blind individuals and guide researchers as they develop better tools to automatically generate captions.

    “Ninety-nine-point-nine percent of images on Twitter lack any kind of description — and that is not hyperbole, that is the actual statistic,” says Alan Lundgard, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and lead author of the paper. “Having people manually author those descriptions seems to be difficult for a variety of reasons. Perhaps semiautonomous tools could help with that. But it is crucial to do this preliminary participatory design work to figure out what is the target for these tools, so we are not generating content that is either not useful to its intended audience or, in the worst case, erroneous.”

    Lundgard wrote the paper with senior author Arvind Satyanarayan, an assistant professor of computer science who leads the Visualization Group in CSAIL. The research will be presented at the Institute of Electrical and Electronics Engineers Visualization Conference in October.

    Evaluating visualizations

    To develop the conceptual model, the researchers planned to begin by studying graphs featured by popular online publications such as FiveThirtyEight and NYTimes.com, but they ran into a problem — those charts mostly lacked any textual descriptions. So instead, they collected descriptions for these charts from graduate students in an MIT data visualization class and through an online survey, then grouped the captions into four categories.

    Level 1 descriptions focus on the elements of the chart, such as its title, legend, and colors. Level 2 descriptions describe statistical content, like the minimum, maximum, or correlations. Level 3 descriptions cover perceptual interpretations of the data, like complex trends or clusters. Level 4 descriptions include subjective interpretations that go beyond the data and draw on the author’s knowledge.
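
    As a rough illustration of the framework’s lower levels, the sketch below generates level 1 and level 2 text for a simple line chart. The data and phrasing are invented, and the categories are applied loosely; this is not the paper’s own system.

```python
# Minimal sketch of generating level 1 (chart elements) and level 2 (statistics)
# description text for a simple line chart. Data and wording are invented
# illustrations of the framework's lower levels, not the paper's own system.
from statistics import mean

def describe_chart(title, x_label, y_label, x_values, y_values):
    # Level 1: chart construction elements (title, axes).
    level1 = (f"A line chart titled '{title}' with {x_label} on the x-axis "
              f"and {y_label} on the y-axis.")

    # Level 2: statistical content (extrema, average, where the peak occurs).
    y_min, y_max = min(y_values), max(y_values)
    level2 = (f"{y_label} ranges from {y_min} to {y_max}, "
              f"with an average of {mean(y_values):.1f}. "
              f"The maximum occurs at {x_label} = {x_values[y_values.index(y_max)]}.")

    return level1, level2

# Hypothetical data for illustration.
days = list(range(1, 8))
cases = [10, 14, 21, 35, 30, 26, 18]
for line in describe_chart("Daily infections", "day", "infections", days, cases):
    print(line)
```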

    In a study with blind and sighted readers, the researchers presented visualizations with descriptions at different levels and asked participants to rate how useful they were. While both groups agreed that level 1 content on its own was not very helpful, sighted readers gave level 4 content the highest marks while blind readers ranked that content among the least useful.

    Survey results revealed that a majority of blind readers were emphatic that descriptions should not contain an author’s editorialization, but rather stick to straight facts about the data. On the other hand, most sighted readers preferred a description that told a story about the data.

    “For me, a surprising finding about the lack of utility for the highest-level content is that it ties very closely to feelings about agency and control as a disabled person. In our research, blind readers specifically didn’t want the descriptions to tell them what to think about the data. They want the data to be accessible in a way that allows them to interpret it for themselves, and they want to have the agency to do that interpretation,” Lundgard says.

    A more inclusive future

    This work could have implications as data scientists continue to develop and refine machine learning methods for autogenerating captions and alternative text.

    “We are not able to do it yet, but it is not inconceivable to imagine that in the future we would be able to automate the creation of some of this higher-level content and build models that target level 2 or level 3 in our framework. And now we know what the research questions are. If we want to produce these automated captions, what should those captions say? We are able to be a bit more directed in our future research because we have these four levels,” Satyanarayan says.

    In the future, the four-level framework could also help researchers develop machine learning models that can automatically suggest effective visualizations as part of the data analysis process, or models that can extract the most useful information from a chart.

    This research could also inform future work in Satyanarayan’s group that seeks to make interactive visualizations more accessible for blind readers who use a screen reader to access and interpret the information. 

    “The question of how to ensure that charts and graphs are accessible to screen reader users is both a socially important equity issue and a challenge that can advance the state-of-the-art in AI,” says Meredith Ringel Morris, director and principal scientist of the People + AI Research team at Google Research, who was not involved with this study. “By introducing a framework for conceptualizing natural language descriptions of information graphics that is grounded in end-user needs, this work helps ensure that future AI researchers will focus their efforts on problems aligned with end-users’ values.”

    Morris adds: “Rich natural-language descriptions of data graphics will not only expand access to critical information for people who are blind, but will also benefit a much wider audience as eyes-free interactions via smart speakers, chatbots, and other AI-powered agents become increasingly commonplace.”

    This research was supported by the National Science Foundation.

  • 3 Questions: Kalyan Veeramachaneni on hurdles preventing fully automated machine learning

    The proliferation of big data across domains, from banking to health care to environmental monitoring, has spurred increasing demand for machine learning tools that help organizations make decisions based on the data they gather.

    That growing industry demand has driven researchers to explore the possibilities of automated machine learning (AutoML), which seeks to automate the development of machine learning solutions in order to make them accessible for nonexperts, improve their efficiency, and accelerate machine learning research. For example, an AutoML system might enable doctors to use their expertise interpreting electroencephalography (EEG) results to build a model that can predict which patients are at higher risk for epilepsy — without requiring the doctors to have a background in data science.

    Yet, despite more than a decade of work, researchers have been unable to fully automate all steps in the machine learning development process. Even the most efficient commercial AutoML systems still require a prolonged back-and-forth between a domain expert, like a marketing manager or mechanical engineer, and a data scientist, making the process inefficient.

    Kalyan Veeramachaneni, a principal research scientist in the MIT Laboratory for Information and Decision Systems who has been studying AutoML since 2010, has co-authored a paper in the journal ACM Computing Surveys that details a seven-tiered schematic to evaluate AutoML tools based on their level of autonomy.

    A system at level zero has no automation and requires a data scientist to start from scratch and build models by hand, while a tool at level six is completely automated and can be easily and effectively used by a nonexpert. Most commercial systems fall somewhere in the middle.

    Veeramachaneni spoke with MIT News about the current state of AutoML, the hurdles that prevent truly automatic machine learning systems, and the road ahead for AutoML researchers.

    Q: How has automatic machine learning evolved over the past decade, and what is the current state of AutoML systems?

    A: In 2010, we started to see a shift, with enterprises wanting to invest in getting value out of their data beyond just business intelligence. So then came the question, maybe there are certain things in the development of machine learning-based solutions that we can automate? The first iteration of AutoML was to make our own jobs as data scientists more efficient. Can we take away the grunt work that we do on a day-to-day basis and automate that by using a software system? That area of research ran its course until about 2015, when we realized we still weren’t able to speed up this development process.

    Then another thread emerged. There are a lot of problems that could be solved with data, and they come from experts who know those problems, who live with them on a daily basis. These individuals have very little to do with machine learning or software engineering. How do we bring them into the fold? That is really the next frontier.

    There are three areas where these domain experts have strong input in a machine learning system. The first is defining the problem itself and then helping to formulate it as a prediction task to be solved by a machine learning model. Second, they know how the data have been collected, so they also know intuitively how to process that data. And then third, at the end, machine learning models only give you a very tiny part of a solution — they just give you a prediction. The output of a machine learning model is just one input to help a domain expert get to a decision or action.

    Q: What steps of the machine learning pipeline are the most difficult to automate, and why has automating them been so challenging?

    A: The problem-formulation part is extremely difficult to automate. For example, if I am a researcher who wants to get more government funding, and I have a lot of data about the content of the research proposals that I write and whether or not I receive funding, can machine learning help there? We don’t know yet. In problem formulation, I use my domain expertise to translate the problem into something that is more tangible to predict, and that requires somebody who knows the domain very well. And he or she also knows how to use that information post-prediction. That problem is refusing to be automated.

    There is one part of problem-formulation that could be automated. It turns out that we can look at the data and mathematically express several possible prediction tasks automatically. Then we can share those prediction tasks with the domain expert to see if any of them would help in the larger problem they are trying to tackle. Then once you pick the prediction task, there are a lot of intermediate steps you do, including feature engineering, modeling, etc., that are very mechanical steps and easy to automate.

    But defining the prediction tasks has typically been a collaborative effort between data scientists and domain experts because, unless you know the domain, you can’t translate the domain problem into a prediction task. And then sometimes domain experts don’t know what is meant by “prediction.” That leads to the major, significant back and forth in the process. If you automate that step, then machine learning penetration and the use of data to create meaningful predictions will increase tremendously.

    Then what happens after the machine learning model gives a prediction? We can automate the software and technology part of it, but at the end of the day, it is root cause analysis and human intuition and decision making. We can augment them with a lot of tools, but we can’t fully automate that.
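
    The “mechanical” steps Veeramachaneni describes (feature processing, model fitting, and hyperparameter search) are the parts that existing tooling already automates well. As a hedged illustration, the sketch below uses plain scikit-learn on a placeholder dataset as a stand-in for that kind of automation; it does not depict any particular AutoML system.

```python
# Sketch of automating the "mechanical" steps once a prediction task is fixed:
# preprocessing, model fitting, and hyperparameter search. Plain scikit-learn
# on a placeholder dataset, used here as a stand-in for an AutoML system.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

pipeline = Pipeline([
    ("scale", StandardScaler()),                       # automated preprocessing
    ("model", RandomForestClassifier(random_state=0)),
])

# Automated model selection over a small hyperparameter grid.
search = GridSearchCV(
    pipeline,
    param_grid={"model__n_estimators": [50, 200], "model__max_depth": [None, 5]},
    cv=5,
)
search.fit(X_train, y_train)
print("Best parameters:", search.best_params_)
print("Held-out accuracy:", search.score(X_test, y_test))
```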

    Q: What do you hope to achieve with the seven-tiered framework for evaluating AutoML systems that you outlined in your paper?

    A: My hope is that people start to recognize that some levels of automation have already been achieved and some still need to be tackled. In the research community, we tend to focus on what we are comfortable with. We have gotten used to automating certain steps, and then we just stick to it. Automating these other parts of the machine learning solution development is very important, and that is where the biggest bottlenecks remain.

    My second hope is that researchers will very clearly understand what domain expertise means. A lot of this AutoML work is still being conducted by academics, and the problem is that we often don’t do applied work. There is not a crystal-clear definition of what a domain expert is, and in itself “domain expert” is a very nebulous phrase. What we mean by domain expert is the expert in the problem you are trying to solve with machine learning. And I am hoping that everyone unifies around that because that would make things so much clearer.

    I still believe that we are not able to build that many models for that many problems, but even for the ones that we are building, the majority of them are not getting deployed and used in day-to-day life. The output of machine learning is just going to be another data point, an augmented data point, in someone’s decision making. How they make those decisions, based on that input, how that will change their behavior, and how they will adapt their style of working, that is still a big, open question. Once we automate everything, that is what’s next.

    We have to determine what has to fundamentally change in the day-to-day workflow of someone giving loans at a bank, or an educator trying to decide whether he or she should change the assignments in an online class. How are they going to use machine learning’s outputs? We need to focus on the fundamental things we have to build out to make machine learning more usable.

  • Study: Global cancer risk from burning organic matter comes from unregulated chemicals

    Whenever organic matter is burned, such as in a wildfire, a power plant, a car’s exhaust, or daily cooking, the combustion releases polycyclic aromatic hydrocarbons (PAHs) — a class of pollutants that is known to cause lung cancer.

    There are more than 100 known types of PAH compounds emitted daily into the atmosphere. Regulators, however, have historically relied on measurements of a single compound, benzo(a)pyrene, to gauge a community’s risk of developing cancer from PAH exposure. Now MIT scientists have found that benzo(a)pyrene may be a poor indicator of this type of cancer risk.

    In a modeling study appearing today in the journal GeoHealth, the team reports that benzo(a)pyrene plays a small part — about 11 percent — in the global risk of developing PAH-associated cancer. Instead, 89 percent of that cancer risk comes from other PAH compounds, many of which are not directly regulated.

    Interestingly, about 17 percent of PAH-associated cancer risk comes from “degradation products” — chemicals that are formed when emitted PAHs react in the atmosphere. Many of these degradation products can in fact be more toxic than the emitted PAH from which they formed.

    The team hopes the results will encourage scientists and regulators to look beyond benzo(a)pyrene, to consider a broader class of PAHs when assessing a community’s cancer risk.

    “Most of the regulatory science and standards for PAHs are based on benzo(a)pyrene levels. But that is a big blind spot that could lead you down a very wrong path in terms of assessing whether cancer risk is improving or not, and whether it’s relatively worse in one place than another,” says study author Noelle Selin, a professor in MIT’s Institute for Data, Systems, and Society, and the Department of Earth, Atmospheric and Planetary Sciences.

    Selin’s MIT co-authors include Jesse Kroll, Amy Hrdina, Ishwar Kohale, Forest White, and Bevin Engelward, and Jamie Kelly (who is now at University College London). Peter Ivatt and Mathew Evans at the University of York are also co-authors.

    Chemical pixels

    Benzo(a)pyrene has historically been the poster chemical for PAH exposure. The compound’s indicator status is largely based on early toxicology studies. But recent research suggests the chemical may not be the PAH representative that regulators have long relied upon.   

    “There has been a bit of evidence suggesting benzo(a)pyrene may not be very important, but this was from just a few field studies,” says Kelly, a former postdoc in Selin’s group and the study’s lead author.

    Kelly and his colleagues instead took a systematic approach to evaluate benzo(a)pyrene’s suitability as a PAH indicator. The team began by using GEOS-Chem, a global, three-dimensional chemical transport model that breaks the world into individual grid boxes and simulates within each box the reactions and concentrations of chemicals in the atmosphere.

    They extended this model to include chemical descriptions of how various PAH compounds, including benzo(a)pyrene, would react in the atmosphere. The team then plugged in recent data from emissions inventories and meteorological observations, and ran the model forward to simulate the concentrations of various PAH chemicals around the world over time.
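
    As a loose illustration of what “running a model forward” means at the level of a single grid box, the toy sketch below steps one pollutant concentration through time with constant emissions and first-order loss. The rate constants are invented, and real chemical transport models such as GEOS-Chem also move air between grid boxes and include far richer chemistry.

```python
# Toy single-box sketch of running a model "forward" in time: one pollutant,
# constant emissions, and first-order chemical loss. The rate constants are
# invented; real chemical transport models such as GEOS-Chem also transport
# air between grid boxes and include far richer chemistry.
dt = 3600.0              # time step: one hour, in seconds
n_steps = 24 * 30        # simulate one month
emission_rate = 1.0e-3   # concentration units added per second (hypothetical)
loss_rate = 2.0e-5       # first-order loss from reaction/deposition, 1/s (hypothetical)

concentration = 0.0
for _ in range(n_steps):
    # Forward-Euler update: production by emissions minus first-order loss.
    concentration += dt * (emission_rate - loss_rate * concentration)

print(f"Concentration after one month: {concentration:.1f}")
print(f"Steady state (emissions / loss rate): {emission_rate / loss_rate:.1f}")
```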

    Risky reactions

    In their simulations, the researchers started with 16 relatively well-studied PAH chemicals, including benzo(a)pyrene, and traced the concentrations of these chemicals, plus the concentration of their degradation products over two generations, or chemical transformations. In total, the team evaluated 48 PAH species.

    They then compared these concentrations with actual concentrations of the same chemicals, recorded by monitoring stations around the world. This comparison was close enough to show that the model’s concentration predictions were realistic.

    Then, within each of the model’s grid boxes, the researchers related the concentration of each PAH chemical to its associated cancer risk; to do this, they had to develop a new method based on previous studies in the literature to avoid double-counting risk from the different chemicals. Finally, they overlaid population density maps to predict the number of cancer cases globally, based on the concentration and toxicity of a specific PAH chemical in each location.

    Dividing the cancer cases by population produced the cancer risk associated with that chemical. In this way, the team calculated the cancer risk for each of the 48 compounds, then determined each chemical’s individual contribution to the total risk.
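
    The bookkeeping described above can be illustrated with a toy calculation: population-weight each compound’s per-cell risk, divide by total population, and express each compound’s share of the total. The numbers below are invented placeholders, not values from the study.

```python
# Toy illustration of the population-weighted risk bookkeeping. Concentrations,
# unit risks, and populations are invented placeholders, not study values.
import numpy as np

# Per-cell concentration of three hypothetical PAH species (rows: grid cells).
concentration = np.array([
    [1.0, 0.2, 0.05],
    [0.5, 0.8, 0.10],
    [0.2, 0.1, 0.60],
    [0.8, 0.4, 0.20],
])
unit_risk = np.array([0.01, 0.03, 0.20])      # lifetime cancer risk per unit concentration
population = np.array([1e6, 5e5, 2e6, 8e5])   # people living in each grid cell

# Expected cases attributable to each compound:
# sum over cells of concentration * unit risk * population.
cases_per_compound = population @ (concentration * unit_risk)

# Average individual risk and each compound's share of the total risk.
risk_per_compound = cases_per_compound / population.sum()
share = risk_per_compound / risk_per_compound.sum()

for name, s in zip(["compound A", "compound B", "compound C"], share):
    print(f"{name}: {100 * s:.1f}% of the total PAH-associated risk")
```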

    This analysis revealed that benzo(a)pyrene had a surprisingly small contribution, of about 11 percent, to the overall risk of developing cancer from PAH exposure globally. Eighty-nine percent of cancer risk came from other chemicals. And 17 percent of this risk arose from degradation products.

    “We see places where you can find concentrations of benzo(a)pyrene are lower, but the risk is higher because of these degradation products,” Selin says. “These products can be orders of magnitude more toxic, so the fact that they’re at tiny concentrations doesn’t mean you can write them off.”

    When the researchers compared calculated PAH-associated cancer risks around the world, they found significant differences depending on whether that risk calculation was based solely on concentrations of benzo(a)pyrene or on a region’s broader mix of PAH compounds.

    “If you use the old method, you would find the lifetime cancer risk is 3.5 times higher in Hong Kong versus southern India, but taking into account the differences in PAH mixtures, you get a difference of 12 times,” Kelly says. “So, there’s a big difference in the relative cancer risk between the two places. And we think it’s important to expand the group of compounds that regulators are thinking about, beyond just a single chemical.”

    The team’s study “provides an excellent contribution to better understanding these ubiquitous pollutants,” says Elisabeth Galarneau, an air quality expert and PhD research scientist in Canada’s Department of the Environment. “It will be interesting to see how these results compare to work being done elsewhere … to pin down which (compounds) need to be tracked and considered for the protection of human and environmental health.”

    This research was conducted in MIT’s Superfund Research Center and is supported in part by the National Institute of Environmental Health Sciences Superfund Basic Research Program, and the National Institutes of Health.

  • Data flow’s decisive role on the global stage

    In 2016, Meicen Sun came to a profound realization: “The control of digital information will lie at the heart of all the big questions and big contentions in politics.” A graduate student in her final year of study who is specializing in international security and the political economy of technology, Sun vividly recalls the emergence of the internet “as a democratizing force, an opener, an equalizer,” helping give rise to the Arab Spring. But she was also profoundly struck when nations in the Middle East and elsewhere curbed internet access to throttle citizens’ efforts to speak and mobilize freely.

    During her undergraduate and graduate studies, which came to focus on China and its expanding global role, Sun became convinced that digital constraints initially intended to prevent the free flow of ideas were also having enormous and growing economic impacts.

    “With an exceptionally high mobile internet adoption rate and the explosion of indigenous digital apps, China’s digital economy was surging, helping to drive the nation’s broader economic growth and international competitiveness,” Sun says. “Yet at the same time, the country maintained the most tightly controlled internet ecosystem in the world.”

    Sun set out to explore this apparent paradox in her dissertation. Her research to date has yielded both novel findings and troubling questions.  

    “Through its control of the internet, China has in effect provided protectionist benefits to its own data-intensive domestic sectors,” she says. “If there is a benefit to imposing internet control, given the absence of effective international regulations, does this give authoritarian states an advantage in trade and national competitiveness?” Following this thread, Sun asks, “What might this mean for the future of democracy as the world grows increasingly dependent on digital technology?”

    Protect or innovate

    Early in her graduate program, classes in capitalism and technology and public policy, says Sun, “cemented for me the idea of data as a factor of production, and the importance of cross-border information flow in making a country innovative.” This central premise serves as a springboard for Sun’s doctoral studies.

    In a series of interconnected research papers using China as her primary case, she is examining the double-edged nature of internet limits. “They accord protectionist benefits to domestic data-internet-intensive sectors, on the one hand, but on the other, act as a potential longer-term deterrent to the country’s capacity to innovate.”

    To pursue her doctoral project, advised by professor of political science Kenneth Oye, Sun is extracting data from a multitude of sources, including a website that has been routinely testing web domain accessibility from within China since 2011. This allows her to pin down when and to what degree internet control occurs. She can then compare this information to publicly available records on the expansion or contraction of data-intensive industrial sectors, enabling her to correlate internet control to a sector’s performance.

    Sun has also compiled datasets for firm-level revenue, scientific citations, and patents that permit her to measure aspects of China’s innovation culture. In analyzing her data she leverages both quantitative and qualitative methods, including one co-developed by her dissertation co-advisor, associate professor of political science In Song Kim. Her initial analysis suggests internet control prevents scholars from accessing knowledge available on foreign websites, and that if sustained, such control could take a toll on the Chinese economy over time.

    Of particular concern is the possibility that the economic success that flows from strict internet controls, as exemplified by the Chinese model, may encourage the rise of similar practices among emerging states or those in political flux.

    “The grim implication of my research is that without international regulation on information flow restrictions, democracies will be at a disadvantage against autocracies,” she says. “No matter how short-term or narrow these curbs are, they confer concrete benefits on certain economic sectors.”

    Data, politics, and economy

    Sun got a quick start as a student of China and its role in the world. She was born in Xiamen, a coastal Chinese city across from Taiwan, to academic parents who cultivated her interest in international politics. “My dad would constantly talk to me about global affairs, and he was passionate about foreign policy,” says Sun.

    Eager for education and a broader view of the world, Sun took a scholarship at 15 to attend school in Singapore. “While this experience exposed me to a variety of new ideas and social customs, I felt the itch to travel even farther away, and to meet people with different backgrounds and viewpoints from mine,” she says.

    Sun attended Princeton University where, after two years sticking to her “comfort zone” — writing and directing plays and composing music for them — she underwent a process of intellectual transition. Political science classes opened a window onto a larger world to which she had long been connected: China’s behavior as a rising power and the shifting global landscape.

    She completed her undergraduate degree in politics, and followed up with a master’s degree in international relations at the University of Pennsylvania, where she focused on China-U.S. relations and China’s participation in international institutions. She was on the path to completing a PhD at Penn when, Sun says, “I became confident in my perception that digital technology, and especially information sharing, were becoming critically important factors in international politics, and I felt a strong desire to devote my graduate studies, and even my career, to studying these topics.”

    Certain that the questions she hoped to pursue could best be addressed through an interdisciplinary approach with those working on similar issues, Sun began her doctoral program anew at MIT.

    “Doer mindset”

    Sun is hopeful that her doctoral research will prove useful to governments, policymakers, and business leaders. “There are a lot of developing states actively shopping between data governance and development models for their own countries,” she says. “My findings around the pros and cons of information flow restrictions should be of interest to leaders in these places, and to trade negotiators and others dealing with the global governance of data and what a fair playing field for digital trade would be.”

    Sun has engaged directly with policy and industry experts through her fellowships with the World Economic Forum and the Pacific Forum. And she has embraced questions that touch on policy outside of her immediate research: Sun is collaborating with her dissertation co-advisor, MIT Sloan Professor Yasheng Huang, on a study of the political economy of artificial intelligence in China for the MIT Task Force on the Work of the Future.

    This year, as she writes her dissertation papers, Sun will be based at Georgetown University, where she has a Mortara Center Global Political Economy Project Predoctoral Fellowship. In Washington, she will continue her journey to becoming a “policy-minded scholar, a thinker with a doer mindset, whose findings have bearing on things that happen in the world.”

  • Research collaboration puts climate-resilient crops in sight

    Any houseplant owner knows that changes in the amount of water or sunlight a plant receives can put it under immense stress. A dying plant brings certain disappointment to anyone with a green thumb. 

    But for farmers who make their living by successfully growing plants, and whose crops may nourish hundreds or thousands of people, the devastation of failing flora is that much greater. As climate change is poised to cause increasingly unpredictable weather patterns globally, crops may be subject to more extreme environmental conditions like droughts, fluctuating temperatures, floods, and wildfire. 

    Climate scientists and food systems researchers worry about the stress climate change may put on crops, and on global food security. In an ambitious interdisciplinary project funded by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), David Des Marais, the Gale Assistant Professor in the Department of Civil and Environmental Engineering at MIT, and Caroline Uhler, an associate professor in the MIT Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society, are investigating how plant genes communicate with one another under stress. Their research results can be used to breed plants more resilient to climate change.

    Crops in trouble

    Governing plants’ responses to environmental stress are gene regulatory networks, or GRNs, which guide the development and behaviors of living things. A GRN may comprise thousands of genes and proteins that all communicate with one another. GRNs help a particular cell, tissue, or organism respond to environmental changes by signaling certain genes to turn their expression on or off.

    Even seemingly minor or short-term changes in weather patterns can have large effects on crop yield and food security. An environmental trigger, like a lack of water during a crucial phase of plant development, can turn a gene on or off, and is likely to affect many others in the GRN. For example, without water, a gene enabling photosynthesis may switch off. This can create a domino effect, where the genes that rely on those regulating photosynthesis are silenced, and the cycle continues. As a result, when photosynthesis is halted, the plant may experience other detrimental side effects, like no longer being able to reproduce or defend against pathogens. The chain reaction could even kill a plant before it has the chance to be revived by a big rain.

    Des Marais says he wishes there was a way to stop those genes from completely shutting off in such a situation. To do that, scientists would need to better understand how exactly gene networks respond to different environmental triggers. Bringing light to this molecular process is exactly what he aims to do in this collaborative research effort.

    Solving complex problems across disciplines

    Despite their crucial importance, GRNs are difficult to study because of how complex and interconnected they are. Usually, to understand how a particular gene is affecting others, biologists must silence one gene and see how the others in the network respond. 

    For years, scientists have aspired to an algorithm that could synthesize the massive amount of information contained in GRNs to “identify correct regulatory relationships among genes,” according to a 2019 article in the Encyclopedia of Bioinformatics and Computational Biology. 

    “A GRN can be seen as a large causal network, and understanding the effects that silencing one gene has on all other genes requires understanding the causal relationships among the genes,” says Uhler. “These are exactly the kinds of algorithms my group develops.”

    Des Marais and Uhler’s project aims to unravel these complex communication networks and discover how to breed crops that are more resilient to the increased droughts, flooding, and erratic weather patterns that climate change is already causing globally.

    On top of the stresses of climate change, the world will demand 70 percent more food by 2050 to feed a booming population. “Food systems challenges cannot be addressed individually in disciplinary or topic area silos,” says Greg Sixt, J-WAFS’ research manager for climate and food systems. “They must be addressed in a systems context that reflects the interconnected nature of the food system.”

    Des Marais’ background is in biology, and Uhler’s in statistics. “Dave’s project with Caroline was essentially experimental,” says Renee J. Robins, J-WAFS’ executive director. “This kind of exploratory research is exactly what the J-WAFS seed grant program is for.”

    Getting inside gene regulatory networks

    Des Marais and Uhler’s work begins in a windowless basement on MIT’s campus, where 300 genetically identical Brachypodium distachyon plants grow in large, temperature-controlled chambers. The plant, which contains more than 30,000 genes, is a good model for studying important cereal crops like wheat, barley, maize, and millet. For three weeks, all plants receive the same temperature, humidity, light, and water. Then, half are slowly tapered off water, simulating drought-like conditions.

    Six days into the forced drought, the plants are clearly suffering. Des Marais’ PhD student Jie Yun takes tissues from 50 hydrated and 50 dry plants, freezes them in liquid nitrogen to immediately halt metabolic activity, grinds them up into a fine powder, and chemically separates the genetic material. The genes from all 100 samples are then sequenced at a lab across the street.

    The team is left with a spreadsheet listing the 30,000 genes found in each of the 100 plants at the moment they were frozen, and how many copies there were. Uhler’s PhD student Anastasiya Belyaeva inputs the massive spreadsheet into the computer program she developed and runs her novel algorithm. Within a few hours, the group can see which genes were most active in one condition over another, how the genes were communicating, and which were causing changes in others. 
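
    As a much simpler stand-in for that causal analysis, the sketch below shows the kind of first-pass comparison such a gene-by-sample matrix supports: flagging genes whose expression differs between the watered and drought-stressed groups. It is illustrative only and is not the algorithm used in this work.

```python
# Much-simplified sketch of comparing gene expression between the 50 watered
# and 50 drought-stressed samples. This only flags differentially expressed
# genes; it is a stand-in for, not a reimplementation of, the causal-network
# algorithm described above. All data here are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_genes, n_per_group = 30_000, 50

# Synthetic log-scale expression values for watered vs. drought-stressed plants.
watered = rng.normal(5.0, 1.0, size=(n_genes, n_per_group))
drought = rng.normal(5.0, 1.0, size=(n_genes, n_per_group))
drought[:200] += 2.0   # pretend 200 genes respond strongly to drought

# Two-sample t-test per gene, then a crude Bonferroni cutoff for multiple testing.
t_stat, p_values = stats.ttest_ind(watered, drought, axis=1)
significant = np.flatnonzero(p_values < 0.05 / n_genes)
print(f"{significant.size} genes flagged as differentially expressed")
```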

    The methodology captures important subtleties that could allow researchers to eventually alter gene pathways and breed more resilient crops. “When you expose a plant to drought stress, it’s not like there’s some canonical response,” Des Marais says. “There’s lots of things going on. It’s turning this physiologic process up, this one down, this one didn’t exist before, and now suddenly is turned on.” 

    In addition to Des Marais and Uhler’s research, J-WAFS has funded projects in food and water from researchers in 29 departments across all five MIT schools as well as the MIT Schwarzman College of Computing. J-WAFS seed grants typically fund seven to eight new projects every year.

    “The grants are really aimed at catalyzing new ideas, providing the sort of support [for MIT researchers] to be pushing boundaries, and also bringing in faculty who may have some interesting ideas that they haven’t yet applied to water or food concerns,” Robins says. “It’s an avenue for researchers all over the Institute to apply their ideas to water and food.”

    Alison Gold is a student in MIT’s Graduate Program in Science Writing.

  • A universal system for decoding any type of data sent across a network

    Every piece of data that travels over the internet — from paragraphs in an email to 3D graphics in a virtual reality environment — can be altered by the noise it encounters along the way, such as electromagnetic interference from a microwave or Bluetooth device. The data are coded so that when they arrive at their destination, a decoding algorithm can undo the negative effects of that noise and retrieve the original data.

    Since the 1950s, most error-correcting codes and decoding algorithms have been designed together. Each code had a structure that corresponded with a particular, highly complex decoding algorithm, which often required the use of dedicated hardware.

    Researchers at MIT, Boston University, and Maynooth University in Ireland have now created the first silicon chip that is able to decode any code, regardless of its structure, with maximum accuracy, using a universal decoding algorithm called Guessing Random Additive Noise Decoding (GRAND). By eliminating the need for multiple, computationally complex decoders, GRAND enables increased efficiency that could have applications in augmented and virtual reality, gaming, 5G networks, and connected devices that rely on processing a high volume of data with minimal delay.

    The research at MIT is led by Muriel Médard, the Cecil H. and Ida Green Professor in the Department of Electrical Engineering and Computer Science, and was co-authored by Amit Solomon and Wei Ann, both graduate students at MIT; Rabia Tugce Yazicigil, assistant professor of electrical and computer engineering at Boston University; Arslan Riaz and Vaibhav Bansal, both graduate students at Boston University; Ken R. Duffy, director of the Hamilton Institute at the National University of Ireland at Maynooth; and Kevin Galligan, a Maynooth graduate student. The research will be presented at the European Solid-State Device Research and Circuits Conference next week.

    Focus on noise

    One way to think of these codes is as redundant hashes (in this case, a series of 1s and 0s) added to the end of the original data. The rules for the creation of that hash are stored in a specific codebook.

    As the encoded data travel over a network, they are affected by noise, or energy that disrupts the signal, which is often generated by other electronic devices. When that coded data and the noise that affected them arrive at their destination, the decoding algorithm consults its codebook and uses the structure of the hash to guess what the stored information is.

    Instead, GRAND works by guessing the noise that affected the message, and uses the noise pattern to deduce the original information. GRAND generates a series of noise sequences in the order they are likely to occur, subtracts them from the received data, and checks to see if the resulting codeword is in a codebook.

    While the noise appears random in nature, it has a probabilistic structure that allows the algorithm to guess what it might be.

    “In a way, it is similar to troubleshooting. If someone brings their car into the shop, the mechanic doesn’t start by mapping the entire car to blueprints. Instead, they start by asking, ‘What is the most likely thing to go wrong?’ Maybe it just needs gas. If that doesn’t work, what’s next? Maybe the battery is dead?” Médard says.
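
    A toy software sketch makes the guessing procedure concrete: try noise patterns in order of increasing Hamming weight (the most likely first on a binary symmetric channel), remove each guess from the received word, and stop as soon as the result appears in the codebook. The codebook and received word below are invented for illustration, and the chip described next implements the idea very differently in hardware.

```python
# Toy sketch of hard-detection GRAND on a short binary code: guess noise
# patterns from most to least likely, "subtract" (XOR) each from the received
# word, and stop at the first result found in the codebook. Illustrative only;
# this is not the hardware implementation described in the article.
from itertools import combinations

def grand_decode(received, codebook, max_weight=3):
    n = len(received)
    # For a binary symmetric channel with error probability below 1/2, noise
    # patterns are most likely in order of increasing Hamming weight.
    for weight in range(max_weight + 1):
        for flipped_positions in combinations(range(n), weight):
            guess = list(received)
            for i in flipped_positions:
                guess[i] ^= 1          # remove the guessed noise bit
            guess = tuple(guess)
            if guess in codebook:      # membership check is all GRAND needs
                return guess
    return None                        # declare a failure past the weight budget

# Tiny hypothetical codebook: a length-5 code with 4 codewords.
codebook = {(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1), (1, 1, 0, 1, 1)}
received = (1, 1, 1, 0, 1)             # codeword (1, 1, 1, 0, 0) with one bit flipped
print(grand_decode(received, codebook))   # -> (1, 1, 1, 0, 0)
```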

    Novel hardware

    The GRAND chip uses a three-tiered structure, starting with the simplest possible solutions in the first stage and working up to longer and more complex noise patterns in the two subsequent stages. Each stage operates independently, which increases the throughput of the system and saves power.

    The device is also designed to switch seamlessly between two codebooks. It contains two static random-access memory chips: one cracks codewords while the other loads a new codebook and then switches over to decoding without any downtime.

    The researchers tested the GRAND chip and found it could effectively decode any moderate redundancy code up to 128 bits in length, with only about a microsecond of latency.

    Médard and her collaborators had previously demonstrated the success of the algorithm, but this new work showcases the effectiveness and efficiency of GRAND in hardware for the first time.

    Developing hardware for the novel decoding algorithm required the researchers to first toss aside their preconceived notions, Médard says.

    “We couldn’t go out and reuse things that had already been done. This was like a complete whiteboard. We had to really think about every single component from scratch. It was a journey of reconsideration. And I think when we do our next chip, there will be things with this first chip that we’ll realize we did out of habit or assumption that we can do better,” she says.

    A chip for the future

    Since GRAND only uses codebooks for verification, the chip not only works with legacy codes but could also be used with codes that haven’t even been introduced yet.

    In the lead-up to 5G implementation, regulators and communications companies struggled to find consensus as to which codes should be used in the new network. Regulators ultimately chose to use two types of traditional codes for 5G infrastructure in different situations. Using GRAND could eliminate the need for that rigid standardization in the future, Médard says.

    The GRAND chip could even open the field of coding to a wave of innovation.

    “For reasons I’m not quite sure of, people approach coding with awe, like it is black magic. The process is mathematically nasty, so people just use codes that already exist. I’m hoping this will recast the discussion so it is not so standards-oriented, enabling people to use codes that already exist and create new codes,” she says.

    Moving forward, Médard and her collaborators plan to tackle the problem of soft detection with a retooled version of the GRAND chip. In soft detection, the received data are less precise.

    They also plan to test the ability of GRAND to crack longer, more complex codes and adjust the structure of the silicon chip to improve its energy efficiency.

    The research was funded by the Battelle Memorial Institute and Science Foundation of Ireland.