More stories

  • Statistical model improves analysis of skin conductance

    Electrodermal activity (EDA) — the sweat-induced fluctuations of skin conductance made famous in TV dramatizations of lie-detector tests — can be a powerful indicator of subconscious, or “sympathetic,” nervous system activity for all kinds of purposes, but only if it is analyzed optimally. In a new study in the Proceedings of the National Academy of Sciences, an MIT-based team of scientists provides a new, fast, and accurate statistical model for analyzing EDA.
    “Only so much of EDA is intuitive just by looking at the signal,” says Sandya Subramanian, a graduate student in the Harvard-MIT Health Sciences and Technology program and the study’s lead author. Meanwhile, existing mathematical methods of analysis either compute averages of the signal that obscure its instantaneous nature, or inefficiently force measurements into a fit with signal processing models that have nothing to do with what’s going on in the body.
    To make EDA analysis faster and more accurate for interpreting internal cognitive states (like anxiety) or physiological states (like sleep), the team instead sought a statistical model that matches with the actual physiology of sweat. When stimulated by the sympathetic nervous system, glands under the skin build up a reservoir of sweat and then release it when they are full. This kind of process, called “integrate-and-fire,” is also characteristic of diverse natural phenomena like the electrical spiking of nerve cells and geyser eruptions, says senior author Emery N. Brown, the Edward Hood Taplin Professor at The Picower Institute for Learning and Memory and the Institute for Medical Engineering and Science at MIT.
    A key insight of the study was the recognition that there is a well-established statistical formula for describing integrate-and-fire systems called an “inverse Gaussian” that could provide a principled way to model EDA signals.
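    As a rough illustration of the idea (a sketch, not the authors' code), the snippet below fits an inverse Gaussian distribution to the intervals between successive EDA pulses, the quantity that integrate-and-fire dynamics naturally produce; the pulse times and the use of scipy are assumptions for illustration only.

    ```python
    # Minimal sketch: model the intervals between EDA pulses with an inverse
    # Gaussian distribution, as integrate-and-fire dynamics suggest. The pulse
    # times are invented; a real analysis would extract them from a measured
    # skin-conductance trace.
    import numpy as np
    from scipy import stats

    pulse_times_s = np.array([3.1, 9.8, 14.2, 22.7, 28.9, 41.3, 47.0, 58.6])  # hypothetical
    intervals = np.diff(pulse_times_s)

    # Fit the inverse Gaussian (Wald) distribution, fixing the location at zero
    # so only the shape and scale parameters vary.
    shape, loc, scale = stats.invgauss.fit(intervals, floc=0)
    print(f"fitted shape={shape:.3f}, scale={scale:.3f}")

    # Quick goodness-of-fit check against the empirical intervals.
    ks_stat, p_value = stats.kstest(intervals, "invgauss", args=(shape, loc, scale))
    print(f"KS statistic={ks_stat:.3f}, p={p_value:.3f}")
    ```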
    “There is a push away from modeling actual physiology to just using off-the-shelf machine learning,” says Brown, who is also an anesthesiologist at Massachusetts General Hospital and a professor at Harvard University. “But we would have missed a very simple, straightforward, and even elegant description that is a readout of the body’s autonomic state.”
    Led by Subramanian, the study team, which also included MGH researcher Riccardo Barbieri, formulated an inverse Gaussian model of EDA, and then put it to the test with 11 volunteers who wore skin conductance monitors for an hour as they sat quietly, read, or watched videos. Even while “at rest” people’s thoughts and feelings wander, creating ample variation in the EDA signal. Nevertheless, after analysis of all 11, the inverse Gaussian produced a tight fit with their actual readings.
    The modeling was able to account for smaller peaks in EDA activity that other methods typically exclude, as well as the degree of “bumpiness” of the signal, as indicated by the length of the intervals between the pulses, Subramanian said.
    In nine of the 11 cases, adding one of a few related statistical models tightened the inverse Gaussian’s fit a little further.
    Subramanian said that in practical use, an EDA monitoring system based on an inverse Gaussian model alone could immediately be useful, but it could also be quickly fine-tuned by initial readings from a subject to apply the best combination of models to fit the raw data.
    Even with a bit of blending of models, the new approach will be quicker, more computationally efficient, and more readily interpretable than less-principled analysis methods, the authors said, because the tight coupling to physiology requires varying only a few parameters to maintain a good fit with the readings. That’s important because if the job of an EDA monitoring system is to detect significant deviations in the signal from normal levels, such as when someone feels acute discomfort, that comparison can only be made based on an accurate, real-time model of what a subject’s normal and significantly abnormal levels are.
    Indeed, among the next steps in the work are tests of the model in subjects under a wider range of conditions ranging from sleep to emotional or physical stimulation and even disease states such as depression.
    “Our findings provide a principled, physiologically based approach for extending EDA analyses to these more complex and important applications,” the authors conclude.
    The JPB Foundation, the National Science Foundation, and the National Institutes of Health provided funding for the research.

  • Fotini Christia named director of the Sociotechnical Systems Research Center

    Professor Fotini Christia has been named the director of the Sociotechnical Systems Research Center (SSRC) at MIT.
    A professor in the Department of Political Science, Christia stepped into her new role with SSRC on Oct. 1. The interdisciplinary center, part of the Institute for Data, Systems, and Society in the MIT Stephen A. Schwarzman College of Computing, focuses on the study of high-impact, complex societal challenges that shape our world.
    Christia succeeds Ali Jadbabaie, the JR East Professor of Engineering, who has led SSRC since 2016. Jadbabaie recently stepped down to become the new head of the Department of Civil and Environmental Engineering.
    “Fotini’s breadth as a social scientist, on-the-ground approach, use of data science and computational techniques, and application of novel methods to understand how societies are being shaped in diverse areas, made her a natural fit to lead SSRC into the next chapter,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing. “I’m delighted to welcome her and look forward to collaborating on behalf of the college and the Institute.”
    Christia’s research interests deal with the political economy of conflict and development in the Muslim world, for which she has done extensive experimental and survey-related fieldwork in Afghanistan, Bosnia-Herzegovina, Iraq, and Yemen. She is presently using cellphone and social media data in ongoing research on refugee return in Syria, and on gender-based violence in Egypt during Covid-19.
    She is the author of “Alliance Formation in Civil War” (Cambridge University Press, 2012), which argues that alliances among warring groups are not fixed along ethnic or religious lines, but rather are dynamic, formed for instrumental reasons that often reflect shifts in the balance of power. Her book was awarded the Luebbert Award for Best Book in Comparative Politics, the Lepgold Prize for Best Book in International Relations, and a Distinguished Book Award from the International Studies Association.
    Her research has also appeared in Science, Review of Economic Studies, IEEE Transactions on Network Science and Engineering, and American Political Science Review, among other journals, and her opinion pieces have been published in Foreign Affairs, The New York Times, and The Washington Post. She has been awarded an inaugural Andrew Carnegie fellowship and a Harvard Academy fellowship.
    A native of Greece, where she grew up in the port city of Salonika, Christia moved to the United States to attend college at Columbia University. She graduated magna cum laude in 2001 with a joint BA in economics–operations research and an MA in international affairs. She joined the MIT faculty in 2008 after receiving her PhD in public policy from Harvard University.

  • Finding patterns in the noise

    When social scientists administer surveys and questionnaires, they cannot always count on the scrupulous cooperation of their respondents: It’s human nature to get distracted when faced with a form. So how can researchers sort through what may be unreliable data to identify statistically significant answers to their questions? That’s where Shiyao “Sean” Liu comes in.
    “I have designed a tool for reducing measurement errors when respondents don’t pay serious attention to online questions,” says Liu, a sixth-year PhD candidate in political science. Through statistical methods he has devised, Liu can detect and eliminate random-seeming answers that make for a noisy dataset. “With cleaner data, it’s easier to discover patterns and generate meaningful results.”
    Liu’s work on this computational tool earned the Best Graduate Student Poster Award from the Society of Political Methodology in 2019. It is one thrust of his dissertation research, which focuses on optimizing social science survey methods and data analysis.
    Fast start
    After arriving at MIT in 2015 from Peking University with undergraduate degrees both in statistics and in philosophy, politics, and economics, Liu found himself in demand. During his first summer in Cambridge, Massachusetts, he was engaged by Lily Tsai, faculty director of the MIT Governance Lab, as both a field researcher and methodologist.
    “She was writing about public support for authoritarian regimes, and investigating the role played by retributive justice,” says Liu. “The idea is that when authoritarian leaders punish their lower-level officials for wrongdoing, public support rises for the leaders because people perceive that the regime is pursuing justice.”
    Liu visited China twice, and with MIT colleagues and local Chinese collaborators, helped conduct 1,600 face-to-face interviews.
    Some political science theories suggest that punishing corruption improves the image of authoritarian leaders because it makes them appear more competent. But with computational help from Liu, the research team learned that authoritarian image-building wasn’t just about making the trains run on time.
    “By leveraging new surveying methods, I was able to show that increased support for top leadership also flowed from people’s belief that leaders were behaving in a moral way — that they knew the difference between right and wrong,” he says.
    Liu, who co-authored a forthcoming paper with Tsai on the popularity of anti-corruption punishment, believes this research provides a useful prism for examining governments today.
    “When the Soviet Union collapsed in 1991, people thought democracy would be the only surviving political system in the world, but it wasn’t the case,” he says. “Regimes emerged that frequently used anti-corruption campaigns as a way of building up their popularity among the people.” Think Duterte in the Philippines and Bolsonaro in Brazil, says Liu. People are eager to prop up even the most non-democratic dictators if they perceive them as pursuing justice.
    Statistics for social sciences
    Liu was born and grew up in Shanghai, the son of a businessman and a warehouse worker. He quickly discovered an affinity for history and political science, with a special fascination for the workings of different political systems and ideologies.
    After Liu’s stellar results in China’s high-pressure college entrance exam (he scored at the very top of Shanghai’s cohort of 30,000 high schoolers), he received a full scholarship to Peking University. There he plunged into graduate-level courses on the social sciences and statistics. One of his undergraduate theses looked at whether mass shootings in different congressional districts in the United States had the effect of reducing votes for the representatives from those districts. He collected and analyzed 10 years of historical data, geocoding the sites of mass shooting events.
    By the end of college, says Liu, “I realized I wanted a career where I could basically use statistics to solve problems about society, in either politics or economics.” After serving in business consulting and research positions, he felt certain that his future lay in academics, and that a graduate degree in political science, at MIT in particular, was the right path.
    Liu’s decision to study abroad coincided with a more open time in China. “My generation saw the country playing more according to international rules,” he says. “Demand was building from the government and Chinese business to give Chinese students international experience so we could help them communicate better with the rest of the world, to minimize confusion and misunderstanding.” This, says Liu, helped a large cohort of Chinese students to transform themselves.
    Cleaning up messy data
    At MIT, Liu was drawn to the Political Methodology Lab and the work of its director, Associate Professor Teppei Yamamoto. Under Yamamoto’s mentorship, Liu began to zero in on ways he might advance quantitative methods for political science research.
    Liu turned toward the problem of clearing up statistical noise in social science studies: “I was doing online surveys myself, and realized that some people were spending 30 seconds, and others, 30 minutes,” he recalls. Liu came to adopt terms for these distinct types of respondents: “fast-forwarders,” who click through with little thought about the content, and “wanderers,” who take far longer to answer questions because they are simultaneously checking their email or meandering through YouTube. “Both the fast-forwarders and wanderers don’t pay attention, and their answers lead to great randomness in the dataset.”
    Liu embarked upon developing a statistical program that can identify both types of respondents in a dataset. “It’s impossible to say precisely that one person is a wanderer and another is a fast-forwarder,” says Liu. “But my program uses probability to show that particular respondents are likely to be one of these types, and to kick them out of the dataset.”
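    To make the idea concrete, here is a minimal sketch of one way a probability-based screen on completion times could work. It is not Liu's program; it simply fits a distribution to response times and flags respondents in the extreme tails, with all numbers invented for illustration.

    ```python
    # Minimal sketch: flag likely inattentive respondents from completion times.
    # Not Liu's actual method; just a tail-based probability screen.
    import numpy as np
    from scipy import stats

    completion_times_s = np.array(
        [45, 380, 410, 520, 35, 1800, 600, 430, 390, 2100, 28, 470], dtype=float
    )  # hypothetical per-respondent times

    # Fit a lognormal distribution to the times (location fixed at zero).
    shape, loc, scale = stats.lognorm.fit(completion_times_s, floc=0)
    dist = stats.lognorm(shape, loc, scale)

    # Respondents deep in the left tail look like "fast-forwarders"; those deep
    # in the right tail look like "wanderers". Both would be dropped.
    lower, upper = dist.ppf(0.05), dist.ppf(0.95)
    for t in completion_times_s:
        label = "fast-forwarder" if t < lower else "wanderer" if t > upper else "ok"
        print(f"{t:6.0f} s -> {label}")
    ```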
    Liu believes this tool could prove useful to many social scientists whose research deploys online surveys. “We could save researchers who are struggling to find a pattern in the noise of their data,” he says.
    Forward through the pandemic
    Since departing Cambridge in March due to the Covid-19 pandemic, Liu has been preparing drafts of his working papers for publication, while residing in his family home in Shanghai. His physical absence from MIT has posed a range of challenges: “I miss face-to-face check-ins with my advisors and colleagues, which inspired me, gave me new ideas about research, and created a sense of community,” he says. He also pines for the regular dinners, karaoke parties, and hikes with friends. The tense state of affairs between the United States and China, restricting travel, is “making Chinese students’ lives difficult, including mine,” says Liu.
    But there are consolations to being home, including ample support from friends and family, along with home-cooked meals. “The pandemic is an obstacle for everyone, and we’re all trying to overcome it in our own ways,” he says. “For me, it means pushing forward on my dissertation, and exploring faculty and postdoc positions all around the world.”

  • 3 Questions: Why getting ahead of Covid-19 requires modeling more than a health crisis

    Countries continue to have varying responses to the Covid-19 pandemic, with widely different outcomes in the number of confirmed cases and deaths as a result of the virus.
    A country’s actions, or lack of action, in responding to the pandemic is partly informed by models that predict the virus’ impact on various aspects of society. But Olivier de Weck, professor of aeronautics and astronautics and engineering systems at MIT, says that most of these models are short-sighted. He and experts from countries with wide-ranging responses to the pandemic have a paper in the September issue of Systems Engineering, addressing what they see as a crisis in Covid-19 modeling.
    The researchers show that the world’s scattered and inconsistent efforts to contain the virus can be traced, in part, to models that forecast impacts over just a few months and that view the pandemic as primarily a health crisis.
    Instead, the team is calling for a longer-range, holistic approach that models the Covid-19 pandemic as a complex system. The researchers have assembled a basic model that predicts the impacts of Covid-19 by addressing the complex interactions between a society’s health and its economy. De Weck spoke with MIT News about some surprising trends that their model reveals, and why viewing the pandemic from a systems standpoint will help countries get ahead of the virus.
    Q: From the start of the pandemic, it became clear that preserving society’s health while maintaining the economy would be a huge challenge. Has it really been the case that existing models have not addressed both health and economic impacts of the pandemic?
    A: It is natural that when an epidemic starts, decision-makers and the public see (and hope) that it will be a short-term event and that it will affect the health of only a small fraction of the population. Classic epidemiological models subdivide the population into different cohorts and predict the spread and statistical outcomes of the disease. While these models are useful to test the effectiveness of different countermeasures, they usually fail to model the tradeoff between economic losses and human losses. This is a typical reflection of the fact that scientific inquiry mostly occurs in silos. Combining medical, economic, and governance models into a unified view is antithetical to the classic disciplinary approach.
    Our work shows that we need to provide a larger systems framework to think about and quantify the bidirectional coupling between the health system, the economic system, and the governance system. This needs to happen in real-time by connecting and integrating disease models, economic impact analysis, and long-term predictions across multiple scales.
    Q: You’ve assembled a simple version of such a systems-based model. How does it work?
    A: The key ideas in our work are that first, models that capture the underlying social network structure of society (in a statistical sense) are more robust than the simpler compartment models. The fact that most of us have primary contacts mainly within our family and work environments means that there is an inherent resilience to disease spreading. The second important point is that countermeasures such as strict lockdowns create an economic cost such as lost work, and if maintained for too long can become counterproductive. However, not taking any countermeasures at all also has a huge financial cost to society in terms of human lives lost.
    This brings up the most delicate point that few scholars and politicians are willing to address: What is the economic value of a human life lost? Based on the actions taken by governments, we can actually infer implicitly how much economic value a government places on an average human life lost, or, said another way, how much it is willing to spend to prevent a fatality from occurring.
    Consider a scenario where there is a fast government response, such as ordering a lockdown within five days of detecting that 0.05 percent of the population has been infected by the virus, and maintaining strict compliance better than 80 percent for 30 days. We calculate that in this scenario the total losses, including the value of human lives lost (nominally valued at $1 million each), are only 27.8 percent of the losses of a “do nothing” baseline. In order for a government to justify a “do nothing policy” over a quick reaction scenario, it would have to implicitly value a human life lost at less than $108,600 — only about 10 percent of the nominal value — which is the marginal difference in the economic loss of work divided by the difference in lives lost due to the epidemic.
    This may be the case in countries with low GDP, like Brazil, that have responded poorly to the pandemic. We note that policy models that rely on explicitly stating an economic value of human life to justify government action will always be contested and controversial. However, without including such economic models in the overall systemic model of society it is not possible to rationally justify any policy, whether interventionist or not.
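    As a back-of-envelope sketch of the arithmetic de Weck describes, the break-even valuation is simply the extra economic (work) loss of acting quickly divided by the lives that action saves. The inputs below are placeholders chosen so the result lands on the $108,600 figure quoted above; they are not numbers from the paper.

    ```python
    # Back-of-envelope sketch of the implicit value-of-life calculation: the
    # break-even valuation equals the marginal economic loss of a quick lockdown
    # divided by the lives it saves. All inputs are illustrative placeholders.
    def implicit_value_of_life(extra_work_loss_usd: float, lives_saved: float) -> float:
        """Marginal economic loss per fatality avoided by reacting quickly."""
        return extra_work_loss_usd / lives_saved

    # Hypothetical example: a fast lockdown costs an extra $5.43 billion in lost
    # work but prevents 50,000 deaths relative to doing nothing.
    breakeven = implicit_value_of_life(extra_work_loss_usd=5.43e9, lives_saved=50_000)
    print(f"break-even value per life: ${breakeven:,.0f}")  # $108,600
    # A government valuing a life below this figure makes "do nothing" look
    # cheaper on paper; above it, the quick-reaction scenario wins.
    ```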
    Q: What are some trends that emerged through your systemic modeling approach?
    A: We ran different scenarios for how a society might respond to the pandemic based on a set of actions and their timing. These include taking no countermeasures, ordering a strict lockdown after some delay once a detection threshold is reached, maintaining the lockdown for a certain duration, easing restrictions, and varying how rigorously the lockdown, social distancing, and mask wearing are followed. In this way we found that the worst case is a situation where a lockdown is ordered late, only partially followed (with less than 80 percent compliance), and lifted too soon.
    This situation is close to what we observe in the United States, where we are hit by both extensive loss of human life and economic losses. The reason for this is that the late and only partial implementation of countermeasures allows the disease to become endemic, and leads to both high loss of human life and economic losses. Other countries such as China or Japan ordered strict lockdowns and were able to limit both loss of life and economic damages.
    In a nutshell, Covid-19 is a nonlinear control problem with delay and only partial observability — a huge challenge both in theory and in practice.
    Since our co-authors are from France, China, Singapore, Norway, and the U.S., we were able to essentially replicate the large variety of responses observed around the world. What we recommend going forward is the creation of an integrated information system at three levels — strategic, tactical, operational — that allows for rapid information flow and optimal responses at both the local and global level. The Covid Pass system at MIT, with weekly testing, health attestations, and detailed views by dorm, building, and department, comes close to what we recommend in this paper.
    Covid-19 is not simply a health crisis. It is a global crisis that couples the natural system, human society, the economic system and governance, in ways we have not seen in over a century. Only by viewing it and explicitly modeling it as a system of systems can we manage the crisis in an optimal way and move society to a better place.

  • Diverse international cohort first to earn MIT master's degrees in data, economics, and development policy

    This past January, 22 students from across the world joined the MIT campus as the first cohort in the new MIT master’s program in Data, Economics, and Development Policy (DEDP).
    Developed by MIT’s Department of Economics and the Abdul Latif Jameel Poverty Action Lab (J-PAL), the program represents a new approach to higher education by combining online coursework through the DEDP MicroMasters program with one residential semester at MIT and a summer capstone project. Eight months after they first arrived on campus, the inaugural cohort is now celebrating their graduation.
    As members of the program’s very first cohort, each student took a risk by enrolling — in some ways, they were building the program along with MIT and J-PAL. Though they came from a diverse range of backgrounds, ages, and countries, the cohort quickly forged a strong bond in their first weeks together, spent in “math boot camp” — a series of advanced math sessions common in many graduate programs designed to prepare students for their economics and statistics courses. 
    The students were united by their common goal to take an evidence-based and data-driven approach to effecting global change. This bond kept them together even while they were forced apart by the Covid-19 pandemic in March, when MIT moved classes online and asked students to return home.
    Through the challenging turn of events that unfolded, the cohort continued to support one another across thousands of miles and 12-hour time differences, gathering for Zoom birthday celebrations and virtual reading groups. Uniquely positioned to learn and succeed online from their MicroMasters experience, the DEDP cohort adapted as MIT’s classes went virtual for the remainder of the spring semester. 
    Addressing the class at their graduation celebration, faculty director and program co-founder Esther Duflo recognized their achievements and continued support for one another in the face of the pandemic: “Despite the fact that you had hardly ever met before you had to return again to the two-dimensional worlds of computer screens, you managed to maintain this unity of spirit in your diversity of experiences.”
    Taking on some of the world’s most pressing challenges
    Following the spring semester, the cohort started applying their training to their capstone internships. Many took a data-driven approach to tackling big questions, exploring the functionalities of an open source framework for online social experiments, investigating the impact of trade policy on international trade flows, and tracking the evolution of new occupations over time. Others studied how research can inform policy-making and developed training frameworks to strengthen the capacity of public servants to implement policies.
    Several students quickly pivoted to work on projects to inform responses to Covid-19, from investigating intimate partner violence in Italy during the lockdown to surveying private schools’ responses to the pandemic in the Dominican Republic and looking at social networks to study the transmission of Covid-19 in India.
    Students worked with a broad group of organizations spread across sectors and regions for their capstones, and several worked directly with MIT professors and J-PAL affiliated professors. Many undertook projects with J-PAL regional offices and partner organizations like the United Nations and World Bank.
    A number of students received job offers from their capstone internships. Upon graduating, Gailius Praninskas will continue working as a field coordinator with the World Bank’s Bureaucracy Lab, and JingKai Ong will continue on as a research associate with Precision Agriculture for Development. Several students plan to go on to PhD programs, and many plan to work on researching and implementing evidence-based policies and programs in their home countries. 
    Many also hope to open doors for other scholars from their countries. At the program’s graduation celebration, Lovemore Mawere noted that the opportunity to come to MIT has been empowering, sharing, “I never thought that I’d be able to come here and get a degree from MIT, but it’s a reality now.” Manil Zenaki added, “I never thought I would go to MIT, but life is full of surprises and I’m really thankful.”
    Paving the way for a new approach to higher education
    The first cohort in the DEDP master’s program at MIT pioneered a new approach to higher education, exceeding the expectations of faculty directors Abhijit Banerjee, Esther Duflo, and Benjamin Olken and program director Maya Duru when they developed the program. 
    “In a year of unprecedented challenges, we are heartened by the drive and accomplishments of these students, and the trail they have blazed for the future,” says J-PAL’s Global Executive Director Iqbal Dhaliwal.
    A testament to the potential of online and blended learning models, the DEDP 2020 graduates are setting high expectations for future cohorts and demonstrating the value of this effort to democratize higher education.

  • MIT-led team to develop software to help forecast space storms

    On a moonless night on Aug. 28, 1859, the sky began to bleed. The phenomenon behind the northern lights had gone global: an aurora stretching luminous, rainbow fingers across time zones and continents illuminated the night sky with an undulating backdrop of crimson. From New England to Australia, people stood in the streets looking up with admiration, inspiration, and fear as the night sky shimmered in Technicolor. But the beautiful display came with a cost. The global telegraph system — which at the time was responsible for nearly all long-distance communication — experienced widespread disruption. Some telegraph operators experienced electric shocks while sending and receiving messages; others witnessed sparks flying from cable pylons. Telegraph transmissions were halted for days.  
    The aurora and the damage that followed were later attributed to a geomagnetic storm caused by a series of coronal mass ejections (CMEs) that burst from the sun’s surface, raced across the solar system, and barraged our atmosphere with magnetic solar energy, wreaking havoc on the electricity that powered the telegraph system. Although we no longer rely on the global telegraph system to stay connected around the world, experiencing a geomagnetic storm on a similar scale in today’s world would still be catastrophic. Such a storm could cause worldwide blackouts, massive network failures, and widespread damage to the satellites that enable GPS and telecommunication — not to mention the potential threat to human health from increased levels of radiation. Unlike storms on Earth, solar storms’ arrival and intensity can be difficult to predict. Without a better understanding of space weather, we might not even see the next great solar storm coming until it’s too late.
    To advance our ability to forecast space weather the way we forecast weather on Earth, Richard Linares, an assistant professor in the Department of Aeronautics and Astronautics (AeroAstro) at MIT, is leading a multidisciplinary team of researchers to develop software that can effectively address this challenge. With better models, we can use historical observational data to better predict the impact of space weather events like CMEs, solar wind, and other space plasma phenomena as they interact with our atmosphere. Under the Space Weather with Quantified Uncertainties (SWQU) program, a partnership between the U.S. National Science Foundation (NSF) and NASA, the team was awarded a $3 million grant for their proposal “Composable Next Generation Software Framework.”
    “By bringing together experts in geospace sciences, uncertainty quantification, software development, management, and sustainability, we hope to develop the next generation of software for space weather modeling and prediction,” says Linares. “Improving space weather predictions is a national need, and we saw a unique opportunity at MIT to combine the expertise we have across campus to solve this problem.”
    Linares’ MIT collaborators include Philip Erickson, assistant director at MIT Haystack Observatory and head of Haystack’s atmospheric and geospace sciences group; Jaime Peraire, the H.N. Slater Professor of Aeronautics and Astronautics; Youssef Marzouk, professor of aeronautics and astronautics; Ngoc Cuong Nguyen, a research scientist in AeroAstro; Alan Edelman, professor of applied mathematics; and Christopher Rackauckas, instructor in the Department of Mathematics. External collaborators include Aaron Ridley (University of Michigan) and Boris Kramer (University of California at San Diego). Together, the team will focus on this challenge by creating a model-focused composable software framework that allows a wide variety of observation data collected across the world to be ingested into a global model of the ionosphere/thermosphere system.
    “MIT Haystack research programs include a focus on conditions in near-Earth space, and our NSF-sponsored Madrigal online distributed database provides the largest single repository of ground-based community data on space weather and its effects in the atmosphere using worldwide scientific observations. This extensive data includes ionospheric remote sensing information on total electron content (TEC), spanning the globe on a nearly continuous basis and calculated from networks of thousands of individual global navigation satellite system community receivers,” says Erickson. “TEC data, when analyzed jointly with results of next-generation atmosphere and magnetosphere modeling systems, provides a key future innovation that will significantly improve human understanding of critically important space weather effects.”
    The project aims to create a powerful, flexible software platform using cutting-edge computational tools to collect and analyze huge sets of observational data that can be easily shared and reproduced among researchers. The platform will also be designed to work even as computer technology rapidly advances and new researchers contribute to the project from new places, using new machines. Using Julia, a high-performance programming language developed by Edelman at MIT, researchers from all over the world will be able to tailor the software for their own purposes to contribute their data without having to rewrite the program from scratch.
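    As a loose illustration of what "composable" means here, the sketch below shows a plug-in style data-ingestion interface, written in Python rather than the project's Julia purely for readability. Every name in it (ObservationSource, TECSource, ingest) is hypothetical and not part of the SWQU software.

    ```python
    # Illustrative sketch of a composable ingestion interface: each data source
    # implements one small protocol, so contributors can add data without touching
    # the core model code. Names and structure are hypothetical.
    from typing import Dict, Iterable, Iterator, Protocol


    class ObservationSource(Protocol):
        name: str

        def fetch(self, start: str, end: str) -> Iterator[Dict]:
            """Yield observation records (time, location, measured quantity)."""
            ...


    class TECSource:
        """Hypothetical total-electron-content feed from a GNSS receiver network."""
        name = "gnss_tec"

        def fetch(self, start: str, end: str) -> Iterator[Dict]:
            # A real implementation would query a community database here.
            yield {"time": start, "lat": 42.6, "lon": -71.5, "tec": 18.4}


    def ingest(sources: Iterable[ObservationSource], start: str, end: str) -> list:
        """Pull every registered source into one record list for a global model."""
        return [rec for src in sources for rec in src.fetch(start, end)]


    print(ingest([TECSource()], "2020-09-01", "2020-09-02"))
    ```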
    “I’m very excited that Julia, already fast becoming the language of scientific machine learning, and a great tool for collaborative software, can play a key role in space weather applications,” says Edelman. 
    According to Linares, the composable software framework will serve as a foundation that can be expanded and improved over time, growing both the space weather prediction capabilities and the space weather modeling community itself.
    The MIT-led project was one of six projects selected for three-year grant awards under the SWQU program. Motivated by the White House National Space Weather Strategy and Action Plan and the National Strategic Computing Initiative, the goal of the SWQU program is to bring together teams from across scientific disciplines to advance the latest statistical analysis and high-performance computing methods within the field of space weather modeling.
    “One key goal of the SWQU program is development of sustainable software with built-in capability to evaluate likelihood and magnitude of electromagnetic geospace disturbances based on sparse observational data,” says Vyacheslav Lukin, NSF program director in the Division of Physics. “We look forward to this multidisciplinary MIT-led team laying the foundations for such development to enable advances that will transform our future space weather forecasting capabilities.”

  • Modeling the impact of testing, tracing, and quarantine

    Testing, contact tracing, and quarantining infected people are all tools in the effort to mitigate the spread of Covid-19. So are mask-wearing and social distancing. But what impact does each have? A study co-authored by MIT researchers finds that robust testing, contact tracing, and quarantining by household can keep cases within the capacity of the health-care system — preventing a “second wave” — while allowing for the reopening of some economic activities.
    The paper, published Aug. 5 in Nature Human Behaviour, details a novel model that integrates anonymized, real-time mobility data with census and demographic data to map Covid-19 transmission in the Boston, Massachusetts area. The authors include Esteban Moro, a visiting research scientist in the MIT Media Lab and MIT Connection Science, and Alex “Sandy” Pentland, director of MIT Connection Science and a professor in the Media Lab and the Institute for Data, Systems, and Society (IDSS).
    This research sheds new light on possible pitfalls and solutions as cities look to lift restrictions that have been in place throughout the summer in many locations. Using data from approximately 85,000 people in the greater Boston area, combined with known information about Covid-19 transmission rates, duration of stages, and other data points, the authors’ model forecasts the number of new cases and hospitalizations under various scenarios of lifted restrictions.
    “If we want to re-scale our lives, economy, and cities, we need to understand better how the infection is spreading across people and communities,” says Moro. “Shutting down the whole economy and our cities because of a second wave might not be needed if we include accurate information about how people are behaving, moving, shopping, et cetera in our society.”
    In establishing a baseline, the study found that unmitigated lifting of restrictions would likely lead to a “second wave” that would quickly overwhelm Boston’s health-care facilities, with a peak daily incidence of 25.2 newly infected individuals per 1,000 people, leading to a need for about 12 times the available intensive-care unit (ICU) beds.
    A second scenario, referred to as LIFT, assumed an additional eight weeks of stay-at-home order, followed by another four of partial reopening, including work and community spaces, but not full reopening of restaurants and other spaces with mass social gatherings. After the total 12-week period, there would be a full lifting of all restrictions. In the LIFT scenario, the modeled impact was still well beyond the capacity of health-care facilities, with a need for over nine times the ICU beds available at the peak of the likely second wave.
    It might be that only a safe, effective, and widely distributed vaccine will allow the world to return to life as usual. However, the authors propose a third scenario — called LET, short for Lift and Enhanced Tracing — that keeps cases and hospitalizations manageable while allowing for a wide return to work and social activity.
    The LET scenario involves the same LIFT measures, but adds robust testing, contact tracing of symptomatic people, and quarantining of all household members of people who came in close contact with someone who tests positive for the virus. After lifting restrictions, at rates of 50 percent detection of positive cases within two days of onset of symptoms, tracing of 40 percent of contacts, and quarantine of all household members of those contacts, the model shows just 0.29 people per thousand requiring hospitalization per day, compared with more than five per thousand under LIFT measures alone and more than seven per thousand under the unmitigated scenario. ICU beds would be more than adequate at all times under this scenario.
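    One way to picture the comparison is to treat each scenario as a parameter set fed to a transmission model. The sketch below encodes the three scenarios with the figures quoted above; the Scenario structure itself is illustrative and the authors' mobility-network model is not reproduced.

    ```python
    # Sketch: the three scenarios as parameter sets. Numbers are taken from the
    # article; the Scenario class and any simulator consuming it are illustrative.
    from dataclasses import dataclass


    @dataclass
    class Scenario:
        name: str
        weeks_stay_at_home: int      # additional weeks of stay-at-home order
        weeks_partial_reopen: int    # weeks of partial reopening before full lifting
        detection_rate: float        # share of cases detected within two days of symptoms
        tracing_rate: float          # share of contacts of detected cases that are traced
        household_quarantine: bool   # quarantine all household members of traced contacts


    UNMITIGATED = Scenario("unmitigated", 0, 0, 0.0, 0.0, False)
    LIFT = Scenario("LIFT", 8, 4, 0.0, 0.0, False)
    LET = Scenario("LET", 8, 4, 0.5, 0.4, True)

    # Reported peak hospitalization demand (per 1,000 residents per day):
    # unmitigated > 7, LIFT > 5, LET ~ 0.29; only LET stays within ICU capacity.
    ```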
    The advantage of whole-household quarantine is that it simplifies contact tracing, working at the level of small groups of people, rather than individuals. Followup calls to check for compliance would also be streamlined. Furthermore, the model assumes no additional precautions, such as masks and social distancing. Therefore, it is expected that new cases and hospitalizations could be even lower if people were to continue some of the practices that have helped combat the spread of Covid-19 thus far.
    This approach is not without sacrifice. Quarantining full households presents unique challenges — it might be hard for quarantined families to obtain necessities, and quarantining together with others with known risk of infection may not be desirable. The study notes that at the peak, with 40 percent contact tracing, as many as 9 percent of all people in the city could be under quarantine. However, this number would gradually decline to around 3 percent. The total number in quarantine could be further reduced if testing ramps up more significantly. The authors suggest that the trade-off of higher numbers of people in quarantine compared with the massively disruptive long-term social isolation policies that would otherwise be needed to keep new infections manageable is well worth it. Life could return to some degree of normalcy, and the economy could begin to recover.
    Since the study was carried out, Massachusetts has moved toward a manual tracing strategy in which thousands of people have been hired to trace potential infections. Moro explains that this could work if the number of cases is small and controlled, but it might be insufficient if the number of cases scales up. He also notes that hiring contact tracers has been problematic. He suggests a possible solution to deal with sudden growth in the number of cases: combine manual and digital contact tracing via an app.
    The model used in the study will continue to be developed and enhanced, and the authors plan to examine other cities beyond Boston. They will use real-time behavior data to investigate how infection is actually propagating and detect when, where, and why spreading events are happening.
    MIT Connection Science is a research group hosted by the Sociotechnical Systems Research Center, a part of IDSS.

  • New gene regulation model provides insight into brain development

    In every cell, RNA-binding proteins (RBPs) help tune gene expression and control biological processes by binding to RNA sequences. Researchers often assume that individual RBPs latch tightly to just one RNA sequence. For instance, an essential family of RBPs, the Rbfox family, was thought to bind one particular RNA sequence alone. However, it’s becoming increasingly clear that this idea greatly oversimplifies Rbfox’s vital role in development.
    Members of the Rbfox family are among the best-studied RBPs and have been implicated in mammalian brain, heart, and muscle development since their discovery 25 years ago. They influence how RNA transcripts are “spliced” together to form a final RNA product, and have been associated with disorders like autism and epilepsy. But this family of RBPs is compelling for another reason as well: until recently, it was considered a classic example of predictable binding.
    More often than not, it seemed, Rbfox proteins bound to a very specific sequence, or motif, of nucleotide bases, “GCAUG.” Occasionally, binding analyses hinted that Rbfox proteins might attach to other RNA sequences as well, but these findings were usually discarded. Now, a team of biologists from MIT has found that Rbfox proteins actually bind less tightly — but no less frequently — to a handful of other RNA nucleotide sequences besides GCAUG. These so-called “secondary motifs” could be key to normal brain development, and help neurons grow and assume specific roles.
    “Previously, possible binding of Rbfox proteins to atypical sites had been largely ignored,” says Christopher Burge, professor of biology and the study’s senior author. “But we’ve helped demonstrate that these secondary motifs form their own separate class of binding sites with important physiological functions.”
    Graduate student Bridget Begg is the first author of the study, published Aug. 17 in Nature Structural & Molecular Biology.
    “Two-wave” regulation
    After the discovery that GCAUG was the primary RNA binding site for mammalian Rbfox proteins, researchers characterized its binding in living cells using a technique called CLIP (crosslinking-immunoprecipitation). However, CLIP has several limitations. For example, it can indicate where a protein is bound, but not how much protein is bound there. It’s also hampered by some technical biases, including substantial false-negative and false-positive results.
    To address these shortcomings, the Burge lab developed two complementary techniques to better quantify protein binding, this time in a test tube: RBNS (RNA Bind-n-Seq), and later, nsRBNS (RNA Bind-n-Seq with natural sequences), both of which incubate an RBP of interest with a synthetic RNA library. First author Begg performed nsRBNS with naturally-occurring mammalian RNA sequences, and identified a variety of intermediate-affinity secondary motifs that were bound in the absence of GCAUG. She then compared her own data with publicly-available CLIP results to examine the “aberrant” binding that had often been discarded, demonstrating that signals for these motifs existed across many CLIP datasets.
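    For intuition about how a Bind-n-Seq-style analysis scores motifs, the sketch below computes k-mer enrichment in protein-bound reads relative to an input library. The reads are invented and the calculation is deliberately simplified; it is not the lab's RBNS pipeline.

    ```python
    # Simplified sketch of motif scoring in a Bind-n-Seq-style experiment: count
    # each k-mer in the protein-bound reads and in the input library, then take
    # the frequency ratio as an enrichment score. Reads here are invented.
    from collections import Counter


    def kmer_counts(reads, k=5):
        counts = Counter()
        for read in reads:
            for i in range(len(read) - k + 1):
                counts[read[i:i + k]] += 1
        return counts


    bound_reads = ["UUGCAUGAA", "AGCAUGCUU", "UGCUUGCAU"]  # hypothetical pulldown reads
    input_reads = ["UAUAUCGGA", "GGCAUAACU", "CCGUAUGGA"]  # hypothetical input reads

    bound = kmer_counts(bound_reads)
    background = kmer_counts(input_reads)
    total_bound, total_input = sum(bound.values()), sum(background.values())

    for kmer in sorted(bound, key=bound.get, reverse=True)[:3]:
        freq_bound = bound[kmer] / total_bound
        freq_input = (background.get(kmer, 0) + 1) / (total_input + 1)  # pseudocount
        print(f"{kmer}: enrichment ~ {freq_bound / freq_input:.1f}")
    ```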
    To probe the biological role of these motifs, Begg performed reporter assays to show that the motifs could regulate Rbfox’s RNA splicing behavior. Subsequently, computational analyses by Begg and co-author Marvin Jens using mouse neuronal data established a handful of secondary motifs that appeared to be involved in neuronal differentiation and cellular diversification.
    Based on analyses of these key secondary motifs, Begg and colleagues devised a “two-wave” model. Early in development, they believe, Rbfox proteins bind predominantly to high-affinity RNA sequences like GCAUG, in order to tune gene expression. Later on, as the Rbfox concentration increases, those primary motifs become fully occupied and Rbfox additionally binds to the secondary motifs. This results in a second wave of Rbfox-regulated RNA splicing with a different set of genes.
    Begg theorizes that the first wave of Rbfox proteins binds GCAUG sequences early in development, and she showed that they regulate genes involved in nerve growth, like cytoskeleton and membrane organization. The second wave appears to help neurons establish electrical and chemical signaling. In other cases, secondary motifs might help neurons specialize into different subtypes with different jobs.
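    A toy equilibrium-binding calculation helps illustrate the two-wave logic: occupancy of a site rises as [protein] / (Kd + [protein]), so a high-affinity motif saturates at low concentration while an intermediate-affinity motif only fills in once concentration climbs. The dissociation constants below are invented for illustration, not measured Rbfox affinities.

    ```python
    # Toy illustration of the "two-wave" model using simple equilibrium binding:
    # fractional occupancy = [protein] / (Kd + [protein]). Kd values are invented.
    kd_primary_nM = 10.0     # hypothetical high-affinity GCAUG-like site
    kd_secondary_nM = 300.0  # hypothetical intermediate-affinity secondary motif

    for rbfox_nM in [1, 10, 100, 1000]:  # rising Rbfox concentration over development
        occ_primary = rbfox_nM / (kd_primary_nM + rbfox_nM)
        occ_secondary = rbfox_nM / (kd_secondary_nM + rbfox_nM)
        print(f"[Rbfox] = {rbfox_nM:>4} nM: primary {occ_primary:.2f}, secondary {occ_secondary:.2f}")

    # At low concentration the primary motif dominates (first wave); once it nears
    # saturation, further increases mainly raise secondary-motif occupancy (second wave).
    ```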
    John Conboy, a molecular biologist at Lawrence Berkeley National Laboratory and an expert in Rbfox binding, says the Burge lab’s two-wave model clearly shows how a single RBP can bind different RNA sequences — regulating splicing of distinct gene sets and influencing key processes during brain development. “This quantitative analysis of RNA-protein interactions, in a field that is often semi-quantitative at best, contributes fascinating new insights into the role of RNA splicing in cell type specification,” he says.
    A binding spectrum
    The researchers suspect that this two-wave model is not unique to Rbfox. “This is probably happening with many different RBPs that regulate development and other dynamic processes,” Burge says. “In the future, considering secondary motifs will help us to better understand developmental disorders and diseases, which can occur when RBPs are over- or under-expressed.”
    Begg adds that secondary motifs should be incorporated into computer models that predict gene expression, in order to probe cellular behavior. “I think it’s very exciting that these more finely-tuned developmental processes, like neuronal differentiation, could be regulated by secondary motifs,” she says.
    Both Begg and Burge agree it’s time to consider the entire spectrum of Rbfox binding, which is highly influenced by factors like protein concentration, binding strength, and timing. According to Begg, “Rbfox regulation is actually more complex than we sometimes give it credit for.”
    This research was funded by the EMBO Long Term Fellowship and by a grant from the National Institutes of Health.