More stories

  • Supporting sustainability, digital health, and the future of work

    The MIT and Accenture Convergence Initiative for Industry and Technology has selected three new research projects that will receive support from the initiative. The research projects aim to accelerate progress in meeting complex societal needs through new business convergence insights in technology and innovation.

    Established in MIT’s School of Engineering and now in its third year, the MIT and Accenture Convergence Initiative is furthering its mission to bring together technological experts from across business and academia to share insights and learn from one another. Recently, Thomas W. Malone, the Patrick J. McGovern (1959) Professor of Management, joined the initiative as its first-ever faculty lead. The research projects relate to three of the initiative’s key focus areas: sustainability, digital health, and the future of work.

    “The solutions these research teams are developing have the potential to have tremendous impact,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “They embody the initiative’s focus on advancing data-driven research that addresses technology and industry convergence.”

    “The convergence of science and technology driven by advancements in generative AI, digital twins, quantum computing, and other technologies makes this an especially exciting time for Accenture and MIT to be undertaking this joint research,” says Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences. “Our three new research projects focusing on sustainability, digital health, and the future of work have the potential to help guide and shape future innovations that will benefit the way we work and live.”

    The three new research projects and their researchers are described below.

    Accelerating the journey to net zero with industrial clusters

    Jessika Trancik is a professor at the Institute for Data, Systems, and Society (IDSS). Trancik’s research examines the dynamic costs, performance, and environmental impacts of energy systems to inform climate policy and accelerate beneficial and equitable technology innovation. Trancik’s project aims to identify how industrial clusters can enable companies to derive greater value from decarbonization, potentially making companies more willing to invest in the clean energy transition.

    To meet the ambitious climate goals that have been set by countries around the world, rising greenhouse gas emissions trends must be rapidly reversed. Industrial clusters — geographically co-located or otherwise-aligned groups of companies representing one or more industries — account for a significant portion of greenhouse gas emissions globally. With major energy consumers “clustered” in proximity, industrial clusters provide a potential platform to scale low-carbon solutions by enabling the aggregation of demand and the coordinated investment in physical energy supply infrastructure.

    In addition to Trancik, the research team working on this project will include Aliza Khurram, a postdoc in IDSS; Micah Ziegler, an IDSS research scientist; Melissa Stark, global energy transition services lead at Accenture; Laura Sanderfer, strategy consulting manager at Accenture; and Maria De Miguel, strategy senior analyst at Accenture.

    Eliminating childhood obesity

    Anette “Peko” Hosoi is the Neil and Jane Pappalardo Professor of Mechanical Engineering. A common theme in her work is the fundamental study of shape, kinematic, and rheological optimization of biological systems with applications to the emergent field of soft robotics. Her project will use both data from existing studies and synthetic data to create a return-on-investment (ROI) calculator for childhood obesity interventions so that companies can identify earlier returns on their investment beyond reduced health-care costs.

    Childhood obesity is too prevalent to be solved by a single company, industry, drug, application, or program. In addition to the physical and emotional impact on children, society bears a cost through excess health care spending, lost workforce productivity, poor school performance, and increased family trauma. Meaningful solutions require multiple organizations, representing different parts of society, working together with a common understanding of the problem, the economic benefits, and the return on investment. ROI is particularly difficult to defend for any single organization because investment and return can be separated by many years and involve asymmetric investments, returns, and allocation of risk. Hosoi’s project will consider the incentives for a particular entity to invest in programs in order to reduce childhood obesity.

    Hosoi will be joined by graduate students Pragya Neupane and Rachael Kha, both of IDSS, as well as a team from Accenture that includes Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences; Kaveh Safavi, senior managing director in Accenture Health Industry; and Elizabeth Naik, global health and public service research lead.

    Generating innovative organizational configurations and algorithms for dealing with the problem of post-pandemic employment

    Thomas Malone is the Patrick J. McGovern (1959) Professor of Management at the MIT Sloan School of Management and the founding director of the MIT Center for Collective Intelligence. His research focuses on how new organizations can be designed to take advantage of the possibilities provided by information technology. Malone will be joined in this project by John Horton, the Richard S. Leghorn (1939) Career Development Professor at the MIT Sloan School of Management, whose research focuses on the intersection of labor economics, market design, and information systems. Malone and Horton’s project will look to reshape the future of work with the help of lessons learned in the wake of the pandemic.

    The Covid-19 pandemic has been a major disrupter of work and employment, and it is not at all obvious how governments, businesses, and other organizations should manage the transition to a desirable state of employment as the pandemic recedes. Using large language models such as GPT-4, this project will look to identify new ways that companies can use AI to better match applicants to necessary jobs, create new types of jobs, assess skill training needed, and identify interventions to help include women and other groups whose employment was disproportionately affected by the pandemic.

    In addition to Malone and Horton, the research team will include Rob Laubacher, associate director and research scientist at the MIT Center for Collective Intelligence, and Kathleen Kennedy, executive director at the MIT Center for Collective Intelligence and senior director at MIT Horizon. The team will also include Nitu Nivedita, managing director of artificial intelligence at Accenture, and Thomas Hancock, data science senior manager at Accenture.

  • Understanding viral justice

    In the wake of the Covid-19 pandemic, the word “viral” has a new resonance, and it’s not necessarily positive. Ruha Benjamin, a scholar who investigates the social dimensions of science, medicine, and technology, advocates a shift in perspective. She thinks justice can also be contagious. That’s the premise of Benjamin’s award-winning book “Viral Justice: How We Grow the World We Want,” as she shared with MIT Libraries staff on a June 14 visit. 

    “If this pandemic has taught us anything, it’s that something almost undetectable can be deadly, and that we can transmit it without even knowing,” said Benjamin, professor of African American studies at Princeton University. “Doesn’t this imply that small things, seemingly minor actions, decisions, or habits, could have exponential effects in the other direction, tipping the scales towards justice?” 

    To seek a more just world, Benjamin exhorted library staff to notice the ways exclusion is built into our daily lives, showing examples of park benches with armrests at regular intervals. On the surface they appear welcoming, but they also make lying down — or sleeping — impossible. This idea is taken to the extreme with “Pay and Sit,” an art installation by Fabian Brunsing in the form of a bench that deploys sharp spikes on the seat if the user doesn’t pay a meter. It serves as a powerful metaphor for discriminatory design. 

    “Dr. Benjamin’s keynote was seriously mind-blowing,” said Cherry Ibrahim, human resources generalist in the MIT Libraries. “One part that really grabbed my attention was when she talked about benches purposely designed to prevent unhoused people from sleeping on them. There are these hidden spikes in our community that we might not even realize because they don’t directly impact us.” 

    Benjamin urged the audience to look for those “spikes,” which new technologies can make even more insidious — gender and racial bias in facial recognition, the use of racial data in software used to predict student success, algorithmic bias in health care — often in the guise of progress. She coined the term “the New Jim Code” to describe the combination of coded bias and the imagined objectivity we ascribe to technology. 

    “At the MIT Libraries, we’re deeply concerned with combating inequities through our work, whether it’s democratizing access to data or investigating ways disparate communities can participate in scholarship with minimal bias or barriers,” says Director of Libraries Chris Bourg. “It’s our mission to remove the ‘spikes’ in the systems through which we create, use, and share knowledge.”

    Calling out the harms encoded into our digital world is critical, argues Benjamin, but we must also create alternatives. This is where the collective power of individuals can be transformative. Benjamin shared examples of those who are “re-imagining the default settings of technology and society,” citing initiatives like the Data for Black Lives movement and the Detroit Community Technology Project. “I’m interested in the way that everyday people are changing the digital ecosystem and demanding different kinds of rights and responsibilities and protections,” she said.

    In 2020, Benjamin founded the Ida B. Wells Just Data Lab with a goal of bringing together students, educators, activists, and artists to develop a critical and creative approach to data conception, production, and circulation. Its projects have examined different aspects of data and racial inequality: assessing the impact of Covid-19 on student learning; providing resources that confront the experience of Black mourning, grief, and mental health; and developing a playbook for Black maternal mental health. Through the lab’s student-led projects, Benjamin sees the next generation re-imagining technology in ways that respond to the needs of marginalized people.

    “If inequity is woven into the very fabric of our society — we see it from policing to education to health care to work — then each twist, coil, and code is a chance for us to weave new patterns, practices, and politics,” she said. “The vastness of the problems that we’re up against will be their undoing.”

  • Study: Covid-19 has reduced diverse urban interactions

    The Covid-19 pandemic has reduced how often urban residents intersect with people from different income brackets, according to a new study led by MIT researchers.

    Examining the movement of people in four U.S. cities before and after the onset of the pandemic, the study found a 15 to 30 percent decrease in the number of visits residents were making to areas that are socioeconomically different than their own. In turn, this has reduced people’s opportunities to interact with others from varied social and economic spheres.

    “Income diversity of urban encounters decreased during the pandemic, and not just in the lockdown stages,” says Takahiro Yabe, a postdoc at the Media Lab and co-author of a newly published paper detailing the study’s results. “It decreased in the long term as well, after mobility patterns recovered.”

    Indeed, the study found a large immediate dropoff in urban movement in the spring of 2020, when new policies temporarily shuttered many types of institutions and businesses in the U.S. and much of the world due to the emergence of the deadly Covid-19 virus. But even after such restrictions were lifted and the overall amount of urban movement approached prepandemic levels, movement patterns within cities have narrowed; people now visit fewer places.

    “We see that changes like working from home, less exploration, more online shopping, all these behaviors add up,” says Esteban Moro, a research scientist at MIT’s Sociotechnical Systems Research Center (SSRC) and another of the paper’s co-authors. “Working from home is amazing and shopping online is great, but we are not seeing each other at the rates we were before.”

    The paper, “Behavioral changes during the Covid-19 pandemic decreased income diversity of urban encounters,” appears in Nature Communications. The co-authors are Yabe; Bernardo García Bulle Bueno, a doctoral candidate at MIT’s Institute for Data, Systems, and Society (IDSS); Xiaowen Dong, an associate professor at Oxford University; Alex Pentland, professor of media arts and sciences at MIT and the Toshiba Professor at the Media Lab; and Moro, who is also an associate professor at the University Carlos III of Madrid.

    A decline in exploration

    To conduct the study, the researchers examined anonymized cellphone data from 1 million users over a three-year period, starting in early 2019, with data focused on four U.S. cities: Boston, Dallas, Los Angeles, and Seattle. The researchers recorded visits to 433,000 specific “point of interest” locations in those cities, corroborated in part with records from Infogroup’s U.S. Business Database, an annual census of company information.  

    The researchers used U.S. Census Bureau data to categorize the socioeconomic status of the people in the study, placing everyone into one of four income quartiles, based on the average income of the census block (a small area) in which they live. The scholars made the same income-level assessment for every census block in the four cities, then recorded instances in which someone spent from 10 minutes to four hours in a census block other than their own, to see how often people visited areas in different income quartiles. 
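
    To make this procedure concrete, the sketch below shows one simple way an income-diversity score could be computed from a single resident’s visit records. The entropy-based definition, function name, and numbers are illustrative assumptions, not the paper’s exact specification.

    ```python
    # Hypothetical sketch: scoring the income diversity of one resident's
    # urban encounters. The entropy-based definition is an illustrative
    # assumption, not the study's exact metric.
    from collections import Counter
    from math import log

    def income_diversity(visited_quartiles):
        """Normalized Shannon entropy of the income quartiles (1-4) visited;
        0 = all visits in one quartile, 1 = spread evenly across all four."""
        counts = Counter(visited_quartiles)
        total = sum(counts.values())
        entropy = -sum((n / total) * log(n / total) for n in counts.values())
        return entropy / log(4)  # log(4) is the maximum with four quartiles

    print(income_diversity([1, 1, 1, 1, 2]))           # ~0.36, mostly one quartile
    print(income_diversity([1, 2, 3, 4, 1, 2, 3, 4]))  # 1.0, maximally diverse
    ```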

    Ultimately, the researchers found that by late 2021, the amount of urban movement overall was returning to prepandemic levels, but the scope of places residents were visiting had become more restricted.

    Among other things, people made many fewer visits to museums, leisure venues, transport sites, and coffee shops. Visits to grocery stores remained fairly constant — but people tend not to leave their socioeconomic circles for grocery shopping.

    “Early in the pandemic, people reduced their mobility radius significantly,” Yabe says. “By late 2021, that decrease flattened out, and the average dwell time people spent at places other than work and home recovered to prepandemic levels. What’s different is that exploration substantially decreased, around 5 to 10 percent. We also see less visitation to fun places.” He adds: “Museums are the most diverse places you can find, parks — they took the biggest hit during the pandemic. Places that are [more] segregated, like grocery stores, did not.”

    Overall, Moro notes, “When we explore less, we go to places that are less diverse.”

    Different cities, same pattern

    Because the study encompassed four cities with different types of policies about reopening public sites and businesses during the pandemic, the researchers could also evaluate what impact public health policies had on urban movement. But even in these different settings, the same phenomenon emerged, with a narrower range of mobility occurring by late 2021.

    “Despite the substantial differences in how cities dealt with Covid-19, the decrease in diversity and the behavioral changes were surprisingly similar across the four cities,” Yabe observes.

    The researchers emphasize that these changes in urban movement can have long-term societal effects. Prior research has shown a significant association between a diversity of social connections and greater economic success for people in lower-income groups. And while some interactions between people in different income quartiles might be brief and transactional, the evidence suggests that, on aggregate, other more substantial connections have also been reduced. Additionally, the scholars note, the narrowing of experience can also weaken civic ties and valuable political connections.

    “It’s creating an urban fabric that is actually more brittle, in the sense that we are less exposed to other people,” Moro says. “We don’t get to know other people in the city, and that is very important for policies and public opinion. We need to convince people that new policies and laws would be fair. And the only way to do that is to know other people’s needs. If we don’t see them around the city, that will be impossible.”

    At the same time, Yabe adds, “I think there is a lot we can do from a policy standpoint to bring people back to places that used to be a lot more diverse.” The researchers are currently developing further studies related to cultural and public institutions, as well as transportation issues, to try to evaluate urban connectivity in additional detail.

    “The quantity of our mobility has recovered,” Yabe says. “The quality has really changed, and we’re more segregated as a result.”

  • Q&A: A fresh look at data science

    As the leaders of a developing field, data scientists must often deal with a frustratingly slippery question: What is data science, precisely, and what is it good for?

    Alfred Spector is a visiting scholar in the MIT Department of Electrical Engineering and Computer Science (EECS), an influential developer of distributed computing systems and applications, and a successful tech executive with companies including IBM and Google. Along with three co-authors — Peter Norvig at Stanford University and Google, Chris Wiggins at Columbia University and The New York Times, and Jeannette M. Wing at Columbia — Spector recently published “Data Science in Context: Foundations, Challenges, Opportunities” (Cambridge University Press), which provides a broad, conversational overview of the wide-ranging field driving change in sectors ranging from health care to transportation to commerce to entertainment. 

    Here, Spector talks about data-driven life, what makes a good data scientist, and how his book came together during the height of the Covid-19 pandemic.

    Q: One of the most common buzzwords Americans hear is “data-driven,” but many might not know what that term is supposed to mean. Can you unpack it for us?

    A: Data-driven broadly refers to techniques or algorithms powered by data — they either provide insight or reach conclusions, say, a recommendation or a prediction. These algorithms power models that are increasingly woven into the fabric of science, commerce, and life, and they often provide excellent results. Their successes are far too numerous to list. However, one concern is that the proliferation of data makes it easy for us as students, scientists, or just members of the public to jump to erroneous conclusions. As just one example, our own confirmation biases make us prone to believing some data elements or insights “prove” something we already believe to be true. Additionally, we often tend to see causal relationships where the data only shows correlation. It might seem paradoxical, but data science makes critical reading and analysis of data all the more important.

    Q: What, to your mind, makes a good data scientist?

    A: [In talking to students and colleagues] I optimistically emphasize the power of data science and the importance of gaining the computational, statistical, and machine learning skills to apply it. But, I also remind students that we are obligated to solve problems well. In our book, Chris [Wiggins] paraphrases danah boyd, who says that a successful application of data science is not one that merely meets some technical goal, but one that actually improves lives. More specifically, I exhort practitioners to provide a real solution to problems, or else clearly identify what we are not solving so that people see the limitations of our work. We should be extremely clear so that we do not generate harmful results or lead others to erroneous conclusions. I also remind people that all of us, including scientists and engineers, are human and subject to the same human foibles as everyone else, such as various biases. 

    Q: You discuss Covid-19 in your book. While some short-range models for mortality were very accurate during the heart of the pandemic, you note the failure of long-range models to predict any of 2020’s four major geotemporal Covid waves in the United States. Do you feel Covid was a uniquely hard situation to model? 

    A: Covid was particularly difficult to predict over the long term because of many factors — the virus was changing, human behavior was changing, political entities changed their minds. Also, we didn’t have fine-grained mobility data (perhaps, for good reasons), and we lacked sufficient scientific understanding of the virus, particularly in the first year.

    I think there are many other domains which are similarly difficult. Our book teases out many reasons why data-driven models may not be applicable. Perhaps it’s too difficult to get or hold the necessary data. Perhaps the past doesn’t predict the future. If data models are being used in life-and-death situations, we may not be able to make them sufficiently dependable; this is particularly true as we’ve seen all the motivations that bad actors have to find vulnerabilities. So, as we continue to apply data science, we need to think through all the requirements we have, and the capability of the field to meet them. They often align, but not always. And, as data science seeks to solve problems in ever more important areas such as human health, education, transportation safety, etc., there will be many challenges.

    Q: Let’s talk about the power of good visualization. You mention the popular early-2000s Baby Name Voyager website as one that changed your view on the importance of data visualization. Tell us how that happened. 

    A: That website, recently reborn as the Name Grapher, had two characteristics that I thought were brilliant. First, it had a really natural interface, where you type the initial characters of a name and it shows a frequency graph of all the names beginning with those letters, and their popularity over time. Second, it’s so much better than a spreadsheet with 140 columns representing years and rows representing names, despite the fact it contains no extra information. It also provided instantaneous feedback with its display graph dynamically changing as you type. To me, this showed the power of a very simple transformation that is done correctly.
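
    As a rough illustration of that interaction, here is a minimal sketch that filters a tiny, invented name-by-year frequency table on a typed prefix, which is the core of the interface described above; the data and function name are hypothetical stand-ins for the site’s underlying dataset.

    ```python
    # Minimal sketch of the Name Grapher-style interaction: filter name
    # frequencies by a typed prefix. All data here is invented.
    import pandas as pd

    # rows: names; columns: years; values: babies per million (hypothetical)
    freq = pd.DataFrame(
        {1990: [4200, 310, 95], 2000: [2100, 880, 400], 2010: [900, 1500, 1200]},
        index=["Michael", "Mia", "Mila"],
    )

    def names_with_prefix(prefix: str) -> pd.DataFrame:
        """Return the frequency history of every name starting with `prefix`."""
        mask = freq.index.str.lower().str.startswith(prefix.lower())
        return freq[mask]

    print(names_with_prefix("Mi"))   # all three names
    print(names_with_prefix("Mil"))  # narrows to Mila as you keep typing
    ```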

    Q: When you and your co-authors began planning “Data Science in Context,” what did you hope to offer?

    A: We portray present data science as a field that’s already had enormous benefits, that provides even more future opportunities, but one that requires equally enormous care in its use. Referencing the word “context” in the title, we explain that the proper use of data science must consider the specifics of the application, the laws and norms of the society in which the application is used, and even the time period of its deployment. And, importantly for an MIT audience, the practice of data science must go beyond just the data and the model to the careful consideration of an application’s objectives, its security, privacy, abuse, and resilience risks, and even the understandability it conveys to humans. Within this expansive notion of context, we finally explain that data scientists must also carefully consider ethical trade-offs and societal implications.

    Q: How did you keep focus throughout the process?

    A: Much like in open-source projects, I played both the coordinating author role and also the role of overall librarian of all the material, but we all made significant contributions. Chris Wiggins is very knowledgeable on the Belmont principles and applied ethics; he was the major contributor of those sections. Peter Norvig, as the coauthor of a bestselling AI textbook, was particularly involved in the sections on building models and causality. Jeannette Wing worked with me very closely on our seven-element Analysis Rubric and recognized that a checklist for data science practitioners would end up being one of our book’s most important contributions. 

    From a nuts-and-bolts perspective, we wrote the book during Covid, using one large shared Google doc with weekly video conferences. Amazingly enough, Chris, Jeannette, and I didn’t meet in person at all, and Peter and I met only once — sitting outdoors on a wooden bench on the Stanford campus.

    Q: That is an unusual way to write a book! Do you recommend it?

    A: It would be nice to have had more social interaction, but a shared document, at least with a coordinating author, worked pretty well for something up to this size. The benefit is that we always had a single, coherent textual base, not dissimilar to how a programming team works together.

    This is a condensed, edited version of a longer interview that originally appeared on the MIT EECS website.

  • Companies use MIT research to identify and respond to supply chain risks

    In February 2020, MIT professor David Simchi-Levi predicted the future. In an article in Harvard Business Review, he and his colleague warned that the new coronavirus outbreak would throttle supply chains and shutter tens of thousands of businesses across North America and Europe by mid-March.

    For Simchi-Levi, who had developed new models of supply chain resiliency and advised major companies on how to best shield themselves from supply chain woes, the signs of disruption were plain to see. Two years later, the professor of engineering systems at the MIT Schwarzman College of Computing and the Department of Civil and Environmental Engineering, and director of the MIT Data Science Lab, has found a “flood of interest” from companies anxious to apply his Risk Exposure Index (REI) research to identify and respond to hidden risks in their own supply chains.

    His work on “stress tests” for critical supply chains and ways to guide global supply chain recovery were included in the 2022 Economic Report of the President presented to the U.S. Congress in April.

    It is rare for data science research to influence policy at the highest levels, Simchi-Levi says, but his models offer something that business needs now: a way to plan for a world of continuing global crisis without relying on historical precedent.

    “What the last two years showed is that you cannot plan just based on what happened last year or the last two years,” Simchi-Levi says.

    He recalled the famous quote, sometimes attributed to hockey great Wayne Gretzky, that good players don’t skate to where the puck is, but where the puck is going to be. “We are not focusing on the state of the supply chain right now, but what may happen six weeks from now, eight weeks from now, to prepare ourselves today to prevent the problems of the future.”

    Finding hidden risks

    At the heart of REI is a mathematical model of the supply chain that focuses on potential failures at different supply chain nodes — a flood at a supplier’s factory, or a shortage of raw materials at another factory, for instance. By calculating variables such as “time-to-recover” (TTR), which measures how long it will take a particular node to be back at full function, and time-to-survive (TTS), which identifies the maximum duration that the supply chain can match supply with demand after a disruption, the model focuses on the impact of disruption on the supply chain, rather than the cause of disruption.
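
    The comparison at the heart of this model lends itself to a simple decision rule, sketched below: a node whose time-to-recover exceeds the chain’s time-to-survive is a point of exposure, whatever its share of spend. The numbers and supplier names are invented for illustration; this is a sketch of the logic, not the REI model itself.

    ```python
    # Illustrative TTR/TTS check on invented data: flag any supply chain
    # node whose time-to-recover exceeds the time the chain can survive
    # without it. This mirrors the logic described above, not the full model.
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        annual_spend: float  # dollars spent at this supplier per year
        ttr_weeks: float     # time-to-recover after a disruption
        tts_weeks: float     # time supply can match demand without the node

    suppliers = [
        Node("engine plant", 2_000_000_000, ttr_weeks=4, tts_weeks=8),
        Node("10-cent fastener maker", 50_000, ttr_weeks=12, tts_weeks=2),
    ]

    for node in suppliers:
        exposed = node.ttr_weeks > node.tts_weeks
        print(f"{node.name}: {'HIGH RISK' if exposed else 'ok'} "
              f"(TTR {node.ttr_weeks}w vs TTS {node.tts_weeks}w, "
              f"spend ${node.annual_spend:,.0f})")
    ```

    Run on these invented numbers, the big-budget node comes out fine while the 10-cent part is flagged, the same inversion the Ford analysis uncovered below.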

    Even before the pandemic, catastrophic events such as the 2010 Iceland volcanic eruption and the 2011 Tohoku earthquake and tsunami in Japan were threatening these nodes. “For many years, companies from a variety of industries focused mostly on efficiency, cutting costs as much as possible, using strategies like outsourcing and offshoring,” Simchi-Levi says. “They were very successful doing this, but it has dramatically increased their exposure to risk.”

    Using their model, Simchi-Levi and colleagues began working with Ford Motor Company in 2013 to improve the company’s supply chain resiliency. The partnership uncovered some surprising hidden risks.

    To begin with, the researchers found that Ford’s “strategic suppliers” — the nodes of the supply chain where the company spent large amounts of money each year — had only moderate exposure to risk. Instead, the biggest risk “tended to come from tiny suppliers that provide Ford with components that cost about 10 cents,” says Simchi-Levi.

    The analysis also found that risky suppliers are everywhere across the globe. “There is this idea that if you just move suppliers closer to market, to demand, to North America or to Mexico, you increase the resiliency of your supply chain. That is not supported by our data,” he says.

    Rewards of resiliency

    By creating a virtual representation, or “digital twin,” of the Ford supply chain, the researchers were able to test out strategies at each node to see what would increase supply chain resiliency. Should the company invest in more warehouses to store a key component? Should it shift production of a component to another factory?

    Companies are sometimes reluctant to invest in supply chain resiliency, Simchi-Levi says, but the analysis isn’t just about risk. “It’s also going to help you identify savings opportunities. The company may be building a lot of misplaced, costly inventory, for instance, and our method helps them to identify these inefficiencies and cut costs.”

    Since working with Ford, Simchi-Levi and colleagues have collaborated with many other companies, including a partnership with Accenture, to scale the REI technology to a variety of industries including high-tech, industrial equipment, home improvement retailers, fashion retailers, and consumer packaged goods.

    Annette Clayton, the CEO of Schneider Electric North America and previously its chief supply chain officer, has worked with Simchi-Levi for 17 years. “When I first went to work for Schneider, I asked David and his team to help us look at resiliency and inventory positioning in order to make the best cost, delivery, flexibility, and speed trade-offs for the North American supply chain,” she says. “As the pandemic unfolded, the very learnings in supply chain resiliency we had worked on before became even more important and we partnered with David and his team again.”

    “We have used TTR and TTS to determine places where we need to develop and duplicate supplier capability, from raw materials to assembled parts. We increased inventories where our time-to-recover because of extended logistics times exceeded our time-to-survive,” Clayton adds. “We have used TTR and TTS to prioritize our workload in supplier development, procurement and expanding our own manufacturing capacity.”

    The REI approach can even be applied to an entire country’s economy, as the U.N. Office for Disaster Risk Reduction has done for developing countries such as Thailand in the wake of disastrous flooding in 2011.

    Simchi-Levi and colleagues have been motivated by the pandemic to enhance the REI model with new features. “Because we have started collaborating with more companies, we have realized some interesting, company-specific business constraints,” he says, which are leading to more efficient ways of calculating hidden risk.

  • Study: With masking and distancing in place, NFL stadium openings in 2020 had no impact on local Covid-19 infections

    As with most everything in the world, football looked very different in 2020. As the Covid-19 pandemic unfolded, many National Football League (NFL) games were played in empty stadiums, while other stadiums opened to fans at significantly reduced capacity, with strict safety protocols in place.

    At the time it was unclear what impact such large sporting events would have on Covid-19 case counts, particularly at a time when vaccination against the virus was not widely available.

    Now, MIT engineers have taken a look back at the NFL’s 2020 regular season and found that for this specific period during the pandemic, opening stadiums to fans while requiring face coverings, social distancing, and other measures had no impact on the number of Covid-19 infections in those stadiums’ local counties.

    As they write in a new paper appearing this week in the Proceedings of the National Academy of Sciences, “the benefits of providing a tightly controlled outdoor spectating environment — including masking and distancing requirements — counterbalanced the risks associated with opening.”

    The study concentrates on the NFL’s 2020 regular season (September 2020 to early January 2021), at a time when earlier strains of the virus dominated, before the rise of more transmissible Delta and Omicron variants. Nevertheless, the results may inform decisions on whether and how to hold large outdoor gatherings in the face of future public health crises.

    “These results show that the measures adopted by the NFL were effective in safely opening stadiums,” says study author Anette “Peko” Hosoi, the Neil and Jane Pappalardo Professor of Mechanical Engineering at MIT. “If case counts start to rise again, we know what to do: mask people, put them outside, and distance them from each other.”

    The study’s co-authors are members of MIT’s Institute for Data, Systems, and Society (IDSS), and include Bernardo García Bulle, Dennis Shen, and Devavrat Shah, the Andrew and Erna Viterbi Professor in the Department of Electrical Engineering and Computer Science (EECS).

    Preseason patterns

    Last year a group led by the University of Southern Mississippi compared Covid-19 case counts in the counties of NFL stadiums that allowed fans in, versus those that did not. Their analysis showed that stadiums that opened to large numbers of fans led to “tangible increases” in the local county’s number of Covid-19 cases.

    But there are a number of factors in addition to a stadium’s opening that can affect case counts, including local policies, mandates, and attitudes. As the MIT team writes, “it is not at all obvious that one can attribute the differences in case spikes to the stadiums given the enormous number of confounding factors.”

    To truly isolate the effects of a stadium’s opening, one could imagine tracking Covid cases in a county with an open stadium through the 2020 season, then turning back the clock, closing the stadium, then tracking that same county’s Covid cases through the same season, all things being equal.

    “That’s the perfect experiment, with the exception that you would need a time machine,” Hosoi says.

    As it turns out, the next best thing is synthetic control — a statistical method that is used to determine the effect of an “intervention” (such as the opening of a stadium) compared with the exact same scenario without that intervention.

    In synthetic control, researchers use a weighted combination of groups to construct a “synthetic” version of an actual scenario. In this case, the actual scenario is a county such as Dallas that hosts an open stadium. A synthetic version would be a county that looks similar to Dallas, only without a stadium. In the context of this study, a county that “looks” like Dallas has a similar preseason pattern of Covid-19 cases.

    To construct a synthetic Dallas, the researchers looked for surrounding counties without stadiums, that had similar Covid-19 trajectories leading up to the 2020 football season. They combined these counties in a way that best fit Dallas’ actual case trajectory. They then used data from the combined counties to calculate the number of Covid cases for this synthetic Dallas through the season, and compared these counts to the real Dallas.
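
    A minimal sketch of that construction, assuming weekly case counts, might look like the following: fit nonnegative donor-county weights that sum to 1 against the treated county’s pre-period trajectory, then use them to project the counterfactual “closed stadium” trajectory. The optimizer choice and all data here are illustrative assumptions, not the authors’ implementation.

    ```python
    # Synthetic-control sketch with invented data: weight donor counties so
    # their combined pre-period case curve matches the stadium county's,
    # then project what the post-period would have looked like unopened.
    import numpy as np
    from scipy.optimize import minimize

    def synthetic_control(treated_pre, donors_pre, donors_post):
        """treated_pre: (T,) pre-period cases in the stadium county.
        donors_pre: (T, J) cases in J no-stadium donor counties.
        donors_post: (S, J) donor cases after the stadium opened."""
        J = donors_pre.shape[1]
        loss = lambda w: np.sum((treated_pre - donors_pre @ w) ** 2)
        res = minimize(loss, np.full(J, 1 / J), method="SLSQP",
                       bounds=[(0, 1)] * J,
                       constraints={"type": "eq", "fun": lambda w: w.sum() - 1})
        return donors_post @ res.x, res.x

    rng = np.random.default_rng(0)
    donors_pre = rng.poisson(100, size=(12, 5)).astype(float)   # 12 pre weeks
    treated_pre = donors_pre @ np.array([0.5, 0.3, 0.2, 0.0, 0.0])
    donors_post = rng.poisson(120, size=(6, 5)).astype(float)   # 6 post weeks
    counterfactual, weights = synthetic_control(treated_pre, donors_pre, donors_post)
    print(weights.round(2))         # recovers the planted donor mix
    print(counterfactual.round(1))  # expected cases had the stadium stayed closed
    ```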

    The team carried out this analysis for every “stadium county.” They determined a county to be a stadium county if more than 10 percent of a stadium’s fans came from that county, which the researchers estimated based on attendance data provided by the NFL.

    “Go outside”

    Of the stadiums included in the study, 13 were closed through the regular season, while 16 opened with reduced capacity and multiple pandemic requirements in place, such as required masking, distanced seating, mobile ticketing, and enhanced cleaning protocols.

    The researchers found the trajectory of infections in all stadium counties mirrored that of synthetic counties, showing that the number of infections would have been the same if the stadiums had remained closed. In other words, they found no evidence that NFL stadium openings led to any increase in local Covid case counts.

    To check that their method wasn’t missing any case spikes, they tested it on a known superspreader: the Sturgis Motorcycle Rally, which was held in August of 2020. The analysis successfully picked up an increase in cases in Meade County, South Dakota, the rally’s host county, compared to a synthetic counterpart, in the two weeks following the rally.

    Surprisingly, the researchers found that several stadium counties’ case counts dipped slightly compared to their synthetic counterparts. In these counties — including Hamilton, Ohio, home of the Cincinnati Bengals — it appeared that opening the stadium to fans was tied to a dip in Covid-19 infections. Hosoi has a guess as to why:

    “These are football communities with dedicated fans. Rather than stay home alone, those fans may have gone to a sports bar or hosted indoor football gatherings if the stadium had not opened,” Hosoi proposes. “Opening the stadium under those circumstances would have been beneficial to the community because it makes people go outside.”

    The team’s analysis also revealed another connection: Counties with similar Covid trajectories also shared similar politics. To illustrate this point, the team mapped the county-wide temporal trajectories of Covid case counts in Ohio in 2020 and found them to be a strong predictor of the state’s 2020 electoral map.

    “That is not a coincidence,” Hosoi notes. “It tells us that local political leanings determined the temporal trajectory of the pandemic.”

    The team plans to apply their analysis to see how other factors may have influenced the pandemic.

    “Covid is a different beast [today],” she says. “Omicron is more transmissive, and more of the population is vaccinated. It’s possible we’d find something different if we ran this analysis on the upcoming season, and I think we probably should try.”

  • Physics and the machine-learning “black box”

    Machine-learning algorithms are often referred to as a “black box.” Once data are put into an algorithm, it’s not always known exactly how the algorithm arrives at its prediction. This can be particularly frustrating when things go wrong. A new mechanical engineering (MechE) course at MIT teaches students how to tackle the “black box” problem, through a combination of data science and physics-based engineering.

    In class 2.C161 (Physical Systems Modeling and Design Using Machine Learning), Professor George Barbastathis demonstrates how mechanical engineers can use their unique knowledge of physical systems to keep algorithms in check and develop more accurate predictions.

    “I wanted to take 2.C161 because machine-learning models are usually a ‘black box,’ but this class taught us how to construct a system model that is informed by physics so we can peek inside,” explains Crystal Owens, a mechanical engineering graduate student who took the course in spring 2021.

    As chair of the Committee on the Strategic Integration of Data Science into Mechanical Engineering, Barbastathis has had many conversations with mechanical engineering students, researchers, and faculty to better understand the challenges and successes they’ve had using machine learning in their work.

    “One comment we heard frequently was that these colleagues can see the value of data science methods for problems they are facing in their mechanical engineering-centric research; yet they are lacking the tools to make the most out of it,” says Barbastathis. “Mechanical, civil, electrical, and other types of engineers want a fundamental understanding of data principles without having to convert themselves to being full-time data scientists or AI researchers.”

    Additionally, as mechanical engineering students move on from MIT to their careers, many will need to manage data scientists on their teams someday. Barbastathis hopes to set these students up for success with class 2.C161.

    Bridging MechE and the MIT Schwarzman College of Computing

    Class 2.C161 is part of the MIT Schwarzman College of Computing “Computing Core.” The goal of these classes is to connect data science and physics-based engineering disciplines, like mechanical engineering. Students take the course alongside 6.C402 (Modeling with Machine Learning: from Algorithms to Applications), taught by professors of electrical engineering and computer science Regina Barzilay and Tommi Jaakkola.

    The two classes are taught concurrently during the semester, exposing students to both fundamentals in machine learning and domain-specific applications in mechanical engineering.

    In 2.C161, Barbastathis highlights how complementary physics-based engineering and data science are. Physical systems present a number of ambiguities and unknowns, ranging from temperature and humidity to electromagnetic forces. Data science can be used to predict these physical phenomena. Meanwhile, having an understanding of physical systems helps ensure the resulting output of an algorithm is accurate and explainable.

    “What’s needed is a deeper combined understanding of the associated physical phenomena and the principles of data science, machine learning in particular, to close the gap,” adds Barbastathis. “By combining data with physical principles, the new revolution in physics-based engineering is relatively immune to the ‘black box’ problem facing other types of machine learning.”

    Equipped with a working knowledge of machine-learning topics covered in class 6.C402 and a deeper understanding of how to pair data science with physics, students are charged with developing a final project that solves for an actual physical system.

    Developing solutions for real-world physical systems

    For their final project, students in 2.C161 are asked to identify a real-world problem that requires data science to address the ambiguity inherent in physical systems. After obtaining all relevant data, students are asked to select a machine-learning method, implement their chosen solution, and present and critique the results.

    Topics this past semester ranged from weather forecasting to the flow of gas in combustion engines, with two student teams drawing inspiration from the ongoing Covid-19 pandemic.

    Owens and her teammates, fellow graduate students Arun Krishnadas and Joshua David John Rathinaraj, set out to develop a model for the Covid-19 vaccine rollout.

    “We developed a method of combining a neural network with a susceptible-infected-recovered (SIR) epidemiological model to create a physics-informed prediction system for the spread of Covid-19 after vaccinations started,” explains Owens.

    The team accounted for various unknowns including population mobility, weather, and political climate. This combined approach resulted in a prediction of Covid-19’s spread during the vaccine rollout that was more reliable than using either the SIR model or a neural network alone.
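
    One common way to realize such a hybrid, sketched below on invented data, is to let a small learned model supply a time-varying transmission rate that the SIR equations then propagate. This is a generic illustration of the physics-informed pattern, not the team’s actual architecture; the features, model choice, and numbers are all assumptions.

    ```python
    # Generic physics-informed hybrid on invented data: a small regressor
    # predicts a time-varying transmission rate beta from covariates, and
    # the SIR equations carry that rate through the epidemic dynamics.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X = rng.uniform(size=(200, 2))     # invented features: mobility, weather
    beta_true = 0.15 + 0.25 * X[:, 0]  # synthetic training target
    nn = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000,
                      random_state=1).fit(X, beta_true)

    def sir_step(S, I, R, beta, gamma=0.1, dt=1.0):
        """One Euler step of the SIR equations, population normalized to 1."""
        new_inf = beta * S * I * dt
        new_rec = gamma * I * dt
        return S - new_inf, I + new_inf - new_rec, R + new_rec

    S, I, R = 0.99, 0.01, 0.0
    for day in range(60):
        beta = float(nn.predict([[0.6, 0.4]])[0])  # learned rate for today
        S, I, R = sir_step(S, I, R, beta)          # physics carries it forward
    print(f"after 60 days: S={S:.3f} I={I:.3f} R={R:.3f}")
    ```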

    Another team, including graduate student Yiwen Hu, developed a model to predict mutation rates in Covid-19, a topic that became all too pertinent as the Delta variant began its global spread.

    “We used machine learning to predict the time-series-based mutation rate of Covid-19, and then incorporated that as an independent parameter into the prediction of pandemic dynamics to see if it could help us better predict the trend of the Covid-19 pandemic,” says Hu.

    Hu, who had previously conducted research into how vibrations on coronavirus protein spikes affect infection rates, hopes to apply the physics-based machine-learning approaches he learned in 2.C161 to his research on de novo protein design.

    Whatever the physical system students addressed in their final projects, Barbastathis was careful to stress one unifying goal: the need to assess ethical implications in data science. While more traditional computing methods like face or voice recognition have proven to be rife with ethical issues, there is an opportunity to combine physical systems with machine learning in a fair, ethical way.

    “We must ensure that collection and use of data are carried out equitably and inclusively, respecting the diversity in our society and avoiding well-known problems that computer scientists in the past have run into,” says Barbastathis.

    Barbastathis hopes that by encouraging mechanical engineering students to be both ethics-literate and well-versed in data science, they can move on to develop reliable, ethically sound solutions and predictions for physics-based engineering challenges.

  • Studying learner engagement during the Covid-19 pandemic

    While massive open online classes (MOOCs) have been a significant trend in higher education for many years now, they have gained a new level of attention during the Covid-19 pandemic. Open online courses became a critical resource for a wide audience of new learners during the first stages of the pandemic — including students whose academic programs had shifted online, teachers seeking online resources, and individuals suddenly facing lockdown or unemployment and looking to build new skills.

    Mary Ellen Wiltrout, director of online and blended learning initiatives and lecturer in digital learning in the Department of Biology, and Virginia “Katie” Blackwell, currently an MIT PhD student in biology, published a paper this summer in the European MOOC Stakeholder Summit (EMOOCs 2021) conference proceedings evaluating data for the online course 7.00x (Introduction to Biology). Their research objective was to better understand whether the shift to online learning that occurred during the pandemic led to increased learner engagement in the course.

    Blackwell participated in this research as part of the Bernard S. and Sophie G. Gould MIT Summer Research Program (MSRP) in Biology, during the uniquely remote MSRPx-Biology 2020 student cohort. She collaborated on the project from Texas while completing her bachelor’s degree in biochemistry and molecular biology at the University of Texas at Dallas, and has since applied and been accepted into MIT’s PhD program in biology.

    “MSRP Biology was a transformative experience for me. I learned a lot about the nature of research and the MIT community in a very short period of time and loved every second of the program. Without MSRP, I would never have even considered applying to MIT for my PhD. After MSRP and working with Mary Ellen, MIT biology became my first-choice program and I felt like I had a shot at getting in,” says Blackwell.

    Many MOOC platforms experienced increased website traffic in 2020, with 30 new MOOC-based degrees and more than 60 million new learners.

    “We find that the tremendous, lifelong learning opportunities that MOOCs provide are even more important and sought-after when traditional education is disrupted. During the pandemic, people had to be at home more often, and some faced unemployment requiring a career transition,” says Wiltrout.

    Wiltrout and Blackwell wanted to build a deeper understanding of learner profiles rather than looking exclusively at enrollments. They looked at all available data, including enrollment demographics (i.e., country and “.edu” participants); proportion of learners engaged with videos, problems, and forums; number of individual engagement events with videos, problems, and forums; verification and performance; and the course “track” level — including auditing (for free) and verified (paying and receiving access to additional course content, including access to a comprehensive competency exam). They analyzed data from five runs of 7.00x in this study: three pre-pandemic runs of April, July, and November 2019 and two pandemic runs of March and July 2020. 
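
    As a hypothetical illustration of this kind of per-run comparison, the sketch below aggregates invented engagement records by course run and computes the share of learners who watched a video or posted to the forum; every field name and figure is a stand-in, not the study’s data.

    ```python
    # Hypothetical per-run engagement summary with invented records.
    import pandas as pd

    events = pd.DataFrame({
        "run":           ["2019-04", "2019-04", "2020-03", "2020-03", "2020-07"],
        "learner_id":    [1, 2, 3, 4, 5],
        "videos_viewed": [12, 0, 30, 5, 0],
        "forum_posts":   [0, 1, 2, 0, 0],
    })

    summary = events.groupby("run").agg(
        learners=("learner_id", "nunique"),
        pct_viewed_video=("videos_viewed", lambda v: 100 * (v > 0).mean()),
        pct_posted=("forum_posts", lambda p: 100 * (p > 0).mean()),
    )
    print(summary)
    ```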

    The March 2020 run had the same count of verified-track participants as all three pre-pandemic runs combined. The July 2020 run enrolled nearly as many verified-track participants as the March 2020 run. Wiltrout says that introductory biology content may have attracted great attention during the early days and months of the Covid-19 pandemic, as people may have had a new (or renewed) interest in learning about (or reviewing) viruses, RNA, the inner workings of cells, and more.

    Wiltrout and Blackwell found that the enrollment count for the March 2020 run of the course increased at almost triple the rate of the three pre-pandemic runs. During the early days of March 2020, the enrollment metrics appeared similar to enrollment metrics for the April 2019 run — both in rate and count — but the enrollment rate increased sharply around March 15, 2020. The July 2020 run began with more than twice as many learners already enrolled by the first day of the course, but continued with half the enrollment rate of the March 2020 course. In terms of learner demographics, during the pandemic, there was a higher proportion of learners with .edu addresses, indicating that MOOCs were often used by students enrolled in other schools. 

    Viewings of course videos increased at the beginning of the pandemic. During the March 2020 run, both verified-track and certified participants viewed far more unique videos during March 2020 than in the pre-pandemic runs of the course; even auditor-track learners — not aiming for certification — still viewed all videos offered. During the July 2020 run, however, both verified-track and certified participants viewed far fewer unique videos than during all prior runs. The proportion of participants who viewed at least one video decreased in the July 2020 run to 53 percent, from a mean of 64 percent in prior runs. Blackwell and Wiltrout say that this decrease — as well as the overall dip in participation in July 2020 — might be attributed to shifting circumstances for learners that allowed for less time to watch videos and participate in the course, as well as some fatigue from the extra screen time.

    The study found that 4.4 percent of March 2020 participants and 4.5 percent of July 2020 participants engaged through forum posting — which was 1.4 to 3.3 times higher than pre-pandemic proportions of forum posting. The increase in forum engagement may point to a desire for community engagement during a time when many were isolated and sheltering in place.

    “Through the day-to-day work of my research team and also through the engagement of the learners in 7.00x, we can see that there is great potential for meaningful connections in remote experiences,” says Wiltrout. “An increase in participation for an online course may not always remain at the same high level, in the long term, but overall, we’re continuing to see an increase in the number of MOOCs and other online programs offered by all universities and institutions, as well as an increase in online learners.”