More stories

  • How to get more electric cars on the road

    A new study from researchers at MIT uncovers the kinds of infrastructure improvements that would make the biggest difference in increasing the number of electric cars on the road, a key step toward reducing greenhouse gas emissions from transportation.
    The researchers found that installing charging stations on residential streets, rather than just in central locations such as shopping malls, could have an outsized benefit. They also found that adding high-speed charging stations along highways and making supplementary vehicles more easily available to people who need to travel beyond the single-charge range of their electric vehicles could greatly increase the potential for vehicle electrification.
    The findings are reported today in the journal Nature Energy, in a paper by MIT associate professor of energy studies Jessika Trancik, graduate student Wei Wei, postdoc Sankaran Ramakrishnan, and former doctoral student Zachary Needell SM ’15, PhD ’18.
    The researchers developed a new methodology to identify charging solutions that would conveniently fit into people’s daily activities. They used data collected from GPS tracking devices in cars, as well as survey results about people’s daily driving habits and needs, including detailed data from the Seattle area and more general data from the U.S. as a whole. Greatly increasing the penetration of electric cars into the personal vehicle fleet is a central feature of climate mitigation policies at local, state, and federal levels, Trancik says. A goal of this study was “to better understand how to make these plans for rapid vehicle electrification a reality,” she adds.
    In deciding how to prioritize different kinds of improvements in vehicle charging infrastructure, she says, “the approach that we took methodologically was to emphasize building a better understanding of people’s detailed energy consuming behavior, throughout the day and year.”
    To do that, “we examine how different people are moving from location to location throughout the day, and where they are stopping,” she says. “And from there we’re able to look at when and where they would be able to charge without interrupting their daily travel activities.”
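    A minimal illustration of that idea, using made-up trip records, location labels, and a 6.6-kilowatt charging rate rather than anything from the study's model: parked intervals between trips become candidate charging windows.
```python
from dataclasses import dataclass

@dataclass
class Stop:
    location: str     # illustrative labels such as "work" or "home"
    arrive_hr: float  # arrival time, hours since midnight
    depart_hr: float  # departure time (may run past 24 for overnight stays)

def charging_windows(stops, min_dwell_hr=1.0, rate_kw=6.6):
    """List stops long enough to plug in, with the energy a charger could add.
    The 1-hour minimum dwell and 6.6 kW Level 2 rate are assumed figures."""
    windows = []
    for s in stops:
        dwell = s.depart_hr - s.arrive_hr
        if dwell >= min_dwell_hr:
            windows.append((s.location, dwell, dwell * rate_kw))
    return windows

# One illustrative day reconstructed from GPS-style trip data
day = [Stop("work", 9.0, 17.0), Stop("grocery", 17.5, 18.0), Stop("home", 18.5, 31.0)]
for loc, dwell, kwh in charging_windows(day):
    print(f"{loc}: parked {dwell:.1f} h, up to {kwh:.0f} kWh if a charger is there")
```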
    The team looked at both regular daily activities and the variations that occur over the course of a year. “The longitudinal view is important for capturing the different kinds of trips that a driver makes over time, so that we can determine the kinds of charging infrastructure needed to support vehicle electrification,” Wei says. 
    While the vast majority of people’s daily driving needs can be met by the range provided by existing lower-cost electric cars, as Trancik and her colleagues have reported, there are typically a few times when people need to drive much farther. Or, they may need to make more short trips than usual in a day, with little time to stop and recharge. These “high-energy days,” as the researchers call them, when drivers are consuming more than the usual amount of energy for their transportation needs, may only happen a handful of times per year, but they can be the deciding factor in people’s decision making about whether to go electric.
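    As a back-of-the-envelope sketch (with an assumed 0.30 kWh-per-mile consumption, a 60 kWh usable battery, and an invented mileage sample, not figures from the paper), a high-energy day is simply one whose driving demand exceeds a single charge:
```python
# Flag "high-energy days": days whose driving energy exceeds one full charge.
# The consumption rate, battery size, and mileage sample are all invented.
daily_miles = [22, 35, 18, 240, 30, 15, 310, 28]
kwh_per_mile, usable_kwh = 0.30, 60.0

high_energy_days = [m for m in daily_miles if m * kwh_per_mile > usable_kwh]
print(f"{len(high_energy_days)} of {len(daily_miles)} sampled days exceed one charge")
```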
    Even though battery technology is steadily improving and extending the maximum range of electric cars, that alone will not be enough to meet all drivers’ needs and achieve rapid emissions reductions. So, addressing the range issue through infrastructure is essential, Trancik says. The highest-capacity batteries tend to be the most expensive, and are not affordable to many, she points out, so getting infrastructure right is also important from an equity perspective.
    Being strategic in placing infrastructure where it can be most convenient and effective — and making drivers aware of it so they can easily envision where and when they will charge — could make a huge difference, Trancik says.
    “There are various ways to incentivize the expansion of such charging infrastructures,” she says. “There’s a role for policymakers at the federal level, for example, for incentives to encourage private sector competition in this space, and demonstration sites for testing out, through public-private partnerships, the rapid expansion of the charging infrastructure.” State and local governments can also play an important part in driving innovation by businesses, she says, and a number of them have already signaled their support for vehicle electrification.
    Providing easy access to alternative transportation for those high-energy days could also play a role, the study found. Vehicle companies may even find it advantageous to provide or partner with convenient rental services to help drive their electric car sales.
    In their analysis of driving habits in Seattle, for example, the team found that either adding highway fast-charging stations or increasing the availability of supplementary long-range vehicles for up to four days a year raised the share of households that could meet their driving needs with a lower-cost electric vehicle from 10 percent to 40 percent. This number rose to above 90 percent of households when fast-charging stations, workplace charging, overnight public charging, and up to 10 days of access to supplementary vehicles were all available. Importantly, charging options at residential locations (on- or off-street) are key across all of these scenarios.
    The study’s findings highlight the importance of making overnight charging capabilities available to more people. While those who have their own garages or off-street parking can often already easily charge their cars at home, many people do not have that option and use public parking. “It’s really important to provide access — reliable, predictable access — to charging for people, wherever they park for longer periods of time near home, often overnight,” Trancik says.
    That includes locations such as hotels as well as residential neighborhoods, she says. “I think it’s so critical to emphasize these high-impact approaches, such as figuring out ways to do that on public streets, rather than haphazardly putting a charger at the grocery store or at the mall or any other public location.” Not that those aren’t also useful, she says, but public planning should be aiming to expand accessibility to a greater part of the population. Being strategic about infrastructure expansion will continue to be important even as fast chargers fall in cost and new designs begin to allow for more rapid charging, she adds.
    Being strategic in placing infrastructure where it can be most convenient and effective could make a huge difference in the wider adoption of clean vehicles, Trancik says. Courtesy of Trancik Lab
    The study should help to provide some guidance to policymakers at all levels who are looking for ways to facilitate the reduction of greenhouse gas emissions, since the transportation sector accounts for about a third of those emissions overall. “If you have limited funds, which you typically always do, then it’s just really important to prioritize,” Trancik says, noting that this study could indicate the areas that could provide the greatest return for those investments. The high-impact charging solutions they identify can be mixed and matched across different cities, towns, and regions, the researchers note in their paper.
    The researchers’ approach to analyzing high-resolution, real-world driving patterns is “valuable, enabling several opportunities for further research,” says Lynette Cheah, an associate professor of engineering systems and design at Singapore University of Technology and Design, who was not associated with this work. “Real-world driving data can not only guide infrastructure and policy planning, but also optimal EV charging management and vehicle purchasing and usage decisions. … This can provide greater confidence to drivers about the feasibility and operational implications of switching to EVs.”
    The study was supported by the European Regional Development Fund, the Lisbon Portugal Regional Development Program, the Portuguese Foundation for Science and Technology, and the U.S. Department of Energy.

  • Understanding antibodies to avoid pandemics

    Last month, the world welcomed the rollout of vaccines that may finally curb the Covid-19 pandemic. Pamela Björkman, the David Baltimore Professor of Biology and Bioengineering at Caltech, wants to understand how antibodies like the ones elicited by these vaccines target the SARS-CoV-2 virus that causes Covid-19. She hopes this understanding will guide treatment strategies and help design vaccines against future pandemics. She shared her lab’s work during the MIT Department of Biology’s Independent Activities Period (IAP) seminar series, Immunity from Principles to Practice, on Jan. 12.
    “Pamela is an amazing scientist, a strong advocate for women in science, and has a stellar history of studying the structural biology of virus-antibody interactions,” says Whitehead Institute for Biomedical Research Member Pulin Li, the Eugene Bell Career Development Professor of Tissue Engineering and one of the organizers of this year’s lecture series.
    Immunology research often progresses from the lab bench to the clinic quickly, as was the case with Covid-19 vaccines, says Latham Family Career Development Professor of Biology and Whitehead Institute Member Sebastian Lourido, who organized the lecture series with Li. He and Li chose to focus this year’s seminar series on immunity because this field highlights the tie between basic molecular biology, which is a cornerstone of the Department of Biology, and practical applications.
    “Pamela’s work is an excellent example of how fundamental discoveries can be intimately tied to real-world applications,” Lourido says.
    Björkman’s lab has a long history of studying antibodies, which are protective proteins that the body generates in response to invading pathogens. Björkman focuses on neutralizing antibodies, which bind and jam up the molecular machines that let viruses reproduce in human cells. Last fall, the U.S. Food and Drug Administration (FDA) authorized a combination of two neutralizing antibodies, produced by the pharmaceutical company Regeneron, for emergency use in people with mild to moderate Covid-19. This remains one of the few treatments available for the disease.
    Together with Michel Nussenzweig’s lab at The Rockefeller University, Björkman’s lab identified four categories of neutralizing antibodies that prevent a protein that decorates SARS-CoV-2’s surface, called the spike protein, from binding to a human protein called ACE2. Spike acts like the virus’s key, with ACE2 being the lock it has to open to enter human cells. Some of the antibodies that Björkman’s lab characterized bind to the tip of spike so that it can’t fit into ACE2, like sticking a wad of chewing gum on top of the virus’s key. Others block spike proteins from interacting with ACE2 by preventing them from altering their orientations. Understanding the variety of ways that neutralizing antibodies work will let scientists figure out how to combine them into maximally effective treatments.
    Björkman isn’t satisfied with just designing treatments for this pandemic, however. “Coronavirus experts say this is going to keep happening,” she says. “We need to be prepared next time.”
    To this end, Björkman’s lab has put pieces of spike-like proteins from multiple animal coronaviruses onto nanoparticles and injected them into mice. This made the mice generate antibodies against a mix of pathogens that are poised to jump into humans, suggesting that scientists could use this approach to create vaccines before pandemics occur. Importantly, the nanoparticles still work after they’re freeze-dried, meaning that companies could stockpile them, and that they could be shipped at room temperature.
    Björkman’s talk was the second in the Immunity from Principles to Practice series, which was kicked off by Gabriel Victora from The Rockefeller University. Victora discussed how antibodies are produced in structures called germinal centers that are found in lymph nodes and the spleen.
    Next in the series is Chris Garcia from Stanford University, who will speak on Jan. 19 about his lab’s work on engineering immune signaling molecules to maximize their potential to elicit therapeutic responses. To round out the series, Yasmine Belkaid from the National Institute of Allergy and Infectious Diseases will speak on Jan. 26 about interactions between the gut microbiome and the pathogens we ingest. These talks complement a number of career development seminars that were organized by graduate students Fiona Aguilar, Alex Chan, Chris Giuliano, Alice Herneisen, Jimmy Ly, and Aditya Nair.

  • Envisioning an equitable, inclusive low-carbon future

    “Some say working on climate is a marathon, not a sprint, but it’s more an ultramarathon — an endurance sport if ever there was one,” said Kate Gordon, the senior climate policy advisor to Governor Gavin Newsom of California. “And look, women excel at those: We know how to dig in and get stuff done.”
    Gordon’s remarks in her first-day keynote address set the tone for the ninth annual U.S. Clean Energy Education & Empowerment (C3E) Symposium and Awards, held virtually in December. The event brings together women researchers, government leaders, and entrepreneurs to share their insights and goals around moving the world to a low-carbon and, eventually, carbon-free future. It also honors nine women for their outstanding leadership and accomplishments in clean energy.
    The MIT Energy Initiative (MITEI) hosted the event, in collaboration with the U.S. Department of Energy (DOE), the Stanford Precourt Institute for Energy, and the Texas A&M Energy Institute.
    The symposium is part of the broader U.S. C3E initiative to increase women’s participation in the clean energy transition. “While women make up about half of the total U.S. labor force, they comprise less than a third of those employed in the renewable energy sector,” said Maria T. Vargas, the senior program advisor for the Office of Energy Efficiency and Renewable Energy at the DOE, who leads the agency’s involvement in C3E. “This gender gap is continuing to grow as wage inequities and inadequate advancement opportunities prompt many to seek work elsewhere, depriving the energy sector of their talent, experience, and skills.”
    “It’s no longer about fairness and equality, but about increasing our chances of success in making strategic decisions around climate mitigation and adaptation,” added Martha Broad, MITEI executive director, in her opening remarks. “In order to meet a net-zero carbon emissions goal within the next few decades and fundamentally change the way we produce and consume energy, it’s obvious we need to have women at the table,” she said.
    An opportunity for deeper structural transformation
    The two-day event took place at the tail end of a year like no other: unprecedented environmental destruction from climate change, the Covid-19 pandemic, an economic downturn. The symposium acknowledged these complex and interlinked issues, inviting its participants to consider the topic, “Accelerating the clean energy transition in a changing world.”
    But for many women speaking in panel discussions and presentations, accelerating this transition was not so much a matter of reacting to a changing world as it was changing that world. Without gender equality and racial and economic justice, they made clear, the most ambitious climate mitigation and adaptation plans would sputter.
    The eight U.S. C3E mid-career award winners, professionals with outstanding accomplishments and leadership abilities, spoke passionately about expanding the reach of clean energy technologies to transform lives.
    Elizabeth Kaiga, recipient of the 2020 business leadership award, is an account director for Renewable Energy at DNV GL. “Energy transition and a more inclusive future are intertwined,” she said. “We must provide equitable access for underrepresented communities.” Kaiga plans to use her award to help train women living in off-grid communities in Africa to deliver electricity directly to their homes.
    This year’s international leadership award winner, María Hilda Rivera, grew up in Puerto Rico without reliable electricity. Today, as energy advisor for Power Africa, she is providing electricity to poor communities in sub-Saharan Africa. “To meet the needs of these end users, we are building and growing energy markets, with minigrids and batteries delivered to homes,” she said.
    Advocacy awardee Cristina Garcia, assistant director of New York City’s Building Electrification Initiative, strives to “increase inclusivity with those disproportionately excluded from the conversation about climate change,” she said. She provides internship and job opportunities in the sustainability field to Latino students. “We need all hands on deck, with gender, ethnic, and racial diversity, to generate better outcomes.”
    In a panel devoted to equitable access to clean energy, speakers hammered home the importance of ensuring underserved communities’ ownership of policy, design, resource allocation, and economic benefits. “We can’t have big wins on climate without having front-line engagement,” said Shalanda H. Baker, a professor of law, public policy, and urban affairs at Northeastern University. “The energy transition is an opportunity for deeper structural transformation, by giving communities a way to change their circumstances.”
    Removing barriers for future generations
    The work of these women in diversity and clean energy builds on the efforts of an earlier generation, well represented at the symposium. C3E lifetime achievement award winner Bobi Garrett served as chief operating officer and deputy laboratory chief of the National Renewable Energy Laboratory (NREL), and began some of the government’s earliest research into energy efficiency, renewables, biomass, solar, and wind. “When I arrived, energy was not a consistent part of the national dialogue, only cropping up during a power outage or a spike in oil prices,” she recalled. “NREL went from a $200 million budget to a half-billion today.”
    Garrett also helped launch and build the NREL’s Women’s Network. “I saw my most important role as championing staff and removing barriers,” she said.
    Second-day keynote speaker Kristina M. Johnson, president of Ohio State University, described her accomplishments as a DOE undersecretary in the first Obama administration. She disbursed billions of dollars in stimulus money to energy and environment projects to help the nation’s recovery from the Great Recession, and she spearheaded design of the administration’s plan for reducing greenhouse gas emissions by 85 percent by 2050, relative to 2005. But among her proudest ventures, she said, is helping organize C3E.
    “At the Copenhagen Climate meeting in 2009, I noticed that lots of energy and environment leaders from around the world were women, and I thought it made sense for us to get together,” she recalled. As a mentor and a boss, she has “always looked for opportunities to involve women and underrepresented minorities, both because it’s the right thing to do, and because it’s a necessity for the workforce.”
    Johnson counseled C3E participants to find their passions: “Be able to state your ‘why,’” she said. Her own passion is decarbonizing the electric sector. She figures the cost of doing so amounts to a trillion dollars over the course of 25 years — 0.23 percent of GDP. “The last time we invested those kinds of resources was for the interstate highway system, between 1955 and 1980, when we spent 0.46 percent of our GDP,” she said. “Are we willing to do it again?  We need to decide now.”
    A sense of historic moment
    Many panelists discussed the increasing urgency of addressing global warming. “Climate policy has been uneven at best, and we’ve lost valuable time, which makes it extremely important to use resources wisely,” said Jessika Trancik, an associate professor in MIT’s Institute for Data, Systems, and Society.
    Trancik’s career is dedicated to providing government with scientifically validated instruments, such as market stimulation or research funding, for achieving specific, measurable goals. Her computational models enable precise measurements of benefits and costs to inform better policies. “Putting quantitative targets out there will enable people to accelerate work in electric vehicles and grid-scale energy storage, among other technologies,” she said.
    C3E participants shared a sense of historic moment. “It’s really exciting seeing the decreasing cost of technology like offshore wind, solar, and batteries, and watching renewables become independent of government interventions financially,” said Johanna Doyle of Reactive Technologies Limited.
    Some of these new technologies were featured in the symposium’s poster competition, where winners showcased low-cost, high-efficiency solar cells, weatherproofing of city housing for energy savings, and radiation-tolerant materials for advanced nuclear reactors.
    “We’re at a potential inflection point around energy choices and deployment,” said Sue Reid, principal advisor in Mission 2020, a group moving the global finance sector toward the Paris Agreement’s goals. “There’s momentum around zero emissions commitments, with financial behemoths aiming for net zero by or before 2050.” She sees the next decade offering an historic opportunity as energy systems and resource distribution change rapidly “to get to enduring, resilient, viable systems that work for humanity.”
    Expanding the ranks
    Meeting these clean energy goals will require rapidly expanding the ranks of qualified energy professionals. “We need to keep breaking down systemic barriers to women’s advancement in these sectors, and your participation and leadership is absolutely critical for our shared success in this challenge,” Robert C. Armstrong, MITEI’s director, told symposium participants. The virtual format, a necessity during the pandemic, may actually prove a productive, ongoing tool for catalyzing the connections and mentorship that flow from such gatherings, he suggested.
    One sign of the power of online communications: More than 1,100 people attended each day of this virtual symposium, almost five times the number of people who normally attend in person. One newcomer, Neil Hoffman, a retired architect, wrote the organizers: “I appreciate being able to ‘sit in’ on these events and learn about the great work women are doing in the Climate Crisis. I am inspired and reassured about my grandchildren’s future listening to you all.”

  • Making smart thermostats more efficient

    Buildings account for about 40 percent of U.S. energy consumption, and are responsible for one-third of global carbon dioxide emissions. Making buildings more energy-efficient is not only a cost-saving measure, but a crucial climate change mitigation strategy. Hence the rise of “smart” buildings, which are increasingly becoming the norm around the world.
    Smart buildings automate systems like heating, ventilation, and air conditioning (HVAC); lighting; electricity; and security. Automation requires sensory data, such as indoor and outdoor temperature and humidity, carbon dioxide concentration, and occupancy status. Smart buildings leverage data in a combination of technologies that can make them more energy-efficient.
    Since HVAC systems account for nearly half of a building’s energy use, smart buildings use smart thermostats, which automate HVAC controls and can learn the temperature preferences of a building’s occupants.
    In a paper published in the journal Applied Energy, researchers from the MIT Laboratory for Information and Decision Systems (LIDS), in collaboration with Skoltech scientists, have designed a new smart thermostat that uses data-efficient algorithms to learn optimal temperature thresholds within a week.
    “Despite recent advances in internet-of-things technology and data analytics, implementation of smart buildings is impeded by the time-consuming process of data acquisition in buildings,” says co-author Munther Dahleh, professor of electrical engineering and computer science and director of the Institute for Data, Systems, and Society (IDSS). Smart thermostat algorithms use building data to learn how to operate optimally, but the data can take months to collect.
    To speed up the learning process, the researchers used a method called manifold learning, where complex and “high-dimensional” functions are represented by simpler and lower-dimensional functions called “manifolds.” By leveraging manifold learning and knowledge of building thermodynamics, the researchers replaced a generic control method, which can have many parameters, with a set of “threshold” policies that each have fewer, more interpretable parameters. Algorithms developed to learn optimal manifolds require fewer data, so they are more data-efficient.
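    As a rough sketch of what such a low-dimensional threshold policy can look like (the two set points and the hysteresis band below are invented, not the paper's parameterization), a learner only has to tune a couple of numbers instead of a full high-dimensional control law:
```python
def threshold_policy(indoor_temp_c, heat_on_below=20.0, heat_off_above=22.0, heating=False):
    """Two-parameter hysteresis (threshold) policy: switch heating on below one
    threshold and off above another; inside the band, keep the current state."""
    if indoor_temp_c < heat_on_below:
        return True          # switch (or keep) heating on
    if indoor_temp_c > heat_off_above:
        return False         # switch (or keep) heating off
    return heating           # inside the band: no change

# Illustrative trajectory of indoor temperatures
state = False
for t in [21.5, 19.8, 20.4, 22.3, 21.0]:
    state = threshold_policy(t, heating=state)
    print(f"{t:.1f} °C -> heating {'on' if state else 'off'}")
```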
    The algorithms developed for the thermostat employ a methodology called reinforcement learning (RL), a data-driven sequential decision-making and control approach that has gained much attention in recent years for mastering games like backgammon and Go. 
    “We have efficient simulation engines for computer games that can generate abundant data for the RL algorithms to learn a good playing strategy,” says Ashkan Haji Hosseinloo, a postdoc at LIDS and the lead author of the paper. “However, we do not have the luxury of big data for microclimate control in buildings.”
    With a background in mechanical engineering and training in methods like RL, Hosseinloo can apply insights from statistics and state-of-the-art computing to real-world physical systems. “My main motivation is to slow down, and even prevent, an energy and environmental crisis by improving the efficiency of these systems,” he says.
    The smart thermostat’s new RL algorithms are “event-triggered,” meaning they make decisions only when certain events occur, rather than on a predetermined schedule. These “events” are defined by certain conditions reaching a threshold — such as the temperature in a room dropping out of optimal range. “This enables less-frequent learning updates and makes our algorithms computationally less expensive,” Hosseinloo says. 
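    A minimal sketch of the event-triggered idea, assuming an invented comfort band: readings inside the band trigger no computation, and a policy update runs only when the temperature crosses a threshold.
```python
# Minimal sketch of event-triggered learning updates (illustrative only):
# instead of updating the policy at every sensor reading, record an "event"
# and update only when the temperature leaves an assumed comfort band.
comfort_low, comfort_high = 20.0, 24.0

def maybe_update(temp_c, events):
    if temp_c < comfort_low or temp_c > comfort_high:
        events.append(temp_c)          # this is where a policy update would run
        return True
    return False                       # in-band readings trigger no computation

events = []
readings = [21.2, 23.8, 24.6, 22.0, 19.5, 21.1]
updates = sum(maybe_update(t, events) for t in readings)
print(f"{updates} learning updates for {len(readings)} readings")
```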
    Computational power is a potential constraint for learning algorithms, and computational resources depend on whether algorithms run in the cloud or on a device itself — such as a smart thermostat. “We need learning algorithms that are both computationally efficient and data-efficient,” says Hosseinloo.
    Energy-efficient buildings offer additional advantages beyond reducing emissions and cutting costs. A building’s “microclimate” and air quality can directly affect the productivity and decision-making performance of building occupants. Considering the many large-scale economic, environmental, and societal impacts, microclimate control has become an important issue for governments, building managers, and even homeowners.
    “The new generation of smart buildings aims to learn from data how to operate autonomously and with minimum user interventions,” says co-author Henni Ouerdane, a professor on the Skoltech side of the collaboration. “A learning thermostat can potentially learn how to adjust its set-point temperatures in coordination with other HVAC devices, or based on its prediction of electricity tariffs in order to save energy and cost.”
    Hosseinloo also believes their methodology and algorithms apply to a diverse range of other physics-based control problems in areas including robotics, autonomous vehicles, and transportation, where data- and computational efficiency are of paramount importance.
    This research was a Skoltech-MIT Joint Project conducted as part of the MIT Skoltech Next Generation Program.

  • Making data-informed Covid-19 testing plans

    Warehouses, manufacturing floors, offices, schools — organizations of all kinds have had to change their operations to adapt to life in a pandemic. By now, there is confidence in some ways to help mitigate Covid-19 spread: contact tracing, distancing and quarantining, ventilation, mask wearing. And there is one scientific tool that can play a critical role: testing.
    Implementing testing within an organization raises a number of questions. Who should be tested? How often? How do other mitigation efforts impact testing need? How much will it all cost? A new web-based Covid-19 testing impact calculator at WhenToTest.org has been developed by MIT researchers with the Institute for Data, Systems, and Society (IDSS), in collaboration with the Consortia for Improving Medicine with Innovation and Technology (CIMIT), to help organizations around the world answer these questions.
    “The calculator allows you to do a cost analysis of different trade-offs and enables rational decisions for deploying testing within an organization,” explains Anette “Peko” Hosoi, a professor of mechanical engineering and IDSS affiliate who co-developed the tool. “How much does wearing masks save me? How much does contact tracing save me? How much testing do I have to do if I can’t social distance?”

    The web calculator accepts four basic inputs: size of the organization, the percentage of those people who reliably wear masks, whether or not contact tracing is being employed, and the maximum number of people who interact closely without masks. There are also fields for cost considerations, since many organizations will pay to have testing conducted and will also pay employees for the time they spend being tested. With this information, the model provides two key estimates: how many people to test daily, and the weekly cost of that testing.
    By adjusting these inputs, organizational decision-makers can explore trade-offs, such as increasing mask use and decreasing group size to meet tighter testing budgets. The tool also estimates how much testing should be conducted in situations where masking or distancing isn’t possible.
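    The sketch below is a deliberately crude stand-in, not the WhenToTest.org model: the baseline testing fraction, mitigation multipliers, and per-test cost are invented, but it shows how the same four inputs can be turned into a daily testing number and a weekly cost.
```python
def toy_testing_plan(n_people, frac_masked, contact_tracing, max_group,
                     test_cost=25.0, days_per_week=5):
    """Very rough illustrative stand-in for a testing calculator (not the
    WhenToTest.org model): mitigation measures shrink an assumed baseline
    testing fraction, which then sets daily tests and weekly cost."""
    base_fraction = 0.40                     # assumed: test 40% daily with no mitigation
    mitigation = 1 - 0.5 * frac_masked       # assumed effect sizes, for illustration only
    mitigation *= 0.7 if contact_tracing else 1.0
    mitigation *= min(1.0, max_group / 25)   # smaller unmasked groups -> less testing
    daily_tests = round(n_people * base_fraction * mitigation)
    weekly_cost = daily_tests * test_cost * days_per_week
    return daily_tests, weekly_cost

tests, cost = toy_testing_plan(200, frac_masked=0.9, contact_tracing=True, max_group=10)
print(f"~{tests} tests/day, ~${cost:,.0f}/week")
```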
    “The powerful thing about this tool is that we are not telling people what to do,” says Hosoi. “We are giving them information that empowers them to make rational, financial decisions that are tailored to their organization.”
    Isolat’s impact
    The model behind the website began as a project of the IDSS Covid-19 Collaboration called “Isolat,” a volunteer group of researchers at IDSS, MIT, and beyond who apply advanced statistical tools to Covid-19 data in order to help inform pandemic policy.
    “The IDSS community came together and formed Isolat to address difficult questions that emerged from this pandemic,” says Munther Dahleh, a professor of electrical engineering and computer science who directs IDSS. “It brought together expertise in data science, in systems, in control theory, in fluid mechanics and fluid dynamics, because this problem is very broad. And it paid off.”
    Isolat research produced a number of tangible impacts, informing not only MIT’s reopening and testing strategy, but also helping the state of Massachusetts establish guidelines for all its colleges and universities. The group built connections and provided insights to organizations in countries like India and Peru. It wasn’t long before Isolat research began to draw the attention of other groups working to solve pandemic challenges.
    “After we developed the model, a number of organizations reached out to us, including RADx,” says Hosoi. RADx is the Rapid Acceleration of Diagnostics, an NIH-funded initiative to accelerate Covid-19 testing technology. This led to the larger collaboration with CIMIT, a network of academic and medical institutions including Massachusetts General Hospital that partners with industry and government to accelerate the development of innovative health-care technologies. With additional support and funding from the National Institute of Biomedical Imaging and Bioengineering, the model — which once lived in a spreadsheet — was developed into a user-friendly website where organizations of all kinds can get actionable advice on implementing Covid-19 testing.
    “While test-technology development has been the main objective of RADx Tech, the program supports commercialization and deployment. The calculator is a major enabler for those activities,” says Paul Tessier, the tool’s co-developer and product development director at CIMIT. “We are excited to join forces with MIT’s IDSS to advance the calculator.”
    Testing = control
    The calculator can’t prevent all members of an organization from getting sick. But it can inform a testing plan that finds infected people and quarantines them more quickly, preventing further spread. Says Dahleh: “Testing is really the only mechanism for controlling a pandemic when you don’t have a vaccine. It’s testing, quarantining, and contact tracing that allow you to isolate infected people before they infect others.”
    Isolat’s recommendations for testing were first published as a series of “rules of thumb” for reopening the MIT campus. The first rule: Testing equals control. Covid testing data can do more than help researchers and medical practitioners understand the extent of Covid-19 spread, or predict peaks in contagion and hotspot locations. A well-designed strategy that combines testing and other mitigation efforts can prevent sickness and even death.
    The Covid-19 Testing Impact Calculator offers advanced options, allowing variables such as mask and contact tracing efficiency to be adjusted and modeling either typical (U.S. average) conditions or hotspot conditions when infections are spreading quickly. The site also provides a cost/benefit analysis of different testing methods and advice on how institutions should handle positive results.
    “We’ve built a tool that we think can really help businesses, schools, and all kinds of organizations to navigate some of the challenges of the coronavirus pandemic,” says Dahleh. “We’re excited to share it, to refine it as we get feedback and new data, and ultimately to see what impact it will have.”

  • Q&A: Holly Jackson on building a cosmic family tree

    Holly Jackson doesn’t think of herself as an astronomer, but her work has contributed to some of the most startling and original research in the field this century. A junior majoring in electrical engineering and computer science, Jackson has become a valued member of Professor Paula Jofré’s research team in the astronomy department at Diego Portales University in Santiago, Chile.
    As a participant in MISTI, MIT’s international internship program, Jackson traveled to Santiago in 2019, well before the Covid-19 pandemic shut down in-person international exchanges worldwide. Since then, she has been working remotely from her Cambridge, Massachusetts, apartment with the Chilean astronomy team and biologists in the United Kingdom to build “family trees” of stars in the Milky Way. Here, Jackson discusses her recent work.
    Q: Many people are probably familiar with the term evolution in biology, but you’re applying it to the cosmos. How can stellar populations “evolve”?
    A: I’m no astronomer, so I had to have this explained to me at the beginning of this project! Essentially, every single star that exists in the universe today (as long as it is not actively exploding) is a blueprint of the chemistry of the specific part of the galaxy where that star was born, at the time of its birth. So basically, you have the Big Bang, which created the lightest elements like hydrogen and helium, and this led to the formation of the first stars. When these first stars died, they donated their processed chemical material back into galaxies to be recycled into the next waves of star formation. Over the span of gigayears, you begin to see stars containing incremental increases in most of the elements as this process continues.
    Essentially, each star is like a fossil of the exact chemical makeup and range of elements available at the time of its birth, and so chemical evolution as a field of study has existed in astronomy for some time now. The process, up until now, has been focused on classifying stars according to their chemical patterns which, in principle, allows us to find stars that are chemically similar and trace them back to their birthplaces if we have their ages and kinematic data. And often astronomers find themselves caught up in uncertainties in the ages, chemical makeup, and kinematics of the stars, hindering the definition of evolutionary relationships. No one had ever made the link to concepts in biological evolution like DNA or parental genealogy before; making that link introduces a new class of methods to help overcome these challenges.
    Q: But that’s changing, thanks in part to the research you’re working on with Paula Jofré to show the “family trees” of stars.
    A: In classical astrophysics, there are two methods of tracing stellar origin — the first one, which I talked about, is comparing chemical abundances. If two stars have really similar chemistries, it’s likely they were born in the same gas cloud around the same time. But when they have different chemistries, it gets really difficult to trace their origins. That’s where the second method, stellar kinematics, comes in: tracking the trajectory of a star to figure out where it came from. But all the values which you use in these methods have massive uncertainties; you can back-propagate the values and realize that “Oh no, all the stars I’m comparing could have come from the same place.”
    So instead, we’ve been using stellar chemistry as DNA, and treating the information in the same way that evolutionary biologists did through phylogenetics: in other words, we’re sketching out evolutionary trees.
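    As a toy stand-in for those methods (the real work uses dedicated phylogenetic algorithms, and the star names and abundance values here are invented), one can treat each star's abundance vector as its "DNA" and build a tree by simple hierarchical clustering:
```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

# Invented chemical-abundance "DNA" for a handful of stars: each row is a
# star, each column an element-abundance value (made-up numbers).
stars = ["Sun", "StarA", "StarB", "StarC", "StarD"]
abundances = np.array([
    [0.00, 0.00, 0.00, 0.00],
    [0.02, -0.01, 0.01, 0.00],
    [0.25, 0.30, 0.22, 0.28],
    [0.27, 0.31, 0.20, 0.26],
    [-0.40, -0.35, -0.42, -0.38],
])

# Hierarchical clustering as a simplified stand-in for the phylogenetic
# tree-building described in the interview.
tree = linkage(abundances, method="average", metric="euclidean")
dendrogram(tree, labels=stars, no_plot=True)  # structure only; plot omitted
print(tree)  # each row merges two clusters at a given chemical "distance"
```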
    Q: One of the key characteristics of the objects you’re studying is that they are very, very, very far away. How do you get around the vast distances involved and learn the makeup of these stars?
    A: Right now, the amount of data on the chemical abundances of stars is in a huge boom, thanks to the European space mission Gaia, a cooperation that has motivated the creation of several ground-based spectroscopic surveys gathering data on thousands to millions of stars in a very short time. It’s analogous to the GenBank collection, an explosion of data availability in genetics research.
    So we have all this data, but it doesn’t necessarily reconcile this fact that DNA and elemental abundances are entirely different measurements. That’s where an interesting component of our project comes in and where we encounter a lot of skepticism: we are dealing with 30 different decimal abundance measurements with lots of room for error, but DNA is just four letters. Evolutionary biologists told us something very cool about when the field of evolutionary biology was just starting and they were dealing with similar issues; they would measure traits in terms of continuous decimal measurements, and those measurements, because they were highly subjective, produced data with a lot of errors. For instance, we can imagine early evolutionary biologists trying to track the evolution of dog breeds by measuring their leg lengths; you can just imagine all the different ways that two people could measure a bunch of different dogs’ legs and arrive at some very messy data. But, surprisingly, that doesn’t mean that the biologists’ initial trees didn’t have meaning, or weren’t influential in terms of creating models that worked.
    We are at a very similar starting point to those early biologists. We now have access to high-quality data complementary to the Gaia mission, and the evo-bio methods we have been using to analyze that data have been specifically fitted to the astrophysics framework. We’ve homed in on which specific techniques have the most parallels within astronomy, and that work is what’s making these methods super-effective in tracing stellar evolution.
    Q: Since, as you said, so much of stellar data is subject to a lot of uncertainty, how can you verify your theory when it’s applied to any given star? Put another way: how do you know you’re right?
    A: There are several ways. First off, there is one star we know a lot about: our own sun! The sun is super-useful as a reference point, and our data about the sun is the highest-quality chemical abundance data we can get. So we use a dataset of “solar twins,” stars which have similar properties to the sun, which gives us an advantage and a known set of data.
    Additionally, we’ve taken verification methods from biology, such as a stability analysis, which can be performed on evolutionary trees to see how strong the various relationships are. We’ve taken a tree of 79 stars very close to the sun (including the sun itself), analyzed the relationships the tree produced, and then linked them to popular astrophysical theories. And it worked: Not only does the timeline match up, but the structure of the tree matches up as well. We found links between the structure of our tree and a rapid period of stellar evolution after the Gaia-Enceladus Sausage merger. We also related our tree to evidence that the Sagittarius galaxy collided with the Milky Way at a certain point.
    Remember: Our tree was produced using no methods from astronomy. It was produced using these methods from biology, and yet it is producing results consistent with current astronomical findings.
    Q: Tell me about some of the challenges of this research, especially given the Covid-19 pandemic.
    A: One good thing is that we were always very well-equipped to handle remote communication. All the evolutionary biologists involved in the project are in the UK; the astronomers are in Santiago, Chile; and I’m in the U.S. However, the pandemic has presented challenges, as has the political unrest in Santiago, which led to the temporary shutdown of the university where our lead astronomer, Paula Jofré, works. So everyone is working from home; two team members have young kids, and of course it’s been seriously crazy, but thankfully everyone is super sweet and understanding. All of us are doing this work as our secondary commitment, but that makes everyone more understanding of the challenges. This work would’ve been released a lot earlier without the pandemic, but we’ve been powering through nonetheless.
    Q: What skills have you learned as a result of working with Paula Jofré — how has this experience made you a better scientist?
    A: I knew nothing about astronomy before starting this project, so this was my first exposure. Paula is an incredibly clear and creative explainer, and presents these concepts in an intuitive way. The fearlessness with which she approaches her projects has inspired the way I want to work in the future. She has no fear of interdisciplinary research or how her work will be received. She has received a lot of clapback — astronomy can be a very traditional field, and she was also a female physicist presenting this brazen new work that takes theories of biology and applies them to astronomy. And guess what? That work has become super influential, and her 2017 paper got her named to the TIME 100 Next list. She’s just an awesome person, and on top of that such a skilled researcher that she can explain astrophysics to an engineer in just a few weeks; it’s just incredible. Keep an eye out for the upcoming release of her book, which tells the story of the history of the Milky Way intertwined with the inspiring stories of female scientists!
    Q: What questions does your research bring up that you’d like to tackle next?
    A: On one hand, this work is challenging people because it’s validating a new way to study galactic chemical evolution. There’s a lot of potential for combining techniques and also developing this technique further; we’re not presenting it as a substitute for current methods, because the current methods work. But this new method can be combined with the old methods. For example, people will run chemical simulations and try to reproduce the evolution of the Milky Way and get nowhere close, but if we combine chemical simulations with phylogenetic techniques, we could reveal more than either of these methods alone.
    In addition, we’re using some of the simplest methods in phylogenetics right now, but there’s some crazy stuff in evolutionary biology, including a probabilistic model where you input a lot of data about how you think an evolutionary system behaves. We could use the gold standard methods of astronomy and the gold standard methods of biology to create a really convincing map of the galaxy’s relationships, and I’m really excited about that.
    We’ve just posted the preprint, and my inbox is freaking out — the responses are coming not just from astronomers, but also from people in other fields who are excited about this interdisciplinary work. An astronomer reading this paper is going to have a very different opinion than a biologist reading this paper, but both of their opinions are going to help refine and improve this method. Whenever you can work with two fields you don’t see together too often, that is where the coolest research comes from.

  • Seeing the values behind the numbers

    In the early decades of the 20th century, city officials in the U.S. began collecting data like they never had before. In St. Louis, starting around 1915, planners fanned out across the city and obtained detailed information about the use and ownership of every property standing.
    From this, the city developed its first systematic planning and zoning policies. Some neighborhoods were designated for new industrial and manufacturing use, with nightclubs, liquor stores, and various less desirable businesses tossed in. On the surface, the goal was economic efficiency, based around distinct business districts.
    Below the surface — and not very far down, either — St. Louis’ planning had another effect. Officials had recorded the ethnicity of every property owner. The industrial and less-desirable zoning areas were, by design, situated in and around Black neighborhoods. Those residential properties soon declined in value, due to their new settings, and this decrease meant Black residents couldn’t afford to move elsewhere.
    St. Louis’ chief planner, Harland Bartholomew, became a national expert on the basis of this kind of work. But such data-driven policies have “reinforced structural racism” in cities, MIT scholar Sarah Williams points out in a new book about data and urban life.
    “I believe that often when people think of datasets, they think of them as being the truth, facts, raw information, something not to be questioned,” says Williams, an associate professor of technology and urban planning in MIT’s Department of Urban Studies and Planning. “But I really want everybody to question their data before they go out and use it.”
    Now in her book “Data Action,” published today by the MIT Press, Williams provides a guide for deploying data in city life, one that draws on historical examples, current developments, and her own research as case studies.
    “It’s a call to action to think about the way that data is used in society today,” says Williams.
    Build it, hack it, share it
    As a guide to action, Williams’ book is structured around three main chapters. One of these, “Build it!”, encourages planners, activists, and scholars to create their own data collection projects. For instance, OpenStreetMap, a widely used alternative to Google Maps, developed out of frustration that basic data were not freely available. This open-source mapping project has been developed from data contributed by people all over the world.
    For Williams’ part, she helped create the Beijing Air Tracks project, which used low-cost portable sensors to measure air quality at the 2008 Olympics. Developed along with the Associated Press, the project brought significant attention to China’s pollution and air-quality problems, at a low cost. Indeed, inexpensive mobile technology means people engaged with urban issues can find new ways to study questions.
    “It’s hugely different,” says Williams, a faculty affiliate of MIT’s Institute for Data, Systems, and Society. “Fifteen years ago, we didn’t all have smartphones in our pockets that can gather all kinds of data. This means now anyone can collect data, not just the people who have resources. Really the playing field of data collection has been changed by the mobile technologies that are available. … It’s just been transformative.”
    In another chapter, titled “Hack it!,” Williams suggests that researchers should be resourceful about collecting large-scale data from private institutions when no comparable public data source exists. This does not mean literally hacking into databases — rather, as Williams writes, “it’s about being creative in the way big data might be used to substitute for missing government data, such as essential population information.” Williams outlines a process that ensures the ethical use of data scraped off websites, and always lets the people running those information sources know about her efforts.
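    In that spirit, a minimal sketch of "creative but ethical" collection might look like the following; the base URL and contact address are placeholders, and the point is simply to respect robots.txt, identify yourself, and rate-limit requests.
```python
import time
import urllib.request
import urllib.robotparser

BASE = "https://example.org"  # placeholder; not a real data source
USER_AGENT = "research-data-collector (contact: you@university.edu)"  # placeholder contact

def polite_fetch(paths, delay_s=2.0):
    """Fetch pages slowly and transparently: honor robots.txt, send a clear
    user agent, and pause between requests."""
    robots = urllib.robotparser.RobotFileParser()
    robots.set_url(f"{BASE}/robots.txt")
    robots.read()

    pages = []
    for path in paths:
        url = f"{BASE}{path}"
        if not robots.can_fetch(USER_AGENT, url):
            continue  # skip anything the site asks crawlers not to touch
        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(req) as resp:
            pages.append(resp.read())
        time.sleep(delay_s)  # rate-limit out of courtesy
    return pages
```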
    In one study Williams helped run, the “Ghost Cities in China” project, she and her colleagues collected data off Chinese social media sites. The geographic sources of those posts, along with photographs and even drone imagery, helped indicate where people were residing — and, in turn, where the Chinese government had overdeveloped some of its massive building projects. This provided a kind of real-time picture of a housing boom and bust, which helped create a new dialogue among planners and policymakers about China’s growth.
    But collecting data and conducting rigorous studies are just two elements of using data effectively. In another chapter, “Share it!,” Williams contends that the effective visual presentation of information is an essential part of data-driven research.
    When asked, Williams will cite her participation in the “Million Dollar Blocks” project — along with researchers from Columbia University and the Justice Mapping Center — as a good example of data visualization from her own career. That project mapped the places where residents of a Brooklyn block had been incarcerated, while highlighting the costs of incarceration. The project helped provide impetus for the Criminal Justice Reinvestment Act of 2010, which funded job-training programs for former prisoners. And the maps wound up being exhibited at New York’s Museum of Modern Art.
    “To open up data for everyone, you have to communicate it visually,” Williams says. “Seeing it on a map opens it up to a much broader public, including policy experts or legislators or a company or a business analyst. Remember that communication is as big a part of data analytics as the statistics and insights themselves.”
    “Data is not neutral”
    To be sure, Williams acknowledges, she herself enters into data-mapping projects with her own ethical views and advocacy goals. What matters, in her view, is being transparent about this.
    “I think it’s always important to ask yourself: What is your objective? One thing I say to my students about the Million-Dollar Blocks map is, ‘Yeah, my map is biased. Is that okay?’ I think it’s okay, because the story I’m telling and the position I’m taking are ones that I hope will benefit society and be used for a public good. Everybody’s using their data to act in some kind of way. … No matter how much you try, data is not neutral.”
    “Data Action” has received praise from other scholars. Shannon Mattern, a professor of anthropology at The New School, calls it a “perfect fusion of historical framing, critical reflection, and how-to instruction” that “powerfully demonstrates how collaborative, methodologically pluralistic, reflective, and publicly responsive modes of data design can incite civic change.”
    Williams, for her part, hopes to reach a broad audience, from scholars and planners to activists and anyone who likes cities, data, or both. And she emphasizes that using data to make cities better is not a passive activity for observers — it’s a process that helps communities and advocacy groups form and then sustain themselves.
    “Empowering people to do data collection is part of what I hope this book does,” Williams says. “If something’s going on in your community, collect that data. It also creates an organizing framework and a community. People who were previously working independently now come together on a particular project. Moving toward this common goal often helps them build the energy and the capacity to work for the changes they need.”

  • Center to advance predictive simulation research established at MIT Schwarzman College of Computing

    Understanding the degradation of materials in extreme environments is a scientific problem with major technological applications, ranging from spaceflight to industrial and nuclear safety. Yet it presents an intrinsic challenge: Researchers cannot easily reproduce these environments in the laboratory or observe essential degradation processes in real time. Computational modeling and simulation have consequently become indispensable tools in helping to predict the behavior of complex materials across a range of strenuous conditions.
    At MIT, a new research effort aims to advance the state-of-the-art in predictive simulation as well as shape new interdisciplinary graduate education programs at the intersection of computational science and computer science.
    Strengthening engagement with the sciences
    The Center for Exascale Simulation of Materials in Extreme Environments (CESMIX) — based at the Center for Computational Science and Engineering (CCSE) within the MIT Stephen A. Schwarzman College of Computing — will bring together researchers in numerical algorithms and scientific computing, quantum chemistry, materials science, and computer science to connect quantum and molecular simulations of materials with advanced programming languages, compiler technologies, and software performance engineering tools, underpinned by rigorous approaches to statistical inference and uncertainty quantification.
    “One of the goals of CESMIX is to build a substantive link between computer science and computational science and engineering, something that historically has been hard to do, but is sorely needed,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing. “The center will also provide opportunities for faculty, researchers, and students across MIT to interact intellectually and create a new synthesis of different disciplines, which is central to the mission of the college.”
    Leading the project as principal investigator is Youssef Marzouk, professor of aeronautics and astronautics and co-director of CCSE, which was renamed from the Center for Computational Engineering in January to reflect its strengthening engagement with the sciences at MIT. Marzouk, who is also a member of the Statistics and Data Science Center, notes that “CESMIX is trying to do two things simultaneously. On the one hand, we want to solve an incredibly challenging multiscale simulation problem, harnessing quantum mechanical models of complex materials to achieve unprecedented accuracy at the engineering scale. On the other hand, we want to create tools that make development and holistic performance engineering of the associated software stack as easy as possible, to achieve top performance on the coming generation of exascale computational hardware.”
    The project involves participation from an interdisciplinary cohort of eight faculty members, serving as co-PIs, and research scientists spanning multiple labs and departments at MIT. The full list of participants includes:
    Youssef Marzouk, PI, professor of aeronautics and astronautics and co-director of CCSE;
    Saman Amarasinghe, co-PI, professor of computer science and engineering;
    Alan Edelman, co-PI, professor of applied mathematics;
    Nicolas Hadjiconstantinou, co-PI, professor of mechanical engineering and co-director of CCSE;
    Asegun Henry, co-PI, associate professor of mechanical engineering;
    Heather Kulik, co-PI, associate professor of chemical engineering;
    Charles Leiserson, co-PI, the Edwin Sibley Webster Professor of Electrical Engineering;
    Jaime Peraire, co-PI, the H.N. Slater Professor of Aeronautics and Astronautics;
    Cuong Nguyen, principal research scientist of aeronautics and astronautics;
    Tao B. Schardl, research scientist in the Computer Science and Artificial Intelligence Laboratory; and
    Mehdi Pishahang, research scientist of mechanical engineering.
    MIT was among a total of nine universities selected as part of the Predictive Science Academic Alliance Program (PSAAP) III to form a new center to support science-based modeling and simulation and exascale computing technologies. This is the third time that PSAAP centers have been awarded by the U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA) since the program launched in 2008 and is the first time that the Institute has ever been selected. MIT is one of just two institutions nationwide chosen to establish a Single-Discipline Center in this round and will receive up to $9.5 million in funding through a cooperative agreement over five years.
    Advancing predictive simulation
    CESMIX will focus on exascale simulation of materials in hypersonic flow environments. It will also drive the development of new predictive simulation paradigms and computer science tools for the exascale. Researchers will specifically aim to predict the degradation of complex (disordered and multi-component) materials under extreme loading inaccessible to direct experimental observation — an application representing a technology domain of intense current interest, and one that exemplifies an important class of scientific problems involving material interfaces in extreme environments.
    “A big challenge here is in being able to predict what reactions will occur and what new molecules will form under these conditions. While quantum mechanical modeling will enable us to predict these events, we also need to be able to address the times and length scales of these processes,” says Kulik, who is also a faculty member of CCSE. “Our efforts will be focused on developing the needed software and machine learning tools that tell us when more affordable physical models can address the length scale challenge and when we need quantum mechanics to address the accuracy challenge.”
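    One way to picture that kind of dispatch logic, purely as a conceptual sketch (the function names, threshold, and toy models below are placeholders, not CESMIX software): use a cheap surrogate wherever its estimated uncertainty is low, and fall back to a quantum-mechanical calculation where it is not.
```python
# Conceptual sketch only: route each atomic configuration to a cheap surrogate
# model unless its estimated uncertainty exceeds a threshold, in which case
# fall back to an expensive quantum-mechanical calculation.
UNCERTAINTY_THRESHOLD = 0.05   # placeholder tolerance, problem-dependent

def predict_energy(config, surrogate, quantum_solver):
    """surrogate(config) -> (energy, uncertainty); quantum_solver(config) -> energy.
    Both callables are stand-ins for real interatomic-potential and quantum codes."""
    energy, sigma = surrogate(config)
    if sigma <= UNCERTAINTY_THRESHOLD:
        return energy, "surrogate"
    return quantum_solver(config), "quantum"

# Toy stand-ins so the sketch runs end to end
surrogate = lambda c: (sum(c), 0.01 if len(c) < 4 else 0.2)
quantum_solver = lambda c: sum(c) + 0.001
print(predict_energy([1.0, 2.0], surrogate, quantum_solver))              # surrogate path
print(predict_energy([1.0, 2.0, 3.0, 4.0], surrogate, quantum_solver))    # quantum path
```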
    CESMIX researchers plan on disseminating their results via multiple open-source software projects, engaging their developer and user communities. The project will also support the work of postdocs, graduate students, and research scientists at MIT with the overarching goal of creating new paradigms of practice for the next generation of computational scientists.