More stories

  • Study finds health risks in switching ships from diesel to ammonia fuel

    As container ships the size of city blocks cross the oceans to deliver cargo, their huge diesel engines emit large quantities of air pollutants that drive climate change and have human health impacts. It has been estimated that maritime shipping accounts for almost 3 percent of global carbon dioxide emissions and the industry’s negative impacts on air quality cause about 100,000 premature deaths each year.

    Decarbonizing shipping to reduce these detrimental effects is a goal of the International Maritime Organization, a U.N. agency that regulates maritime transport. One potential solution is switching the global fleet from fossil fuels to sustainable fuels such as ammonia, which could be nearly carbon-free when considering its production and use.

    But in a new study, an interdisciplinary team of researchers from MIT and elsewhere cautions that burning ammonia for maritime fuel could worsen air quality further and lead to devastating public health impacts, unless it is adopted alongside strengthened emissions regulations.

    Ammonia combustion generates nitrous oxide (N2O), a greenhouse gas that is about 300 times more potent than carbon dioxide. It also emits nitrogen in the form of nitrogen oxides (NO and NO2, referred to as NOx), and unburnt ammonia may slip out, which eventually forms fine particulate matter in the atmosphere. These tiny particles can be inhaled deep into the lungs, causing health problems like heart attacks, strokes, and asthma.

    The new study indicates that, under current legislation, switching the global fleet to ammonia fuel could cause up to about 600,000 additional premature deaths each year. However, with stronger regulations and cleaner engine technology, the switch could lead to about 66,000 fewer premature deaths than currently caused by maritime shipping emissions, with far less impact on global warming.

    “Not all climate solutions are created equal. There is almost always some price to pay. We have to take a more holistic approach and consider all the costs and benefits of different climate solutions, rather than just their potential to decarbonize,” says Anthony Wong, a postdoc in the MIT Center for Global Change Science and lead author of the study.

    His co-authors include Noelle Selin, an MIT professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences (EAPS); Sebastian Eastham, a former principal research scientist who is now a senior lecturer at Imperial College London; Christine Mounaïm-Rouselle, a professor at the University of Orléans in France; Yiqi Zhang, a researcher at the Hong Kong University of Science and Technology; and Florian Allroggen, a research scientist in the MIT Department of Aeronautics and Astronautics. The research appears this week in Environmental Research Letters.

    Greener, cleaner ammonia

    Traditionally, ammonia is made by stripping hydrogen from natural gas and then combining it with nitrogen at extremely high temperatures. This process is often associated with a large carbon footprint. The maritime shipping industry is betting on the development of “green ammonia,” which is produced by using renewable energy to make hydrogen via electrolysis and to generate heat.

    “In theory, if you are burning green ammonia in a ship engine, the carbon emissions are almost zero,” Wong says.

    But even the greenest ammonia generates nitrous oxide (N2O) and nitrogen oxides (NOx) when combusted, and some of the ammonia may slip out, unburnt.
    This nitrous oxide would escape into the atmosphere, where the greenhouse gas would remain for more than 100 years. At the same time, the nitrogen emitted as NOx and ammonia would fall to Earth, damaging fragile ecosystems. As these emissions are digested by bacteria, additional N2O is produced.

    NOx and ammonia also mix with gases in the air to form fine particulate matter. A primary contributor to air pollution, fine particulate matter kills an estimated 4 million people each year.

    “Saying that ammonia is a ‘clean’ fuel is a bit of an overstretch. Just because it is carbon-free doesn’t necessarily mean it is clean and good for public health,” Wong says.

    A multifaceted model

    The researchers wanted to paint the whole picture, capturing the environmental and public health impacts of switching the global fleet to ammonia fuel. To do so, they designed scenarios to measure how pollutant impacts change under certain technology and policy assumptions.

    From a technological point of view, they considered two ship engines. The first burns pure ammonia, which generates higher levels of unburnt ammonia but emits fewer nitrogen oxides. The second engine technology involves mixing ammonia with hydrogen to improve combustion and optimize the performance of a catalytic converter, which controls both nitrogen oxides and unburnt ammonia pollution.

    They also considered three policy scenarios: current regulations, which only limit NOx emissions in some parts of the world; a scenario that adds ammonia emission limits over North America and Western Europe; and a scenario that adds global limits on ammonia and NOx emissions.

    The researchers used a ship track model to calculate how pollutant emissions change under each scenario and then fed the results into an air quality model. The air quality model calculates the impact of ship emissions on particulate matter and ozone pollution. Finally, they estimated the effects on global public health.

    One of the biggest challenges came from a lack of real-world data, since no ammonia-powered ships are yet sailing the seas. Instead, the researchers relied on experimental ammonia combustion data from collaborators to build their model.

    “We had to come up with some clever ways to make that data useful and informative to both the technology and regulatory situations,” he says.

    A range of outcomes

    In the end, they found that with no new regulations and ship engines that burn pure ammonia, switching the entire fleet would cause 681,000 additional premature deaths each year.

    “While a scenario with no new regulations is not very realistic, it serves as a good warning of how dangerous ammonia emissions could be. And unlike NOx, ammonia emissions from shipping are currently unregulated,” Wong says.

    However, even without new regulations, using cleaner engine technology would cut the number of premature deaths down to about 80,000, which is about 20,000 fewer than are currently attributed to maritime shipping emissions. With stronger global regulations and cleaner engine technology, the number of people killed by air pollution from shipping could be reduced by about 66,000.

    “The results of this study show the importance of developing policies alongside new technologies,” Selin says.
    “There is a potential for ammonia in shipping to be beneficial for both climate and air quality, but that requires that regulations be designed to address the entire range of potential impacts, including both climate and air quality.”

    Ammonia’s air quality impacts would not be felt uniformly across the globe, and addressing them fully would require coordinated strategies across very different contexts. Most premature deaths would occur in East Asia, since air quality regulations are less stringent in this region. Higher levels of existing air pollution cause the formation of more particulate matter from ammonia emissions. In addition, shipping volume over East Asia is far greater than elsewhere on Earth, compounding these negative effects.

    In the future, the researchers want to continue refining their analysis. They hope to use these findings as a starting point to urge the marine industry to share engine data they can use to better evaluate air quality and climate impacts. They also hope to inform policymakers about the importance and urgency of updating shipping emission regulations.

    This research was funded by the MIT Climate and Sustainability Consortium.
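    As a back-of-the-envelope recap, the reported outcomes can be tabulated against the roughly 100,000 premature deaths currently attributed to shipping emissions. The scenario labels and bookkeeping below are illustrative only, not the study's model; the figures come from the article text.

    ```python
    # Illustrative bookkeeping of the study's reported outcomes (not its model).
    baseline = 100_000  # ~premature deaths/year attributed to shipping today

    # Change in annual premature deaths vs. today, as reported in the article:
    scenario_deltas = {
        ("pure-ammonia engine", "current regulations"): +681_000,
        ("ammonia-hydrogen engine", "current regulations"): -20_000,
        ("ammonia-hydrogen engine", "global NOx + ammonia limits"): -66_000,
    }

    for (engine, policy), delta in scenario_deltas.items():
        print(f"{engine:24s} | {policy:30s} | ~{baseline + delta:,} deaths/yr")
    ```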

  • MIT researchers introduce generative AI for databases

    A new tool makes it easier for database users to perform complicated statistical analyses of tabular data without the need to know what is going on behind the scenes.

    GenSQL, a generative AI system for databases, could help users make predictions, detect anomalies, guess missing values, fix errors, or generate synthetic data with just a few keystrokes.

    For instance, if the system were used to analyze medical data from a patient who has always had high blood pressure, it could catch a blood pressure reading that is low for that particular patient but would otherwise be in the normal range.

    GenSQL automatically integrates a tabular dataset and a generative probabilistic AI model, which can account for uncertainty and adjust its decision-making based on new data.

    Moreover, GenSQL can be used to produce and analyze synthetic data that mimic the real data in a database. This could be especially useful in situations where sensitive data cannot be shared, such as patient health records, or when real data are sparse.

    This new tool is built on top of SQL, a programming language for database creation and manipulation that was introduced in the late 1970s and is used by millions of developers worldwide.

    “Historically, SQL taught the business world what a computer could do. They didn’t have to write custom programs, they just had to ask questions of a database in high-level language. We think that, when we move from just querying data to asking questions of models and data, we are going to need an analogous language that teaches people the coherent questions you can ask a computer that has a probabilistic model of the data,” says Vikash Mansinghka ’05, MEng ’09, PhD ’09, senior author of a paper introducing GenSQL and a principal research scientist and leader of the Probabilistic Computing Project in the MIT Department of Brain and Cognitive Sciences.

    When the researchers compared GenSQL to popular, AI-based approaches for data analysis, they found that it was not only faster but also produced more accurate results. Importantly, the probabilistic models used by GenSQL are explainable, so users can read and edit them.

    “Looking at the data and trying to find some meaningful patterns by just using some simple statistical rules might miss important interactions. You really want to capture the correlations and the dependencies of the variables, which can be quite complicated, in a model. With GenSQL, we want to enable a large set of users to query their data and their model without having to know all the details,” adds lead author Mathieu Huot, a research scientist in the Department of Brain and Cognitive Sciences and member of the Probabilistic Computing Project.

    They are joined on the paper by Matin Ghavami and Alexander Lew, MIT graduate students; Cameron Freer, a research scientist; Ulrich Schaechtel and Zane Shelby of Digital Garage; Martin Rinard, an MIT professor in the Department of Electrical Engineering and Computer Science and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Feras Saad ’15, MEng ’16, PhD ’22, an assistant professor at Carnegie Mellon University. The research was recently presented at the ACM Conference on Programming Language Design and Implementation.

    Combining models and databases

    SQL, which stands for structured query language, is a programming language for storing and manipulating information in a database.
    In SQL, people can ask questions about data using keywords, such as by summing, filtering, or grouping database records.

    However, querying a model can provide deeper insights, since models can capture what data imply for an individual. For instance, a female developer who wonders if she is underpaid is likely more interested in what salary data mean for her individually than in trends from database records.

    The researchers noticed that SQL didn’t provide an effective way to incorporate probabilistic AI models, but at the same time, approaches that use probabilistic models to make inferences didn’t support complex database queries.

    They built GenSQL to fill this gap, enabling someone to query both a dataset and a probabilistic model using a straightforward yet powerful formal programming language.

    A GenSQL user uploads their data and probabilistic model, which the system automatically integrates. Then, she can run queries on data that also get input from the probabilistic model running behind the scenes. This not only enables more complex queries but can also provide more accurate answers.

    For instance, a query in GenSQL might be something like, “How likely is it that a developer from Seattle knows the programming language Rust?” Just looking at a correlation between columns in a database might miss subtle dependencies. Incorporating a probabilistic model can capture more complex interactions.

    Plus, the probabilistic models GenSQL utilizes are auditable, so people can see which data the model uses for decision-making. In addition, these models provide measures of calibrated uncertainty along with each answer.

    For instance, with this calibrated uncertainty, if one queries the model for predicted outcomes of different cancer treatments for a patient from a minority group that is underrepresented in the dataset, GenSQL would tell the user that it is uncertain, and how uncertain it is, rather than overconfidently advocating for the wrong treatment.

    Faster and more accurate results

    To evaluate GenSQL, the researchers compared their system to popular baseline methods that use neural networks. GenSQL was between 1.7 and 6.8 times faster than these approaches, executing most queries in a few milliseconds while providing more accurate results.

    They also applied GenSQL in two case studies: one in which the system identified mislabeled clinical trial data and the other in which it generated accurate synthetic data that captured complex relationships in genomics.

    Next, the researchers want to apply GenSQL more broadly to conduct large-scale modeling of human populations. With GenSQL, they can generate synthetic data to draw inferences about things like health and salary while controlling what information is used in the analysis.

    They also want to make GenSQL easier to use and more powerful by adding new optimizations and automation to the system. In the long run, the researchers want to enable users to make natural language queries in GenSQL. Their goal is to eventually develop a ChatGPT-like AI expert one could talk to about any database, which grounds its answers using GenSQL queries.

    This research is funded, in part, by the Defense Advanced Research Projects Agency (DARPA), Google, and the Siegel Family Foundation.
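    To make the difference between querying rows and querying a model concrete, here is a self-contained toy in plain Python (our own sketch, not GenSQL's syntax or implementation): a raw SQL-style count gives a brittle answer from one matching row, while a smoothed joint model returns a more calibrated estimate.

    ```python
    # Toy contrast between querying data and querying a model (not GenSQL).
    from collections import Counter

    rows = [
        {"city": "Seattle", "knows_rust": True},
        {"city": "Boston",  "knows_rust": False},
        {"city": "Boston",  "knows_rust": True},
        {"city": "Boston",  "knows_rust": False},
        {"city": "Boston",  "knows_rust": False},
    ]

    # SQL-style answer: count matching rows (brittle when data are sparse).
    seattle = [r for r in rows if r["city"] == "Seattle"]
    print(sum(r["knows_rust"] for r in seattle) / len(seattle))  # 1.0, from one row

    # Model-style answer: fit a smoothed joint distribution, then condition.
    counts = Counter((r["city"], r["knows_rust"]) for r in rows)

    def p_rust_given(city, alpha=1.0):
        # Laplace smoothing stands in for the calibrated uncertainty a real
        # probabilistic model would provide.
        yes = counts[(city, True)] + alpha
        no = counts[(city, False)] + alpha
        return yes / (yes + no)

    print(p_rust_given("Seattle"))  # ~0.67: pulled toward 0.5 by scant evidence
    ```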

  • Scientists preserve DNA in an amber-like polymer

    In the movie “Jurassic Park,” scientists extracted DNA that had been preserved in amber for millions of years, and used it to create a population of long-extinct dinosaurs.

    Inspired partly by that film, MIT researchers have developed a glassy, amber-like polymer that can be used for long-term storage of DNA, whether entire human genomes or digital files such as photos.

    Most current methods for storing DNA require freezing temperatures, so they consume a great deal of energy and are not feasible in many parts of the world. In contrast, the new amber-like polymer can store DNA at room temperature while protecting the molecules from damage caused by heat or water.

    The researchers showed that they could use this polymer to store DNA sequences encoding the theme music from “Jurassic Park,” as well as an entire human genome. They also demonstrated that the DNA can be easily removed from the polymer without damaging it.

    “Freezing DNA is the number one way to preserve it, but it’s very expensive, and it’s not scalable,” says James Banal, a former MIT postdoc. “I think our new preservation method is going to be a technology that may drive the future of storing digital information on DNA.”

    Banal and Jeremiah Johnson, the A. Thomas Geurtin Professor of Chemistry at MIT, are the senior authors of the study, published yesterday in the Journal of the American Chemical Society. Former MIT postdoc Elizabeth Prince and MIT postdoc Ho Fung Cheng are the lead authors of the paper.

    Capturing DNA

    DNA, a very stable molecule, is well-suited for storing massive amounts of information, including digital data. Digital storage systems encode text, photos, and other kinds of information as a series of 0s and 1s. This same information can be encoded in DNA using the four nucleotides that make up the genetic code: A, T, G, and C. For example, G and C could be used to represent 0 while A and T represent 1.

    DNA offers a way to store this digital information at very high density: In theory, a coffee mug full of DNA could store all of the world’s data. DNA is also very stable and relatively easy to synthesize and sequence.

    In 2021, Banal and his postdoc advisor, Mark Bathe, an MIT professor of biological engineering, developed a way to store DNA in particles of silica, which could be labeled with tags that revealed the particles’ contents. That work led to a spinout called Cache DNA.

    One downside to that storage system is that it takes several days to embed DNA into the silica particles. Furthermore, removing the DNA from the particles requires hydrofluoric acid, which can be hazardous to workers handling the DNA.

    To come up with alternative storage materials, Banal began working with Johnson and members of his lab. Their idea was to use a type of polymer known as a degradable thermoset, which consists of polymers that form a solid when heated. The material also includes cleavable links that can be easily broken, allowing the polymer to be degraded in a controlled way.

    “With these deconstructable thermosets, depending on what cleavable bonds we put into them, we can choose how we want to degrade them,” Johnson says.

    For this project, the researchers decided to make their thermoset polymer from styrene and a cross-linker, which together form an amber-like thermoset called cross-linked polystyrene. This thermoset is also very hydrophobic, so it can prevent moisture from getting in and damaging the DNA. To make the thermoset degradable, the styrene monomers and cross-linkers are copolymerized with monomers called thionolactones.
    These links can be broken by treating them with a molecule called cysteamine.

    Because styrene is so hydrophobic, the researchers had to come up with a way to entice DNA — a hydrophilic, negatively charged molecule — into the styrene.

    To do that, they identified a combination of three monomers that they could turn into polymers that dissolve DNA by helping it interact with styrene. Each of the monomers has different features that cooperate to get the DNA out of water and into the styrene. There, the DNA forms spherical complexes, with charged DNA in the center and hydrophobic groups forming an outer layer that interacts with styrene. When heated, this solution becomes a solid glass-like block, embedded with DNA complexes.

    The researchers dubbed their method T-REX (Thermoset-REinforced Xeropreservation). The process of embedding DNA into the polymer network takes a few hours, but that could become shorter with further optimization, the researchers say.

    To release the DNA, the researchers first add cysteamine, which cleaves the bonds holding the polystyrene thermoset together, breaking it into smaller pieces. Then, a detergent called SDS can be added to remove the DNA from polystyrene without damaging it.

    Storing information

    Using these polymers, the researchers showed that they could encapsulate DNA of varying length, from tens of nucleotides up to an entire human genome (more than 50,000 base pairs). They were able to store DNA encoding the Emancipation Proclamation and the MIT logo, in addition to the theme music from “Jurassic Park.”

    After storing the DNA and then removing it, the researchers sequenced it and found that no errors had been introduced, which is a critical feature of any digital data storage system.

    The researchers also showed that the thermoset polymer can protect DNA from temperatures up to 75 degrees Celsius (167 degrees Fahrenheit). They are now working on ways to streamline the process of making the polymers and forming them into capsules for long-term storage.

    Cache DNA, a company started by Banal and Bathe, with Johnson as a member of the scientific advisory board, is now working on further developing DNA storage technology. The earliest application they envision is storing genomes for personalized medicine, and they also anticipate that these stored genomes could undergo further analysis as better technology is developed in the future.

    “The idea is, why don’t we preserve the master record of life forever?” Banal says. “Ten years or 20 years from now, when technology has advanced way more than we could ever imagine today, we could learn more and more things. We’re still in the very infancy of understanding the genome and how it relates to disease.”

    The research was funded by the National Science Foundation.
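    The bit-to-base mapping mentioned above (G or C for 0, A or T for 1) is easy to sketch in code. This toy encoder is ours and purely illustrative; practical DNA data storage adds error correction and avoids sequences that are hard to synthesize.

    ```python
    # Toy encoder/decoder for the mapping described above: G/C represent 0,
    # A/T represent 1. Real systems add error correction; this is a sketch.
    import random

    def bits_to_dna(bits: str) -> str:
        return "".join(random.choice("GC" if b == "0" else "AT") for b in bits)

    def dna_to_bits(seq: str) -> str:
        return "".join("0" if base in "GC" else "1" for base in seq)

    message = "0100100001101001"           # "Hi" as ASCII bits
    strand = bits_to_dna(message)
    assert dna_to_bits(strand) == message  # decoding recovers the bits exactly
    print(strand)
    ```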

  • Making climate models relevant for local decision-makers

    Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to appropriately respond. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city.

    Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method to leverage machine learning to utilize the benefits of current climate models, while reducing the computational costs needed to run them.

    “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha.

    Traditional wisdom

    In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful.

    “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.”

    Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area), and supplementing it with statistical data taken from historical observations. But this method is computationally taxing: It takes a lot of time and computing power to run, while also being expensive.

    A little bit of both

    In their new paper, Saha and Ravela have figured out a way to add the data another way. They’ve employed a technique in machine learning called adversarial learning. It uses two machines: One generates data to go into our photo. But the other machine judges the sample by comparing it to actual data. If it thinks the image is fake, then the first machine has to try again until it convinces the second machine. The end-goal of the process is to create super-resolution data.

    Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it currently struggles is its inability to handle large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in and supplementing it with statistics from the historical data was enough to generate the results they needed.

    “If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started with estimating extreme rainfall amounts by removing more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost.
    And it’s giving us similar speeds to statistics, but at much higher resolution.”

    Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. It only takes a few hours to train, and can produce results in minutes, an improvement over the months other models take to run.

    Quantifying risk quickly

    Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decisions about what crops should be grown or where populations should migrate to can be made considering a very broad range of conditions and uncertainties as soon as possible.

    “We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

    While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela is hoping to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

    “We’re very excited both by the methodology that we put together, as well as the potential applications that it could lead to,” he says.
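    For readers unfamiliar with adversarial learning, the skeleton of the two-machine loop described above looks roughly like this. It is a generic conditional-GAN pattern in PyTorch with placeholder data and network sizes, not the authors' actual model.

    ```python
    # Minimal adversarial-learning sketch for downscaling (coarse -> fine).
    # Generic GAN pattern with toy stand-in data; not the paper's model.
    import torch
    import torch.nn as nn

    COARSE, FINE = 8, 32  # grid sizes (placeholders)

    generator = nn.Sequential(            # fills in the missing "pixels"
        nn.Linear(COARSE * COARSE, 256), nn.ReLU(),
        nn.Linear(256, FINE * FINE),
    )
    judge = nn.Sequential(                # real vs. generated fine-scale field
        nn.Linear(FINE * FINE, 128), nn.ReLU(),
        nn.Linear(128, 1),
    )
    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
    d_opt = torch.optim.Adam(judge.parameters(), lr=1e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        fine = torch.randn(16, FINE * FINE)        # stand-in: observed fields
        coarse = torch.randn(16, COARSE * COARSE)  # stand-in: global-model output
        fake = generator(coarse)

        # Judge: learn to tell observed fine fields from generated ones.
        d_loss = (bce(judge(fine), torch.ones(16, 1))
                  + bce(judge(fake.detach()), torch.zeros(16, 1)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator: try again until it convinces the judge its sample is real.
        g_loss = bce(judge(fake), torch.ones(16, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    ```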

  • Janabel Xia: Algorithms, dance rhythms, and the drive to succeed

    Senior math major Janabel Xia is a study in constant motion.

    When she isn’t sorting algorithms and improving traffic control systems for driverless vehicles, she’s dancing as a member of at least four dance clubs. She’s joined several social justice organizations, worked on cryptography and web authentication technology, and created a polling app that allows users to vote anonymously.

    In her final semester, she’s putting the pedal to the metal, with a green light to lessen the carbon footprint of urban transportation by using sensors at traffic light intersections.

    First steps

    Growing up in Lexington, Massachusetts, Xia has been competing on math teams since elementary school. On her math team, which met early mornings before the start of school, she discovered a love of problem-solving that challenged her more than her classroom “plug-and-chug exercises.”

    At Lexington High School, she was math team captain, a two-time Math Olympiad attendee, and a silver medalist for Team USA at the European Girls’ Mathematical Olympiad.

    As a math major, she studies combinatorics and theoretical computer science, including theoretical and applied cryptography. In her sophomore year, she was a researcher in the Cryptography and Information Security Group at the MIT Computer Science and Artificial Intelligence Laboratory, where she conducted cryptanalysis research under Professor Vinod Vaikuntanathan.

    Part of her interest in cryptography stems from the beauty of the underlying mathematics itself — the field feels like clever engineering with mathematical tools. But another part stems from its political dimensions, including its potential to fundamentally change existing power structures and governance.

    Xia and students at the University of California at Berkeley and Stanford University created zkPoll, a private polling app written with the Circom programming language, which allows users to create polls for specific sets of people, while generating a zero-knowledge proof that keeps personal information hidden to decrease negative voting influences from public perception.

    Her participation in the PKG Center’s Active Community Engagement Freshman Pre-Orientation Program introduced her to local community organizations focusing on food security, housing for formerly incarcerated individuals, and access to health care. She is also part of Reading for Revolution, a student book club that discusses race, class, and working-class movements within MIT and the Greater Boston area.

    Xia’s educational journey led to her ongoing pursuit of combining mathematical and computational methods in areas adjacent to urban planning. “When I realized how much planning was concerned with social justice as it was concerned with design, I became more attracted to the field.”

    Going on autopilot

    She took classes with the Department of Urban Studies and Planning and is currently working on an Undergraduate Research Opportunities Program (UROP) project with Professor Cathy Wu in the Institute for Data, Systems, and Society.

    Recent work on eco-driving by Wu and doctoral student Vindula Jayawardana investigated semi-autonomous vehicles that communicate with sensors localized at traffic intersections, which in theory could reduce carbon emissions by up to 21 percent.

    Xia aims to optimize the implementation scheme for these sensors at traffic intersections, considering a graded scheme where perhaps only 20 percent of all sensors are initially installed, and more sensors get added in waves.
    She wants to maximize the emission reduction rates at each step of the process, as well as ensure there is no unnecessary installation and de-installation of such sensors.

    Dance numbers

    Meanwhile, Xia has been a member of MIT’s Fixation, Ridonkulous, and MissBehavior groups, and a traditional Chinese dance choreographer for the MIT Asian Dance Team. A dancer since she was 3, Xia started with Chinese traditional dance, and later added ballet and jazz. Because she is as much a dancer as a researcher, she has figured out how to make her schedule work.

    “Production weeks are always madness, with dancers running straight from class to dress rehearsals and shows all evening and coming back early next morning to take down lights and roll up marley [material that covers the stage floor],” she says. “As busy as it keeps me, I couldn’t have survived MIT without dance. I love the discipline, creativity, and most importantly the teamwork that dance demands of us. I really love the dance community here with my whole heart. These friends have inspired me and given me the love to power me through MIT.”

    Xia lives with her fellow Dance Team members at the off-campus Women’s Independent Living Group (WILG). “I really value WILG’s culture of independence, both in lifestyle — cooking, cleaning up after yourself, managing house facilities, etc. — and thought — questioning norms, staying away from status games, finding new passions.”

    In addition to her UROP, she’s wrapping up some graduation requirements, finishing up a research paper on sorting algorithms from her summer at the University of Minnesota Duluth Research Experience for Undergraduates in combinatorics, and deciding between PhD programs in math and computer science.

    “My biggest goal right now is to figure out how to combine my interests in mathematics and urban studies, and more broadly connect technical perspectives with human-centered work in a way that feels right to me,” she says.

    “Overall, MIT has given me so many avenues to explore that I would have never thought about before coming here, for which I’m infinitely grateful. Every time I find something new, it’s hard for me not to find it cool. There’s just so much out there to learn about. While it can feel overwhelming at times, I hope to continue that learning and exploration for the rest of my life.”

  • Exploring the cellular neighborhood

    Cells rely on complex molecular machines composed of protein assemblies to perform essential functions such as energy production, gene expression, and protein synthesis. To better understand how these machines work, scientists capture snapshots of them by isolating proteins from cells and using various methods to determine their structures. However, isolating proteins from cells also removes them from the context of their native environment, including protein interaction partners and cellular location.

    Recently, cryogenic electron tomography (cryo-ET) has emerged as a way to observe proteins in their native environment by imaging frozen cells at different angles to obtain three-dimensional structural information. This approach is exciting because it allows researchers to directly observe how and where proteins associate with each other, revealing the cellular neighborhood of those interactions within the cell.

    With the technology available to image proteins in their native environment, MIT graduate student Barrett Powell wondered if he could take it one step further: What if molecular machines could be observed in action? In a paper published March 8 in Nature Methods, Powell describes the method he developed, called tomoDRGN, for modeling structural differences of proteins in cryo-ET data that arise from protein motions or proteins binding to different interaction partners. These variations are known as structural heterogeneity. 

    Although Powell had joined the lab of MIT associate professor of biology Joey Davis as an experimental scientist, he recognized the potential impact of computational approaches in understanding structural heterogeneity within a cell. Previously, the Davis Lab developed a related methodology named cryoDRGN to understand structural heterogeneity in purified samples. As Powell and Davis saw cryo-ET rising in prominence in the field, Powell took on the challenge of re-imagining this framework to work in cells.

    When solving structures with purified samples, each particle is imaged only once. By contrast, cryo-ET data is collected by imaging each particle more than 40 times from different angles. That meant tomoDRGN needed to be able to merge the information from more than 40 images, which was where the project hit a roadblock: the amount of data led to an information overload.

    To address this, Powell rebuilt the cryoDRGN model to prioritize only the highest-quality data. Imaging the same particle multiple times causes radiation damage to accumulate, so the images acquired earlier tend to be of higher quality because the particles are less damaged.

    “By excluding some of the lower-quality data, the results were actually better than using all of the data — and the computational performance was substantially faster,” Powell says.
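    The prioritization idea itself is simple to express in code. The sketch below is illustrative only (the data structures and cutoff are ours, not tomoDRGN's actual implementation): group the tilt images by particle, then keep only the earliest-acquired, least radiation-damaged exposures.

    ```python
    # Illustrative sketch of dose-based filtering (not tomoDRGN's code):
    # each particle appears in 40+ tilt images; keep only the earliest,
    # least radiation-damaged exposures and drop the rest.
    from dataclasses import dataclass

    @dataclass
    class TiltImage:
        particle_id: int
        acquisition_order: int  # 0 = first exposure, least damage
        pixels: bytes           # placeholder for the image data

    def keep_best(tilts: list[TiltImage], n_keep: int = 8) -> list[TiltImage]:
        by_particle: dict[int, list[TiltImage]] = {}
        for t in tilts:
            by_particle.setdefault(t.particle_id, []).append(t)
        kept = []
        for images in by_particle.values():
            # Earlier exposures carry less accumulated damage.
            images.sort(key=lambda t: t.acquisition_order)
            kept.extend(images[:n_keep])
        return kept
    ```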

    Just as Powell was beginning work on testing his model, he had a stroke of luck: The authors of a groundbreaking new study, which visualized ribosomes inside cells at near-atomic resolution for the first time, shared their raw data on the Electron Microscopy Public Image Archive (EMPIAR). This dataset was an exemplary test case for Powell, through which he demonstrated that tomoDRGN could uncover structural heterogeneity within cryo-ET data.

    According to Powell, one exciting result is what tomoDRGN found surrounding a subset of ribosomes in the EMPIAR dataset. Some of the ribosomal particles were associated with a bacterial cell membrane and engaged in a process called cotranslational translocation. This occurs when a protein is being simultaneously synthesized and transported across a membrane. Researchers can use this result to make new hypotheses about how the ribosome functions with other protein machinery integral to transporting proteins outside of the cell, now guided by a structure of the complex in its native environment. 

    After seeing that tomoDRGN could resolve structural heterogeneity from a structurally diverse dataset, Powell was curious: How small of a population could tomoDRGN identify? For that test, he chose a protein named apoferritin, which is a commonly used benchmark for cryo-ET and is often treated as structurally homogeneous. Ferritin is a protein used for iron storage and is referred to as apoferritin when it lacks iron.

    Surprisingly, in addition to the expected particles, tomoDRGN revealed a minor population of ferritin particles — with iron bound — making up just 2 percent of the dataset, which had not previously been reported. This result further demonstrated tomoDRGN’s ability to identify structural states that occur so infrequently that they would be averaged out of a 3D reconstruction.

    Powell and other members of the Davis Lab are excited to see how tomoDRGN can be applied to further ribosomal studies and to other systems. Davis works on understanding how cells assemble, regulate, and degrade molecular machines, so the next steps include exploring ribosome biogenesis within cells in greater detail using this new tool.

    “What are the possible states that we may be losing during purification?” Davis asks. “Perhaps more excitingly, we can look at how they localize within the cell and what partners and protein complexes they may be interacting with.”

  • Study: Global deforestation leads to more mercury pollution

    About 10 percent of human-made mercury emissions into the atmosphere each year are the result of global deforestation, according to a new MIT study.

    The world’s vegetation, from the Amazon rainforest to the savannahs of sub-Saharan Africa, acts as a sink that removes the toxic pollutant from the air. However, if the current rate of deforestation remains unchanged or accelerates, the researchers estimate that net mercury emissions will keep increasing.

    “We’ve been overlooking a significant source of mercury, especially in tropical regions,” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.

    The researchers’ model shows that the Amazon rainforest plays a particularly important role as a mercury sink, contributing about 30 percent of the global land sink. Curbing Amazon deforestation could thus have a substantial impact on reducing mercury pollution.

    The team also estimates that global reforestation efforts could increase annual mercury uptake by about 5 percent. While this is significant, the researchers emphasize that reforestation alone should not be a substitute for worldwide pollution control efforts.

    “Countries have put a lot of effort into reducing mercury emissions, especially northern industrialized countries, and for very good reason. But 10 percent of the global anthropogenic source is substantial, and there is a potential for that to be even greater in the future. [Addressing these deforestation-related emissions] needs to be part of the solution,” says senior author Noelle Selin, a professor in IDSS and MIT’s Department of Earth, Atmospheric and Planetary Sciences.

    Feinberg and Selin are joined on the paper by co-authors Martin Jiskra, a former Swiss National Science Foundation Ambizione Fellow at the University of Basel; Pasquale Borrelli, a professor at Roma Tre University in Italy; and Jagannath Biswakarma, a postdoc at the Swiss Federal Institute of Aquatic Science and Technology. The paper appears today in Environmental Science and Technology.

    Modeling mercury

    Over the past few decades, scientists have generally focused on studying deforestation as a source of global carbon dioxide emissions. Mercury, a trace element, hasn’t received the same attention, partly because the terrestrial biosphere’s role in the global mercury cycle has only recently been better quantified.

    Plant leaves take up mercury from the atmosphere, in much the same way as they take up carbon dioxide. But unlike carbon dioxide, mercury doesn’t play an essential biological function for plants. Mercury largely stays within a leaf until it falls to the forest floor, where the mercury is absorbed by the soil.

    Mercury becomes a serious concern for humans if it ends up in water bodies, where it can become methylated by microorganisms. Methylmercury, a potent neurotoxin, can be taken up by fish and bioaccumulated through the food chain. This can lead to risky levels of methylmercury in the fish humans eat.

    “In soils, mercury is much more tightly bound than it would be if it were deposited in the ocean. The forests are doing a sort of ecosystem service, in that they are sequestering mercury for longer timescales,” says Feinberg, who is now a postdoc in the Blas Cabrera Institute of Physical Chemistry in Spain.

    In this way, forests reduce the amount of toxic methylmercury in oceans.

    Many studies of mercury focus on industrial sources, like burning fossil fuels, small-scale gold mining, and metal smelting. A global treaty, the 2013 Minamata Convention, calls on nations to reduce human-made emissions. However, it doesn’t directly consider impacts of deforestation.

    The researchers launched their study to fill in that missing piece.

    In past work, they had built a model to probe the role vegetation plays in mercury uptake. Using a series of land use change scenarios, they adjusted the model to quantify the role of deforestation.

    Evaluating emissions

    This chemical transport model tracks mercury from its emissions sources to where it is chemically transformed in the atmosphere and then ultimately to where it is deposited, mainly through rainfall or uptake into forest ecosystems.

    They divided the Earth into eight regions and performed simulations to calculate deforestation emissions factors for each, considering elements like type and density of vegetation, mercury content in soils, and historical land use.

    However, good data for some regions were hard to come by.

    They lacked measurements from tropical Africa or Southeast Asia — two areas that experience heavy deforestation. To get around this gap, they used simpler, offline models to simulate hundreds of scenarios, which helped them improve their estimates of the associated uncertainties.

    They also developed a new formulation for mercury emissions from soil. This formulation captures the fact that deforestation reduces leaf area, which increases the amount of sunlight that hits the ground and accelerates the outgassing of mercury from soils.

    The model divides the world into grid squares, each of which is a few hundred square kilometers. By changing land surface and vegetation parameters in certain squares to represent deforestation and reforestation scenarios, the researchers can capture impacts on the mercury cycle.

    Overall, they found that about 200 tons of mercury are emitted to the atmosphere as the result of deforestation, or about 10 percent of total human-made emissions. But in tropical and sub-tropical countries, deforestation emissions represent a higher percentage of total emissions. For example, in Brazil deforestation emissions are 40 percent of total human-made emissions.

    In addition, people often light fires to prepare tropical forested areas for agricultural activities, which causes more emissions by releasing mercury stored by vegetation.

    “If deforestation was a country, it would be the second highest emitting country, after China, which emits around 500 tons of mercury a year,” Feinberg adds.
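    The quoted figures fit together arithmetically. As a quick back-of-the-envelope check (numbers from the article; the script itself is ours):

    ```python
    # Back-of-the-envelope check of the figures quoted above (illustrative).
    deforestation = 200           # tons of mercury per year
    share = 0.10                  # deforestation's share of human-made emissions

    implied_total = deforestation / share
    print(implied_total)          # ~2,000 tons/year of human-made emissions

    china = 500                   # tons/year, the largest national source
    # At 200 tons/year, "deforestation the country" would rank second.
    print(deforestation < china)  # True
    ```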

    And since the Minamata Convention is now addressing primary mercury emissions, scientists can expect deforestation to become a larger fraction of human-made emissions in the future.

    “Policies to protect forests or cut them down have unintended effects beyond their target. It is important to consider the fact that these are systems, and they involve human activities, and we need to understand them better in order to actually solve the problems that we know are out there,” Selin says.

    By providing this first estimate, the team hopes to inspire more research in this area.

    In the future, they want to incorporate more dynamic Earth system models into their analysis, which would enable them to interactively track mercury uptake and better model the timescale of vegetation regrowth.

    “This paper represents an important advance in our understanding of global mercury cycling by quantifying a pathway that has long been suggested but not yet quantified. Much of our research to date has focused on primary anthropogenic emissions — those directly resulting from human activity via coal combustion or mercury-gold amalgam burning in artisanal and small-scale gold mining,” says Jackie Gerson, an assistant professor in the Department of Earth and Environmental Sciences at Michigan State University, who was not involved with this research. “This research shows that deforestation can also result in substantial mercury emissions and needs to be considered both in terms of global mercury models and land management policies. It therefore has the potential to advance our field scientifically as well as to promote policies that reduce mercury emissions via deforestation.”

    This work was funded, in part, by the U.S. National Science Foundation, the Swiss National Science Foundation, and the Swiss Federal Institute of Aquatic Science and Technology.

  • Creating new skills and new connections with MIT’s Quantitative Methods Workshop

    Starting on New Year’s Day, when many people were still clinging to holiday revelry, scores of students and faculty members from about a dozen partner universities instead flipped open their laptops for MIT’s Quantitative Methods Workshop, a jam-packed, weeklong introduction to how computational and mathematical techniques can be applied to neuroscience and biology research. But don’t think of QMW as a “crash course.” Instead, the program’s purpose is to help elevate each participant’s scientific outlook, both through the skills and concepts it imparts and the community it creates.

    “It broadens their horizons, it shows them significant applications they’ve never thought of, and introduces them to people whom as researchers they will come to know and perhaps collaborate with one day,” says Susan L. Epstein, a Hunter College computer science professor and education coordinator of MIT’s Center for Brains, Minds, and Machines, which hosts the program with the departments of Biology and Brain and Cognitive Sciences and The Picower Institute for Learning and Memory. “It is a model of interdisciplinary scholarship.”

    This year 83 undergraduates and faculty members from institutions that primarily serve groups underrepresented in STEM fields took part in the QMW, says organizer Mandana Sassanfar, senior lecturer and director of diversity and science outreach across the four hosting MIT entities. Since the workshop launched in 2010, it has engaged more than 1,000 participants, of whom more than 170 have gone on to participate in MIT Summer Research Programs (such as MSRP-BIO), and 39 have come to MIT for graduate school.

    Individual goals, shared experience

    Undergraduates and faculty in various STEM disciplines often come to QMW to gain an understanding of, or expand their expertise in, computational and mathematical data analysis. Computer science- and statistics-minded participants come to learn more about how such techniques can be applied in life sciences fields. In lectures; in hands-on labs where they used the computer programming language Python to process, analyze, and visualize data; and in less formal settings such as tours and lunches with MIT faculty, participants worked and learned together, and informed each other’s perspectives.

    Brain and Cognitive Sciences Professor Nancy Kanwisher delivers a lecture in MIT’s Building 46 on functional brain imaging to QMW participants.

    Photo: Mandana Sassanfar


    And regardless of their field of study, participants made connections with each other and with the MIT students and faculty who taught and spoke over the course of the week.

    Hunter College computer science sophomore Vlad Vostrikov says that while he has already worked with machine learning and other programming concepts, he was interested to “branch out” by seeing how they are used to analyze scientific datasets. He also valued the chance to learn about the experiences of the graduate students who teach QMW’s hands-on labs.

    “This was a good way to explore computational biology and neuroscience,” Vostrikov says. “I also really enjoy hearing from the people who teach us. It’s interesting to hear where they come from and what they are doing.”

    Jariatu Kargbo, a biology and chemistry sophomore at University of Maryland Baltimore County, says when she first learned of the QMW she wasn’t sure it was for her. It seemed very computation-focused. But her advisor, Holly Willoughby, encouraged Kargbo to attend to learn about how programming could be useful in future research — currently she is taking part in research on the retina at UMBC. More than that, Kargbo also realized it would be a good opportunity to make connections at MIT in advance of perhaps applying for MSRP this summer.

    “I thought this would be a great way to meet up with faculty and see what the environment is like here because I’ve never been to MIT before,” Kargbo says. “It’s always good to meet other people in your field and grow your network.”

    QMW is not just for students. It’s also for their professors, who said they can gain valuable professional education for their research and teaching.

    Fayuan Wen, an assistant professor of biology at Howard University, is no stranger to computational biology, having performed big data genetic analyses of sickle cell disease (SCD). But she’s mostly worked with the R programming language, and QMW’s focus is on Python. As she looks ahead to projects in which she wants to analyze genomic data to help predict disease outcomes in SCD and HIV, she says a QMW session delivered by biology graduate student Hannah Jacobs was perfectly on point.

    “This workshop has the skills I want to have,” Wen says.

    Moreover, Wen says she is looking to start a machine-learning class in the Howard biology department and was inspired by some of the teaching materials she encountered at QMW — for example, online curriculum modules developed by Taylor Baum, an MIT graduate student in electrical engineering and computer science and Picower Institute labs, and Paloma Sánchez-Jáuregui, a coordinator who works with Sassanfar.

    Tiziana Ligorio, a Hunter College computer science doctoral lecturer who together with Epstein teaches a deep machine-learning class at the City University of New York campus, felt similarly. Rather than require a bunch of prerequisites that might drive students away from the class, Ligorio was looking to QMW’s intense but introductory curriculum as a resource for designing a more inclusive way of getting students ready for the class.

    Instructive interactions

    Each day runs from 9 a.m. to 5 p.m., including morning and afternoon lectures and hands-on sessions. Class topics ranged from statistical data analysis and machine learning to brain-computer interfaces, brain imaging, signal processing of neural activity data, and cryogenic electron microscopy.

    “This workshop could not happen without dedicated instructors — grad students, postdocs, and faculty — who volunteer to give lectures, design and teach hands-on computer labs, and meet with students during the very first week of January,” Sassanfar says.

    MIT assistant professor of biology Brady Weissbourd (center) converses with QMW student participants during a lunch break.

    Photo: Mandana Sassanfar


    The sessions surround student lunches with MIT faculty members. For example, at midday Jan. 2, assistant professor of biology Brady Weissbourd, an investigator in the Picower Institute, sat down with seven students on one of Building 46’s curved sofas to field questions about his neuroscience research on jellyfish and how he uses quantitative techniques as part of that work. He also described what it’s like to be a professor, and other topics that came to the students’ minds.

    Then the participants all crossed Vassar Street to Building 26’s Room 152, where they formed different but similarly sized groups for the hands-on lab “Machine learning applications to studying the brain,” taught by Baum. She guided the class through Python exercises she developed illustrating “supervised” and “unsupervised” forms of machine learning, including how the latter method can be used to discern what a person is seeing based on magnetic readings of brain activity.
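    For readers outside the field, the lab’s two themes can be sketched in a few lines of Python; this toy example (scikit-learn on synthetic points) is ours, not the workshop’s actual teaching materials.

    ```python
    # Toy contrast between the lab's two themes (not the QMW materials):
    # supervised learning fits labeled examples, while unsupervised learning
    # finds structure, such as clusters, without any labels.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Stand-in for recordings: two noisy groups of 2D points.
    X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
    y = np.array([0] * 50 + [1] * 50)  # known labels (e.g., what was shown)

    supervised = LogisticRegression().fit(X, y)            # uses the labels
    unsupervised = KMeans(n_clusters=2, n_init=10).fit(X)  # ignores them

    print(supervised.score(X, y))    # accuracy on the labeled data
    print(unsupervised.labels_[:5])  # discovered groups, no labels needed
    ```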

    As students worked through the exercises, tablemates helped each other by supplementing Baum’s instruction. Ligorio, Vostrikov, and Kayla Blincow, assistant professor of biology at the University of the Virgin Islands, for instance, all leapt to their feet to help at their tables.

    Hunter College lecturer of computer science Tiziana Ligorio (standing) explains a Python programming concept to students at her table during a workshop session.

    Photo: David Orenstein


    At the end of the class, when Baum asked students what they had learned, they offered a litany of new knowledge. Survey data that Sassanfar and Sánchez-Jáuregui use to anonymously track QMW outcomes revealed many more such attestations of the value of the sessions. With a prompt asking how one might apply what they’ve learned, one respondent wrote: “Pursue a research career or endeavor in which I apply the concepts of computer science and neuroscience together.”

    Enduring connections

    While some new QMW attendees might only be able to speculate about how they’ll apply their new skills and relationships, Luis Miguel de Jesús Astacio could testify to how attending QMW as an undergraduate back in 2014 figured into a career where he is now a faculty member in physics at the University of Puerto Rico Rio Piedras Campus. After QMW, he returned to MIT that summer as a student in the lab of neuroscientist and Picower Professor Susumu Tonegawa. He came back again in 2016 to the lab of physicist and Francis Friedman Professor Mehran Kardar. What’s endured for the decade has been his connection to Sassanfar. So while he was once a student at QMW, this year he was back with a cohort of undergraduates as a faculty member.

    Michael Aldarondo-Jeffries, director of academic advancement programs at the University of Central Florida, seconded the value of the networking that takes place at QMW. He has brought students for a decade, including four this year. What he’s observed is that as students come together in settings like QMW or UCF’s McNair program, which helps to prepare students for graduate school, they become inspired about a potential future as researchers.

    “The thing that stands out is just the community that’s formed,” he says. “For many of the students, it’s the first time that they’re in a group that understands what they’re moving toward. They don’t have to explain why they’re excited to read papers on a Friday night.”

    Or why they are excited to spend a week including New Year’s Day at MIT learning how to apply quantitative methods to life sciences data.