More stories

  • Helping the cause of environmental resilience

    Haruko Wainwright, the Norman C. Rasmussen Career Development Professor in Nuclear Science and Engineering (NSE) and assistant professor in civil and environmental engineering at MIT, grew up in rural Japan, where many nuclear facilities are located. She remembers worrying about the facilities as a child. Wainwright was only 6 at the time of the Chernobyl accident in 1986, but still recollects it vividly.

    Those early memories have contributed to Wainwright’s determination to research how technologies can mold environmental resilience — the capability of mitigating the consequences of accidents and recovering from contamination.

    Wainwright believes that environmental monitoring can help improve resilience. She co-leads the U.S. Department of Energy (DOE)’s Advanced Long-term Environmental Monitoring Systems (ALTEMIS) project, which integrates technologies such as in situ sensors, geophysics, remote sensing, simulations, and artificial intelligence to establish new paradigms for monitoring. The project focuses on soil and groundwater contamination at more than 100 U.S. sites that were used for nuclear weapons production.

    As part of this research, which was featured last year in Environmental Science & Technology Journal, Wainwright is working on a machine learning framework for improving environmental monitoring strategies. She hopes the ALTEMIS project will enable the rapid detection of anomalies while ensuring the stability of residual contamination and waste disposal facilities.
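
    To make the idea concrete, here is a minimal sketch of one common approach to anomaly detection on a single sensor stream: flag readings that drift far from a trailing baseline. It is only an illustration of the general idea, not the ALTEMIS framework, and the window size, threshold, and data are hypothetical.

    ```python
    import numpy as np

    def flag_anomalies(readings, window=24, z_thresh=4.0):
        """Flag readings that deviate strongly from a trailing baseline."""
        readings = np.asarray(readings, dtype=float)
        flags = np.zeros(readings.shape, dtype=bool)
        for i in range(window, len(readings)):
            baseline = readings[i - window:i]
            mu, sigma = baseline.mean(), baseline.std()
            if sigma > 0 and abs(readings[i] - mu) > z_thresh * sigma:
                flags[i] = True
        return flags

    # Synthetic example: a step change appears partway through a noisy series.
    rng = np.random.default_rng(0)
    series = np.concatenate([rng.normal(100, 1, 200), rng.normal(110, 1, 50)])
    print(np.where(flag_anomalies(series))[0][:5])  # indices of first flagged readings
    ```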

    Childhood in rural Japan

    Even as a child, Wainwright was interested in physics, history, and a variety of other subjects.

    But growing up in a rural area was not ideal for someone interested in STEM. There were no engineers or scientists in the community and no science museums, either. “It was not so cool to be interested in science, and I never talked about my interest with anyone,” Wainwright recalls.

    Television and books were her only window into the world of science. “I did not study English until middle school and I had never been on a plane until college. I sometimes find it miraculous that I am now working in the U.S. and teaching at MIT,” she says.

    As she grew a little older, Wainwright heard a lot of discussions about nuclear facilities in the region and many stories about Hiroshima and Nagasaki.

    At the same time, giants like Marie Curie inspired her to pursue science. Nuclear physics was particularly fascinating. “At some point during high school, I started wondering ‘what are radiations, what is radioactivity, what is light,’” she recalls. Reading Richard Feynman’s books and trying to understand quantum mechanics made her want to study physics in college.

    Pursuing research in the United States

    Wainwright pursued an undergraduate degree in engineering physics at Kyoto University. After two research internships in the United States, she was impressed by the dynamic and fast-paced research environment in the country.

    And compared to Japan, there were “more women in science and engineering,” Wainwright says. She enrolled at the University of California at Berkeley in 2005, where she completed her doctorate in nuclear engineering with minors in statistics and civil and environmental engineering.

    Before moving to MIT NSE in 2022, Wainwright was a staff scientist in the Earth and Environmental Area at Lawrence Berkeley National Laboratory (LBNL). She worked on a variety of topics, including radioactive contamination, climate science, CO2 sequestration, precision agriculture, and watershed science. Her time at LBNL helped Wainwright build a solid foundation in a variety of environmental sensors and monitoring and simulation methods across different earth science disciplines.

    Empowering communities through monitoring

    One of the most compelling takeaways from Wainwright’s early research: People trust actual measurements and data as facts, even though they are skeptical about models and predictions. “I talked with many people living in Fukushima prefecture. Many of them have dosimeters and measure radiation levels on their own. They might not trust the government, but they trust their own data and are then convinced that it is safe to live there and to eat local food,” Wainwright says.

    She has been impressed that area citizens have gained significant knowledge about radiation and radioactivity through these efforts. “But they are often frustrated that people living far away, in cities like Tokyo, still avoid agricultural products from Fukushima,” Wainwright says.

    Wainwright thinks that data derived from environmental monitoring — through proper visualization and communication — can address misconceptions and fake news that often hurt people near contaminated sites.

    Wainwright is now interested in how these technologies — tested with real data at contaminated sites — can be proactively used for existing and future nuclear facilities “before contamination happens,” as she explored for Nuclear News. “I don’t think it is a good idea to simply dismiss someone’s concern as irrational. Showing credible data has been much more effective to provide assurance. Or a proper monitoring network would enable us to minimize contamination or support emergency responses when accidents happen,” she says.

    Educating communities and students

    Part of empowering communities involves improving their ability to process science-based information. “Potentially hazardous facilities always end up in rural regions; minorities’ concerns are often ignored. The problem is that these regions don’t produce so many scientists or policymakers; they don’t have a voice,” Wainwright says. “I am determined to dedicate my time to improve STEM education in rural regions and to increase the voice in these regions.”

    In a project funded by DOE, she collaborates with a team of researchers at the University of Alaska — the Alaska Center for Energy and Power and Teaching Through Technology program — aiming to improve STEM education for rural and indigenous communities. “Alaska is an important place for energy transition and environmental justice,” Wainwright says. Micro-nuclear reactors can potentially improve the lives of rural communities that bear the brunt of the high cost of fuel and transportation. However, there is a distrust of nuclear technologies, stemming from past nuclear weapon testing. At the same time, Alaska has vast metal mining resources for renewable energy and batteries. And there are concerns about environmental contamination from mining and various sources. The team’s vision is much broader, she points out. “The focus is on broader environmental monitoring technologies and relevant STEM education, addressing general water and air qualities,” Wainwright says.

    The issues also weave into the courses Wainwright teaches at MIT. “I think it is important for engineering students to be aware of environmental justice related to energy waste and mining as well as past contamination events and their recovery,” she says. “It is not OK just to send waste to, or develop mines in, rural regions, which could be a special place for some people. We need to make sure that these developments will not harm the environment and health of local communities.” Wainwright also hopes that this knowledge will ultimately encourage students to think creatively about engineering designs that minimize waste or recycle material.

    The last question of the final quiz of one of her recent courses was: Assume that you store high-level radioactive waste in your “backyard.” What technical strategies would make you and your family feel safe? “All students thought about this question seriously and many suggested excellent points, including those addressing environmental monitoring,” Wainwright says. “That made me hopeful about the future.”

  • Learning to grow machine-learning models

    It’s no secret that OpenAI’s ChatGPT has some incredible capabilities — for instance, the chatbot can write poetry that resembles Shakespearean sonnets or debug code for a computer program. These abilities are made possible by the massive machine-learning model that ChatGPT is built upon. Researchers have found that when these types of models become large enough, extraordinary capabilities emerge.

    But bigger models also require more time and money to train. The training process involves showing hundreds of billions of examples to a model. Gathering so much data is an involved process in itself. Then come the monetary and environmental costs of running many powerful computers for days or weeks to train a model that may have billions of parameters. 

    “It’s been estimated that training models at the scale of what ChatGPT is hypothesized to run on could take millions of dollars, just for a single training run. Can we improve the efficiency of these training methods, so we can still get good models in less time and for less money? We propose to do this by leveraging smaller language models that have previously been trained,” says Yoon Kim, an assistant professor in MIT’s Department of Electrical Engineering and Computer Science and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    Rather than discarding a previous version of a model, Kim and his collaborators use it as a building block for a new model. Using machine learning, their method learns to “grow” a larger model from a smaller model in a way that encodes knowledge the smaller model has already gained. This enables faster training of the larger model.

    Their technique saves about 50 percent of the computational cost required to train a large model, compared to methods that train a new model from scratch. Plus, the models trained using the MIT method performed as well as, or better than, models trained with other techniques that also use smaller models to enable faster training of larger models.

    Reducing the time it takes to train huge models could help researchers make advancements faster with less expense, while also reducing the carbon emissions generated during the training process. It could also enable smaller research groups to work with these massive models, potentially opening the door to many new advances.

    “As we look to democratize these types of technologies, making training faster and less expensive will become more important,” says Kim, senior author of a paper on this technique.

    Kim and his graduate student Lucas Torroba Hennigen wrote the paper with lead author Peihao Wang, a graduate student at the University of Texas at Austin, as well as others at the MIT-IBM Watson AI Lab and Columbia University. The research will be presented at the International Conference on Learning Representations.

    The bigger the better

    Large language models like GPT-3, which is at the core of ChatGPT, are built using a neural network architecture called a transformer. A neural network, loosely based on the human brain, is composed of layers of interconnected nodes, or “neurons.” Each neuron contains parameters, which are variables learned during the training process that the neuron uses to process data.

    Transformer architectures are unique because, as these types of neural network models get bigger, they achieve much better results.

    “This has led to an arms race of companies trying to train larger and larger transformers on larger and larger datasets. More so than other architectures, it seems that transformer networks get much better with scaling. We’re just not exactly sure why this is the case,” Kim says.

    These models often have hundreds of millions or billions of learnable parameters. Training all these parameters from scratch is expensive, so researchers seek to accelerate the process.

    One effective technique is known as model growth. Using the model growth method, researchers can increase the size of a transformer by copying neurons, or even entire layers of a previous version of the network, then stacking them on top. They can make a network wider by adding new neurons to a layer or make it deeper by adding additional layers of neurons.
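
    As a concrete illustration of copy-based growth, the sketch below widens a single linear layer by duplicating randomly chosen output neurons. It is a simplified, hypothetical example rather than any specific published method; a function-preserving scheme would also adjust the next layer's incoming weights.

    ```python
    import torch
    import torch.nn as nn

    def widen_linear(layer: nn.Linear, new_out: int) -> nn.Linear:
        """Grow a linear layer's output width by copying existing neurons."""
        old_out, d_in = layer.weight.shape
        assert new_out >= old_out
        idx = torch.randint(0, old_out, (new_out - old_out,))  # neurons to duplicate
        wider = nn.Linear(d_in, new_out, bias=layer.bias is not None)
        with torch.no_grad():
            wider.weight.copy_(torch.cat([layer.weight, layer.weight[idx]], dim=0))
            if layer.bias is not None:
                wider.bias.copy_(torch.cat([layer.bias, layer.bias[idx]], dim=0))
        return wider

    small = nn.Linear(8, 16)
    large = widen_linear(small, 24)   # 16 output units grown to 24
    print(large.weight.shape)         # torch.Size([24, 8])
    ```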

    In contrast to previous approaches for model growth, parameters associated with the new neurons in the expanded transformer are not just copies of the smaller network’s parameters, Kim explains. Rather, they are learned combinations of the parameters of the smaller model.

    Learning to grow

    Kim and his collaborators use machine learning to learn a linear mapping of the parameters of the smaller model. This linear map is a mathematical operation that transforms a set of input values, in this case the smaller model’s parameters, to a set of output values, in this case the parameters of the larger model.

    Their method, which they call a learned Linear Growth Operator (LiGO), learns to expand the width and depth of a larger network from the parameters of a smaller network in a data-driven way.
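
    The sketch below illustrates the general idea of such a learned growth operator on a single weight matrix: two trainable matrices map the small layer's parameters to the larger layer's parameters, so the new weights are learned combinations of the old ones rather than copies. The class name, shapes, and initialization are illustrative assumptions, not the paper's exact parameterization.

    ```python
    import torch
    import torch.nn as nn

    class WidthGrowthOperator(nn.Module):
        """Learned linear map from a small weight matrix to a larger one:
        W_big = R @ W_small @ C, where R and C are trained on data."""

        def __init__(self, d_out_small, d_out_big, d_in_small, d_in_big):
            super().__init__()
            self.R = nn.Parameter(torch.randn(d_out_big, d_out_small) / d_out_small**0.5)
            self.C = nn.Parameter(torch.randn(d_in_small, d_in_big) / d_in_small**0.5)

        def forward(self, w_small):           # w_small: (d_out_small, d_in_small)
            return self.R @ w_small @ self.C  # -> (d_out_big, d_in_big)

    op = WidthGrowthOperator(256, 512, 256, 512)
    w_small = torch.randn(256, 256)            # a layer from the pretrained small model
    w_big = op(w_small)                        # initialization for the wider layer
    print(w_big.shape)                         # torch.Size([512, 512])
    ```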

    But the smaller model may actually be quite large — perhaps it has a hundred million parameters — and researchers might want to make a model with a billion parameters. So the LiGO technique breaks the linear map into smaller pieces that a machine-learning algorithm can handle.

    LiGO also expands width and depth simultaneously, which makes it more efficient than other methods. A user can tune how wide and deep they want the larger model to be when they input the smaller model and its parameters, Kim explains.

    When they compared their technique to the process of training a new model from scratch, as well as to model-growth methods, it was faster than all the baselines. Their method saves about 50 percent of the computational costs required to train both vision and language models, while often improving performance.

    The researchers also found they could use LiGO to accelerate transformer training even when they didn’t have access to a smaller, pretrained model.

    “I was surprised by how much better all the methods, including ours, did compared to the random initialization, train-from-scratch baselines,” Kim says.

    In the future, Kim and his collaborators are looking forward to applying LiGO to even larger models.

    The work was funded, in part, by the MIT-IBM Watson AI Lab, Amazon, the IBM Research AI Hardware Center, the Center for Computational Innovation at Rensselaer Polytechnic Institute, and the U.S. Army Research Office.

  • Minimizing electric vehicles’ impact on the grid

    National and global plans to combat climate change include increasing the electrification of vehicles and the percentage of electricity generated from renewable sources. But some projections show that these trends might require costly new power plants to meet peak loads in the evening when cars are plugged in after the workday. What’s more, overproduction of power from solar farms during the daytime can waste valuable electricity-generation capacity.

    In a new study, MIT researchers have found that it’s possible to mitigate or eliminate both these problems without the need for advanced technological systems of connected devices and real-time communications, which could add to costs and energy consumption. Instead, strategically placing charging stations for electric vehicles (EVs), rather than letting them spring up anywhere, and setting up systems that delay the start of charging could potentially make all the difference.

    The study, published today in the journal Cell Reports Physical Science, is by Zachary Needell PhD ’22, postdoc Wei Wei, and Professor Jessika Trancik of MIT’s Institute for Data, Systems, and Society.

    In their analysis, the researchers used data collected in two sample cities: New York and Dallas. The data were gathered from, among other sources, anonymized records collected via onboard devices in vehicles, and surveys that carefully sampled populations to cover variable travel behaviors. They showed the times of day cars are used and for how long, and how much time the vehicles spend at different kinds of locations — residential, workplace, shopping, entertainment, and so on.

    The findings, Trancik says, “round out the picture on the question of where to strategically locate chargers to support EV adoption and also support the power grid.”

    Better availability of charging stations at workplaces, for example, could help to soak up peak power being produced at midday from solar power installations, which might otherwise go to waste because it is not economical to build enough battery or other storage capacity to save all of it for later in the day. Thus, workplace chargers can provide a double benefit, helping to reduce the evening peak load from EV charging and also making use of the solar electricity output.

    These effects on the electric power system are considerable, especially if the system must meet charging demands for a fully electrified personal vehicle fleet alongside the peaks in other demand for electricity, for example on the hottest days of the year. If unmitigated, the evening peaks in EV charging demand could require installing upwards of 20 percent more power-generation capacity, the researchers say.

    “Slow workplace charging can be more preferable than faster charging technologies for enabling a higher utilization of midday solar resources,” Wei says.

    Meanwhile, with delayed home charging, each EV charger could be paired with a simple app that estimates when to begin its charging cycle so the car is charged just before it is needed the next day. Unlike other proposals that require a centralized control of the charging cycle, such a system needs no interdevice communication of information and can be preprogrammed — and can accomplish a major shift in the demand on the grid caused by increasing EV penetration. The reason it works so well, Trancik says, is the natural variability in driving behaviors across individuals in a population.
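
    A back-of-the-envelope sketch of that kind of preprogrammed delay is shown below: given the next departure time, the energy to replenish, and the charger's power, compute the latest reasonable start time. The numbers, function name, and safety buffer are hypothetical, not values from the study.

    ```python
    from datetime import datetime, timedelta

    def charging_start_time(departure: datetime, energy_needed_kwh: float,
                            charger_kw: float, buffer_minutes: int = 30) -> datetime:
        """Latest time to begin charging so the battery is full shortly before departure."""
        hours = energy_needed_kwh / charger_kw
        return departure - timedelta(hours=hours, minutes=buffer_minutes)

    # Example: 32 kWh needed on a 7.2 kW home charger, leaving at 7:30 a.m.
    leave = datetime(2023, 3, 15, 7, 30)
    print(charging_start_time(leave, 32.0, 7.2))  # roughly 2:33 a.m.
    ```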

    By “home charging,” the researchers aren’t only referring to charging equipment in individual garages or parking areas. They say it’s essential to make charging stations available in on-street parking locations and in apartment building parking areas as well.

    Trancik says the findings highlight the value of combining the two measures — workplace charging and delayed home charging — to reduce peak electricity demand, store solar energy, and conveniently meet drivers’ charging needs on all days. As the team showed in earlier research, home charging can be a particularly effective component of a strategic package of charging locations; workplace charging, they have found, is not a good substitute for home charging for meeting drivers’ needs on all days.

    “Given that there’s a lot of public money going into expanding charging infrastructure,” Trancik says, “how do you incentivize the location such that this is going to be efficiently and effectively integrated into the power grid without requiring a lot of additional capacity expansion?” This research offers some guidance to policymakers on where to focus rules and incentives.

    “I think one of the fascinating things about these findings is that by being strategic you can avoid a lot of physical infrastructure that you would otherwise need,” she adds. “Your electric vehicles can displace some of the need for stationary energy storage, and you can also avoid the need to expand the capacity of power plants, by thinking about the location of chargers as a tool for managing demands — where they occur and when they occur.”

    Delayed home charging could make a surprising amount of difference, the team found. “It’s basically incentivizing people to begin charging later. This can be something that is preprogrammed into your chargers. You incentivize people to delay the onset of charging by a bit, so that not everyone is charging at the same time, and that smooths out the peak.”

    Such a program would require some advance commitment on the part of participants. “You would need to have enough people committing to this program in advance to avoid the investment in physical infrastructure,” Trancik says. “So, if you have enough people signing up, then you essentially don’t have to build those extra power plants.”

    It’s not a given that all of this would line up just right, and putting in place the right mix of incentives would be crucial. “If you want electric vehicles to act as an effective storage technology for solar energy, then the [EV] market needs to grow fast enough in order to be able to do that,” Trancik says.

    To best use public funds to help make that happen, she says, “you can incentivize charging installations, which would go through ideally a competitive process — in the private sector, you would have companies bidding for different projects, but you can incentivize installing charging at workplaces, for example, to tap into both of these benefits.” Chargers people can access when they are parked near their residences are also important, Trancik adds, but for other reasons. Home charging is one of the ways to meet charging needs while avoiding inconvenient disruptions to people’s travel activities.

    The study was supported by the European Regional Development Fund Operational Program for Competitiveness and Internationalization, the Lisbon Portugal Regional Operation Program, and the Portuguese Foundation for Science and Technology.

  • New method accelerates data retrieval in huge databases

    Hashing is a core operation in most online databases, like a library catalogue or an e-commerce website. A hash function generates codes that directly determine the location where data will be stored. These codes make the data easier to find and retrieve.

    However, because traditional hash functions generate codes randomly, sometimes two pieces of data can be hashed with the same value. This causes collisions — a search for one item points the user to many pieces of data with the same hash value. It takes much longer to find the right one, resulting in slower searches and reduced performance.

    Certain types of hash functions, known as perfect hash functions, are designed to place the data in a way that prevents collisions. But they are time-consuming to construct for each dataset and take more time to compute than traditional hash functions.

    Since hashing is used in so many applications, from database indexing to data compression to cryptography, fast and efficient hash functions are critical. So, researchers from MIT and elsewhere set out to see if they could use machine learning to build better hash functions.

    They found that, in certain situations, using learned models instead of traditional hash functions could result in half as many collisions. These learned models are created by running a machine-learning algorithm on a dataset to capture specific characteristics. The team’s experiments also showed that learned models were often more computationally efficient than perfect hash functions.

    “What we found in this work is that in some situations we can come up with a better tradeoff between the computation of the hash function and the collisions we will face. In these situations, the computation time for the hash function can be increased a bit, but at the same time its collisions can be reduced very significantly,” says Ibrahim Sabek, a postdoc in the MIT Data Systems Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    Their research, which will be presented at the 2023 International Conference on Very Large Databases, demonstrates how a hash function can be designed to significantly speed up searches in a huge database. For instance, their technique could accelerate computational systems that scientists use to store and analyze DNA, amino acid sequences, or other biological information.

    Sabek is the co-lead author of the paper with Department of Electrical Engineering and Computer Science (EECS) graduate student Kapil Vaidya. They are joined by co-authors Dominick Horn, a graduate student at the Technical University of Munich; Andreas Kipf, an MIT postdoc; Michael Mitzenmacher, professor of computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences; and senior author Tim Kraska, associate professor of EECS at MIT and co-director of the Data, Systems, and AI Lab.

    Hashing it out

    Given a data input, or key, a traditional hash function generates a random number, or code, that corresponds to the slot where that key will be stored. To use a simple example, if there are 10 keys to be put into 10 slots, the function would generate a random integer between 1 and 10 for each input. It is highly probable that two keys will end up in the same slot, causing collisions.
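
    A quick simulation makes the point: even with exactly as many slots as keys, random slot assignment usually produces collisions. This is a generic illustration, not the authors' code.

    ```python
    import random

    random.seed(0)
    n_keys = n_slots = 10

    # A "traditional" hash assigns each key an effectively random slot.
    slots = [random.randrange(n_slots) for _ in range(n_keys)]
    collisions = n_keys - len(set(slots))
    print(slots)        # several keys typically land in the same slot
    print(collisions)   # number of keys that collided with an earlier key
    ```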

    Perfect hash functions provide a collision-free alternative. Researchers give the function some extra knowledge, such as the number of slots the data are to be placed into. Then it can perform additional computations to figure out where to put each key to avoid collisions. However, these added computations make the function harder to create and less efficient.

    “We were wondering, if we know more about the data — that it will come from a particular distribution — can we use learned models to build a hash function that can actually reduce collisions?” Vaidya says.

    A data distribution shows all possible values in a dataset, and how often each value occurs. The distribution can be used to calculate the probability that a particular value is in a data sample.

    The researchers took a small sample from a dataset and used machine learning to approximate the shape of the data’s distribution, or how the data are spread out. The learned model then uses the approximation to predict the location of a key in the dataset.
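
    The sketch below captures the flavor of this idea using an empirical cumulative distribution function (CDF) built from a sample: a key's approximate CDF value, scaled by the number of slots, becomes its slot, so keys spread roughly evenly when the approximation is good. The class, data, and sample size are hypothetical; the models in the paper are more sophisticated piecewise approximations.

    ```python
    import numpy as np

    class LearnedHash:
        """Distribution-aware hash: slot is roughly CDF(key) * n_slots."""

        def __init__(self, sample, n_slots):
            self.sample = np.sort(np.asarray(sample, dtype=float))
            self.n_slots = n_slots

        def __call__(self, key):
            # Empirical CDF via the key's rank within the sorted sample.
            cdf = np.searchsorted(self.sample, key, side="right") / len(self.sample)
            return min(int(cdf * self.n_slots), self.n_slots - 1)

    rng = np.random.default_rng(1)
    keys = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)   # skewed key distribution
    h = LearnedHash(rng.choice(keys, 1_000, replace=False), n_slots=100_000)
    print([h(k) for k in keys[:5]])   # predicted slots for the first few keys
    ```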

    They found that learned models were easier to build and faster to run than perfect hash functions and that they led to fewer collisions than traditional hash functions if data are distributed in a predictable way. But if the data are not predictably distributed because gaps between data points vary too widely, using learned models might cause more collisions.

    “We may have a huge number of data inputs, and the gaps between consecutive inputs are very different, so learning a model to capture the data distribution of these inputs is quite difficult,” Sabek explains.

    Fewer collisions, faster results

    When data were predictably distributed, learned models could reduce the ratio of colliding keys in a dataset from 30 percent to 15 percent, compared with traditional hash functions. They were also able to achieve better throughput than perfect hash functions. In the best cases, learned models reduced the runtime by nearly 30 percent.

    As they explored the use of learned models for hashing, the researchers also found that throughput was impacted most by the number of sub-models. Each learned model is composed of smaller linear models that approximate the data distribution for different parts of the data. With more sub-models, the learned model produces a more accurate approximation, but it takes more time.

    “At a certain threshold of sub-models, you get enough information to build the approximation that you need for the hash function. But after that, it won’t lead to more improvement in collision reduction,” Sabek says.

    Building off this analysis, the researchers want to use learned models to design hash functions for other types of data. They also plan to explore learned hashing for databases in which data can be inserted or deleted. When data are updated in this way, the model needs to change accordingly, but changing the model while maintaining accuracy is a difficult problem.

    “We want to encourage the community to use machine learning inside more fundamental data structures and algorithms. Any kind of core data structure presents us with an opportunity to use machine learning to capture data properties and get better performance. There is still a lot we can explore,” Sabek says.

    “Hashing and indexing functions are core to a lot of database functionality. Given the variety of users and use cases, there is no one size fits all hashing, and learned models help adapt the database to a specific user. This paper is a great balanced analysis of the feasibility of these new techniques and does a good job of talking rigorously about the pros and cons, and helps us build our understanding of when such methods can be expected to work well,” says Murali Narayanaswamy, a principal machine learning scientist at Amazon, who was not involved with this work. “Exploring these kinds of enhancements is an exciting area of research both in academia and industry, and the kind of rigor shown in this work is critical for these methods to have large impact.”

    This work was supported, in part, by Google, Intel, Microsoft, the U.S. National Science Foundation, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

  • Large language models are biased. Can logic help save them?

    Turns out, even language models “think” they’re biased. When prompted, ChatGPT responded as follows: “Yes, language models can have biases, because the training data reflects the biases present in society from which that data was collected. For example, gender and racial biases are prevalent in many real-world datasets, and if a language model is trained on that, it can perpetuate and amplify these biases in its predictions.” A well-known but dangerous problem.

    Humans (typically) can dabble with both logical and stereotypical reasoning when learning. Still, language models mainly mimic the latter, an unfortunate narrative we’ve seen play out ad nauseam when the ability to employ reasoning and critical thinking is absent. So would injecting logic into the fray be enough to mitigate such behavior? 

    Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) had an inkling that it might, so they set off to examine if logic-aware language models could significantly avoid more harmful stereotypes. They trained a language model to predict the relationship between two sentences, based on context and semantic meaning, using a dataset with labels for text snippets detailing if a second phrase “entails,” “contradicts,” or is neutral with respect to the first one. Using this dataset — natural language inference — they found that the newly trained models were significantly less biased than other baselines, without any extra data, data editing, or additional training algorithms.

    For example, with the premise “the person is a doctor” and the hypothesis “the person is masculine,” using these logic-trained models, the relationship would be classified as “neutral,” since there’s no logic that says the person is a man. With more common language models, two sentences might seem to be correlated due to some bias in training data, like “doctor” might be pinged with “masculine,” even when there’s no evidence that the statement is true. 
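
    Readers can try this kind of textual-entailment classification with an off-the-shelf MNLI-finetuned checkpoint from the Hugging Face hub, as in the snippet below. This uses a generic public model for illustration, not the model trained in this work, so its prediction may differ.

    ```python
    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    name = "roberta-large-mnli"   # any MNLI-finetuned checkpoint works for the demo
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForSequenceClassification.from_pretrained(name)

    premise = "The person is a doctor."
    hypothesis = "The person is masculine."

    inputs = tok(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    print(model.config.id2label[int(logits.argmax(dim=-1))])
    # Ideally "NEUTRAL": nothing in the premise logically entails the hypothesis.
    ```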

    At this point, the omnipresent nature of language models is well-known: Applications in natural language processing, speech recognition, conversational AI, and generative tasks abound. While not a nascent field of research, growing pains can take a front seat as they increase in complexity and capability. 

    “Current language models suffer from issues with fairness, computational resources, and privacy,” says MIT CSAIL postdoc Hongyin Luo, the lead author of a new paper about the work. “Many estimates say that the CO2 emission of training a language model can be higher than the lifelong emission of a car. Running these large language models is also very expensive because of the amount of parameters and the computational resources they need. With privacy, state-of-the-art language models developed by places like ChatGPT or GPT-3 have their APIs where you must upload your language, but there’s no place for sensitive information regarding things like health care or finance. To solve these challenges, we proposed a logical language model that we qualitatively measured as fair, is 500 times smaller than the state-of-the-art models, can be deployed locally, and with no human-annotated training samples for downstream tasks. Our model uses 1/400 the parameters compared with the largest language models, has better performance on some tasks, and significantly saves computation resources.” 

    This model, which has 350 million parameters, outperformed some very large-scale language models with 100 billion parameters on logic-language understanding tasks. The team compared, for example, popular BERT pretrained language models with their “textual entailment” ones on stereotype, profession, and emotion bias tests. The latter outperformed other models with significantly lower bias, while preserving the language modeling ability. The “fairness” was evaluated with something called ideal context association (iCAT) tests, where higher iCAT scores mean fewer stereotypes. The model had higher than 90 percent iCAT scores, while other strong language understanding models ranged between 40 and 80.

    Luo wrote the paper alongside MIT Senior Research Scientist James Glass. They will present the work at the Conference of the European Chapter of the Association for Computational Linguistics in Croatia. 

    Unsurprisingly, the original pretrained language models the team examined were teeming with bias, confirmed by a slew of reasoning tests demonstrating how professional and emotion terms are significantly biased to the feminine or masculine words in the gender vocabulary. 

    With professions, a language model (which is biased) thinks that “flight attendant,” “secretary,” and “physician’s assistant” are feminine jobs, while “fisherman,” “lawyer,” and “judge” are masculine. Concerning emotions, a language model thinks that “anxious,” “depressed,” and “devastated” are feminine.

    While we may still be far away from a neutral language model utopia, this research is ongoing in that pursuit. Currently, the model is just for language understanding, so it’s based on reasoning among existing sentences. Unfortunately, it can’t generate sentences for now, so the next step for the researchers would be targeting the uber-popular generative models built with logical learning to ensure more fairness with computational efficiency. 

    “Although stereotypical reasoning is a natural part of human recognition, fairness-aware people conduct reasoning with logic rather than stereotypes when necessary,” says Luo. “We show that language models have similar properties. A language model without explicit logic learning makes plenty of biased reasoning, but adding logic learning can significantly mitigate such behavior. Furthermore, with demonstrated robust zero-shot adaptation ability, the model can be directly deployed to different tasks with more fairness, privacy, and better speed.”

  • Report: CHIPS Act just the first step in addressing threats to US leadership in advanced computing

    When Liu He, a Chinese economist, politician, and “chip czar,” was tapped to lead the charge in a chipmaking arms race with the United States, his message lingered in the air, leaving behind a dewy glaze of tension: “For our country, technology is not just for growth… it is a matter of survival.”

    Once upon a time, the United States’ early technological prowess positioned the nation to outpace foreign rivals and cultivate a competitive advantage for domestic businesses. Yet, 30 years later, America’s lead in advanced computing is continuing to wane. What happened?

    A new report from an MIT researcher and two colleagues sheds light on the decline in U.S. leadership. The scientists looked at high-level measures to examine the shrinkage: overall capabilities, supercomputers, applied algorithms, and semiconductor manufacturing. Through their analysis, they found that not only has China closed the computing gap with the U.S., but nearly 80 percent of American leaders in the field believe that their Chinese competitors are improving capabilities faster — which, the team says, suggests a “broad threat to U.S. competitiveness.”

    To delve deeply into the fray, the scientists conducted the Advanced Computing Users Survey, sampling 120 top-tier organizations, including universities, national labs, federal agencies, and industry. The team estimates that this group comprises between one-third and one-half of all the most significant computing users in the United States.

    “Advanced computing is crucial to scientific improvement, economic growth and the competitiveness of U.S. companies,” says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), who helped lead the study.

    Thompson, who is also a principal investigator at MIT’s Initiative on the Digital Economy, wrote the paper with Chad Evans, executive vice president and secretary and treasurer to the board at the Council on Competitiveness, and Daniel Armbrust, who is the co-founder, initial CEO, and member of the board of directors at Silicon Catalyst and former president of SEMATECH, the semiconductor consortium that developed industry roadmaps.

    The semiconductor, supercomputer, and algorithm bonanza

    Supercomputers — the room-sized, “giant calculators” of the hardware world — are an industry no longer dominated by the United States. Through 2015, about half of the most powerful computers were sitting firmly in the U.S., and China was growing slowly from a very low base. But in the past six years, China has swiftly caught up, reaching near parity with America.

    This disappearing lead matters. Eighty-four percent of U.S. survey respondents said they’re computationally constrained in running essential programs. “This result was telling, given who our respondents are: the vanguard of American research enterprises and academic institutions with privileged access to advanced national supercomputing resources,” says Thompson. 

    With regard to advanced algorithms, the U.S. has historically fronted the charge, with two-thirds of all significant improvements dominated by U.S.-born inventors. But in recent decades, U.S. dominance in algorithms has relied on bringing in foreign talent to work in the U.S., which the researchers say is now in jeopardy. China has outpaced the U.S. and many other countries in churning out PhDs in STEM fields since 2007, with one report projecting that by 2025 China will be home to nearly twice as many STEM PhDs as the U.S. China’s rise in algorithms can also be seen with the “Gordon Bell Prize,” an achievement for outstanding work in harnessing the power of supercomputers in varied applications. U.S. winners historically dominated the prize, but China has equaled or surpassed Americans’ performance in the past five years.

    While the researchers note the CHIPS and Science Act of 2022 is a critical step in re-establishing the foundation of success for advanced computing, they propose recommendations to the U.S. Office of Science and Technology Policy. 

    First, they suggest democratizing access to U.S. supercomputing by building more mid-tier systems that push boundaries for many users, as well as building tools so users scaling up computations can make a smaller up-front resource investment. They also recommend increasing the pool of innovators by funding the training of many more electrical engineers and computer scientists, supported by longer-term U.S. residency incentives and scholarships. Finally, in addition to this new framework, the scientists urge taking advantage of what already exists by providing the private sector access to experimentation with high-performance computing through supercomputing sites in academia and national labs.

    All that and a bag of chips

    Computing improvements depend on continuous advances in transistor density and performance, but creating robust new chips necessitates a harmonious blend of design and manufacturing.

    Until recently, China was not known for designing noteworthy chips. In fact, over the past five decades, the U.S. designed most of them. But this changed in the past six years, when China created the HiSilicon Kirin 9000, propelling itself to the international frontier. This success was mainly obtained through partnerships with leading global chip designers that began in the 2000s. China now has 14 companies among the world’s top 50 fabless designers. A decade ago, there was only one.

    Competitive semiconductor manufacturing has been more mixed, where U.S.-led policies and internal execution issues have slowed China’s rise, but as of July 2022, the Semiconductor Manufacturing International Corporation (SMIC) has shown evidence of 7-nanometer logic, which was not expected until much later. However, with extreme ultraviolet lithography tools subject to export restrictions, progress below 7 nm would require expensive domestic technology development. Currently, China is only at parity or better in two out of 12 segments of the semiconductor supply chain. Still, with government policy and investments, the team expects a whopping increase to seven segments in 10 years. So, for the moment, the U.S. retains leadership in hardware manufacturing, but with fewer dimensions of advantage.

    The authors recommend that the White House Office of Science and Technology Policy work with key national agencies, such as the U.S. Department of Defense, U.S. Department of Energy, and the National Science Foundation, to define initiatives to build the hardware and software systems needed for important computing paradigms and workloads critical for economic and security goals. “It is crucial that American enterprises can get the benefit of faster computers,” says Thompson. “With Moore’s Law slowing down, the best way to do this is to create a portfolio of specialized chips (or “accelerators”) that are customized to our needs.”

    The scientists further believe that to lead the next generation of computing, four areas must be addressed. First, by issuing grand challenges to the CHIPS Act National Semiconductor Technology Center, researchers and startups would be motivated to invest in research and development and to seek startup capital for new technologies in areas such as spintronics, neuromorphics, optical and quantum computing, and optical interconnect fabrics. By supporting allies in passing similar acts, overall investment in these technologies would increase, and supply chains would become more aligned and secure. Establishing test beds for researchers to test algorithms on new computing architectures and hardware would provide an essential platform for innovation and discovery. Finally, planning for post-exascale systems that achieve higher levels of performance through next-generation advances would ensure that current commercial technologies don’t limit future computing systems.

    “The advanced computing landscape is in rapid flux — technologically, economically, and politically, with both new opportunities for innovation and rising global rivalries,” says Daniel Reed, Presidential Professor and professor of computer science and electrical and computer engineering at the University of Utah. “The transformational insights from both deep learning and computational modeling depend on both continued semiconductor advances and their instantiation in leading edge, large-scale computing systems — hyperscale clouds and high-performance computing systems. Although the U.S. has historically led the world in both advanced semiconductors and high-performance computing, other nations have recognized that these capabilities are integral to 21st century economic competitiveness and national security, and they are investing heavily.”

    The research was funded, in part, through Thompson’s grant from Good Ventures, which supports his FutureTech Research Group. The paper is being published by the Georgetown Public Policy Review.

  • A new chip for decoding data transmissions demonstrates record-breaking energy efficiency

    Imagine using an online banking app to deposit money into your account. Like all information sent over the internet, those communications could be corrupted by noise that inserts errors into the data.

    To overcome this problem, senders encode data before they are transmitted, and then a receiver uses a decoding algorithm to correct errors and recover the original message. In some instances, data are received with reliability information that helps the decoder figure out which parts of a transmission are likely errors.

    Researchers at MIT and elsewhere have developed a decoder chip that employs a new statistical model to use this reliability information in a way that is much simpler and faster than conventional techniques.

    Their chip uses a universal decoding algorithm the team previously developed, which can unravel any error-correcting code. Typically, decoding hardware can only process one particular type of code. This new, universal decoder chip has broken the record for energy-efficient decoding, performing between 10 and 100 times better than other hardware.

    This advance could enable mobile devices with fewer chips, since they would no longer need separate hardware for multiple codes. This would reduce the amount of material needed for fabrication, cutting costs and improving sustainability. By making the decoding process less energy intensive, the chip could also improve device performance and lengthen battery life. It could be especially useful for demanding applications like augmented and virtual reality and 5G networks.

    “This is the first time anyone has broken below the 1 picojoule-per-bit barrier for decoding. That is roughly the same amount of energy you need to transmit a bit inside the system. It had been a big symbolic threshold, but it also changes the balance in the receiver of what might be the most pressing part from an energy perspective — we can move that away from the decoder to other elements,” says Muriel Médard, the School of Science NEC Professor of Software Science and Engineering, a professor in the Department of Electrical Engineering and Computer Science, and a co-author of a paper presenting the new chip.

    Médard’s co-authors include lead author Arslan Riaz, a graduate student at Boston University (BU); Rabia Tugce Yazicigil, assistant professor of electrical and computer engineering at BU; and Ken R. Duffy, then director of the Hamilton Institute at Maynooth University and now a professor at Northeastern University, as well as others from MIT, BU, and Maynooth University. The work is being presented at the International Solid-State Circuits Conference.

    Smarter sorting

    Digital data are transmitted over a network in the form of bits (0s and 1s). A sender encodes data by adding an error-correcting code, which is a redundant string of 0s and 1s that can be viewed as a hash. Information about this hash is held in a specific code book. A decoding algorithm at the receiver, designed for this particular code, uses its code book and the hash structure to retrieve the original information, which may have been jumbled by noise. Since each algorithm is code-specific, and most require dedicated hardware, a device would need many chips to decode different codes.

    The researchers previously demonstrated GRAND (Guessing Random Additive Noise Decoding), a universal decoding algorithm that can crack any code. GRAND works by guessing the noise that affected the transmission, subtracting that noise pattern from the received data, and then checking what remains in a code book. It guesses a series of noise patterns in the order they are likely to occur.
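
    In code, the core loop of hard-decision GRAND is short. The sketch below uses the parity-check matrix of a (7,4) Hamming code as the "code book" test and tries noise patterns in order of increasing Hamming weight, which is the likelihood order on a symmetric channel when no soft information is available. It illustrates the algorithm's structure, not the chip's implementation.

    ```python
    import itertools
    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code: H @ c mod 2 == 0 for codewords.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def is_codeword(word):
        return not np.any(H @ word % 2)

    def grand_decode(received, max_weight=3):
        """Guess noise patterns from most to least likely; stop at the first codeword."""
        n = len(received)
        for w in range(max_weight + 1):
            for flips in itertools.combinations(range(n), w):
                noise = np.zeros(n, dtype=int)
                noise[list(flips)] = 1
                candidate = (received + noise) % 2   # "subtract" the noise in GF(2)
                if is_codeword(candidate):
                    return candidate
        return None

    codeword = np.array([1, 1, 0, 1, 0, 0, 1])   # a valid (7,4) Hamming codeword
    received = codeword.copy()
    received[2] ^= 1                              # channel noise flips one bit
    print(grand_decode(received))                 # recovers the original codeword
    ```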

    Data are often received with reliability information, also called soft information, that helps a decoder figure out which pieces are errors. The new decoding chip, called ORBGRAND (Ordered Reliability Bits GRAND), uses this reliability information to sort data based on how likely each bit is to be an error.

    But it isn’t as simple as ordering single bits. While the most unreliable bit might be the likeliest error, perhaps the third and fourth most unreliable bits together are as likely to be an error as the seventh-most unreliable bit. ORBGRAND uses a new statistical model that can sort bits in this fashion, considering that multiple bits together are as likely to be an error as some single bits.
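
    The sketch below shows that ordering step in isolation: bits are ranked from least to most reliable, and candidate error patterns are enumerated in increasing order of the summed ranks of the bits they flip, so flipping the third- and fourth-least-reliable bits (3 + 4 = 7) is tried at the same point as flipping the seventh-least-reliable bit alone. The reliability values here are hypothetical.

    ```python
    import itertools
    import numpy as np

    def orbgrand_pattern_order(bit_reliabilities, max_flips=3):
        """Order error patterns by the sum of the reliability ranks they flip."""
        order = np.argsort(bit_reliabilities)     # least reliable position first
        n = len(bit_reliabilities)
        patterns = []
        for w in range(1, max_flips + 1):
            for ranks in itertools.combinations(range(1, n + 1), w):
                positions = tuple(int(order[r - 1]) for r in ranks)
                patterns.append((sum(ranks), positions))
        patterns.sort(key=lambda p: p[0])
        return [positions for _, positions in patterns]

    llr_magnitudes = [2.1, 0.3, 4.0, 0.9, 1.5, 3.2, 0.1]   # hypothetical soft information
    for pattern in orbgrand_pattern_order(llr_magnitudes)[:6]:
        print(pattern)   # bit positions to flip, most plausible patterns first
    ```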

    “If your car isn’t working, soft information might tell you that it is probably the battery. But if it isn’t the battery alone, maybe it is the battery and the alternator together that are causing the problem. This is how a rational person would troubleshoot — you’d say that it could actually be these two things together before going down the list to something that is much less likely,” Médard says.

    This is a much more efficient approach than that of traditional decoders, which would instead look at the code structure and have a performance that is generally designed for the worst case.

    “With a traditional decoder, you’d pull out the blueprint of the car and examine each and every piece. You’ll find the problem, but it will take you a long time and you’ll get very frustrated,” Médard explains.

    ORBGRAND stops sorting as soon as a code word is found, which is often very soon. The chip also employs parallelization, generating and testing multiple noise patterns simultaneously so it finds the code word faster. Because the decoder stops working once it finds the code word, its energy consumption stays low even though it runs multiple processes simultaneously.

    Record-breaking efficiency

    When they compared their approach to other chips, ORBGRAND decoded with maximum accuracy while consuming only 0.76 picojoules of energy per bit, breaking the previous performance record. ORBGRAND consumes between 10 and 100 times less energy than other devices.

    One of the biggest challenges of developing the new chip came from this reduced energy consumption, Médard says. With ORBGRAND, generating noise sequences is now so energy-efficient that other processes the researchers hadn’t focused on before, like checking the code word in a code book, consume most of the effort.

    “Now, this checking process, which is like turning on the car to see if it works, is the hardest part. So, we need to find more efficient ways to do that,” she says.

    The team is also exploring ways to change the modulation of transmissions so they can take advantage of the improved efficiency of the ORBGRAND chip. They also plan to see how their technique could be utilized to more efficiently manage multiple transmissions that overlap.

    The research is funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and Science Foundation Ireland.

  • Study: Carbon-neutral pavements are possible by 2050, but rapid policy and industry action are needed

    Almost 2.8 million lane-miles, or about 4.6 million lane-kilometers, of roads in the United States are paved.

    Roads and streets form the backbone of our built environment. They take us to work or school, take goods to their destinations, and much more.

    However, a new study by MIT Concrete Sustainability Hub (CSHub) researchers shows that the annual greenhouse gas (GHG) emissions of all construction materials used in the U.S. pavement network are 11.9 to 13.3 megatons. This is equivalent to the emissions of a gasoline-powered passenger vehicle driving about 30 billion miles in a year.
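
    For a rough sense of scale, the comparison can be checked with simple arithmetic, assuming an average gasoline passenger vehicle emits on the order of 400 grams of CO2-equivalent per mile (an assumed figure, not one from the study).

    ```python
    emissions_megatons = 12.6                  # midpoint of the 11.9-13.3 range
    grams = emissions_megatons * 1e12          # 1 megaton = 1e12 grams
    grams_per_mile = 400                       # assumed average for a gasoline car
    print(f"{grams / grams_per_mile / 1e9:.0f} billion miles")   # on the order of 30 billion
    ```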

    As roads are built, repaved, and expanded, new approaches and thoughtful material choices are necessary to dampen their carbon footprint. 

    The CSHub researchers found that, by 2050, mixtures for pavements can be made carbon-neutral if industry and governmental actors help to apply a range of solutions — like carbon capture — to reduce, avoid, and neutralize embodied impacts. (A neutralization solution is any compensation mechanism in the value chain of a product that permanently removes the global warming impact of the processes after avoiding and reducing the emissions.) Furthermore, nearly half of pavement-related greenhouse gas (GHG) savings can be achieved in the short term with a negative or nearly net-zero cost.

    The research team, led by Hessam AzariJafari, MIT CSHub’s deputy director, closed gaps in our understanding of the impacts of pavement decisions by developing a dynamic model quantifying the embodied impact of future pavement materials demand for the U.S. road network.

    The team first split the U.S. road network into 10-mile (about 16 kilometer) segments, forecasting the condition and performance of each. They then developed a pavement management system model to create benchmarks helping to understand the current level of emissions and the efficacy of different decarbonization strategies. 

    This model considered factors such as annual traffic volume and surface conditions, budget constraints, regional variation in pavement treatment choices, and pavement deterioration. The researchers also used a life-cycle assessment to calculate annual state-level emissions from acquiring pavement construction materials, considering future energy supply and materials procurement.
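
    At its core, the materials-accounting step of such a life-cycle assessment is a weighted sum: material quantities multiplied by emission factors. The sketch below shows that bookkeeping with entirely hypothetical quantities and factors; the study's model is far more detailed, covering future energy supply, procurement, and regional variation.

    ```python
    # All quantities and factors below are placeholders, not values from the study.
    emission_factors = {            # kg CO2e per tonne of material (hypothetical)
        "asphalt_mixture": 60.0,
        "concrete": 120.0,
        "aggregate": 5.0,
    }
    annual_demand_tonnes = {        # hypothetical one-year demand for one state
        "asphalt_mixture": 2.0e6,
        "concrete": 1.2e6,
        "aggregate": 5.5e6,
    }
    total_kg = sum(annual_demand_tonnes[m] * emission_factors[m] for m in emission_factors)
    print(f"{total_kg / 1e9:.2f} megatons CO2e")   # 1 megaton = 1e9 kg
    ```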

    The team considered three scenarios for the U.S. pavement network: A business-as-usual scenario in which technology remains static, a projected improvement scenario aligned with stated industry and national goals, and an ambitious improvement scenario that intensifies or accelerates projected strategies to achieve carbon neutrality. 

    If no steps are taken to decarbonize pavement mixtures, the team projected that GHG emissions of construction materials used in the U.S. pavement network would increase by 19.5 percent by 2050. Under the projected scenario, there was an estimated 38 percent embodied impact reduction for concrete and 14 percent embodied impact reduction for asphalt by 2050.

    The keys to making the pavement network carbon neutral by 2050 lie in multiple places. Fully renewable energy sources should be used for pavement materials production, transportation, and other processes. The federal government must contribute to the development of these low-carbon energy sources and carbon capture technologies, as it would be nearly impossible to achieve carbon neutrality for pavements without them. 

    Additionally, increasing pavements’ recycled content and improving their design and production efficiency can lower GHG emissions to an extent. Still, neutralization is needed to achieve carbon neutrality.

    Making the right pavement construction and repair choices would also contribute to the carbon neutrality of the network. For instance, concrete pavements can offer GHG savings across the whole life cycle as they are stiffer and stay smoother for longer, meaning they require less maintenance and have a lesser impact on the fuel efficiency of vehicles. 

    Concrete pavements have other use-phase benefits, including a cooling effect through an intrinsically high albedo, meaning they reflect more sunlight than regular pavements. Therefore, they can help combat extreme heat and beneficially affect the earth’s energy balance through negative radiative forcing, making albedo a potential neutralization mechanism.

    At the same time, a mix of fixes, including using concrete and asphalt in different contexts and proportions, could produce significant GHG savings for the pavement network; decision-makers must consider scenarios on a case-by-case basis to identify optimal solutions. 

    In addition, it may appear as though the GHG emissions of materials used in local roads are dwarfed by the emissions of interstate highway materials. However, the study found that the two road types have a similar impact. In fact, all road types contribute heavily to the total GHG emissions of pavement materials in general. Therefore, stakeholders at the federal, state, and local levels must be involved if our roads are to become carbon neutral. 

    The path to pavement network carbon-neutrality is, therefore, somewhat of a winding road. It demands regionally specific policies and widespread investment to help implement decarbonization solutions, just as renewable energy initiatives have been supported. Providing subsidies and covering the costs of premiums, too, are vital to avoid shifts in the market that would derail environmental savings.

    When planning for these shifts, we must recall that pavements have impacts not just in their production, but across their entire life cycle. As pavements are used, maintained, and eventually decommissioned, they have significant impacts on the surrounding environment.

    If we are to meet climate goals such as the Paris Agreement, which demands that we reach carbon-neutrality by 2050 to avoid the worst impacts of climate change, we — as well as industry and governmental stakeholders — must come together to take a hard look at the roads we use every day and work to reduce their life cycle emissions. 

    The study was published in the International Journal of Life Cycle Assessment. In addition to AzariJafari, the authors include Fengdi Guo of the MIT Department of Civil and Environmental Engineering; Jeremy Gregory, executive director of the MIT Climate and Sustainability Consortium; and Randolph Kirchain, director of the MIT CSHub.