More stories

  • New method accelerates data retrieval in huge databases

    Hashing is a core operation in most online databases, like a library catalogue or an e-commerce website. A hash function generates codes that directly determine the location where data will be stored, so those codes make it easy to find and retrieve the data.

    However, because traditional hash functions generate codes randomly, sometimes two pieces of data are hashed to the same value. This causes collisions: a search for one item can point to many pieces of data that share a hash value, so it takes much longer to find the right one, resulting in slower searches and reduced performance.

    Certain types of hash functions, known as perfect hash functions, are designed to place the data in a way that prevents collisions. But they are time-consuming to construct for each dataset and take more time to compute than traditional hash functions.

    Since hashing is used in so many applications, from database indexing to data compression to cryptography, fast and efficient hash functions are critical. So, researchers from MIT and elsewhere set out to see if they could use machine learning to build better hash functions.

    They found that, in certain situations, using learned models instead of traditional hash functions could result in half as many collisions. These learned models are created by running a machine-learning algorithm on a dataset to capture specific characteristics. The team’s experiments also showed that learned models were often more computationally efficient than perfect hash functions.

    “What we found in this work is that in some situations we can come up with a better tradeoff between the computation of the hash function and the collisions we will face. In these situations, the computation time for the hash function can be increased a bit, but at the same time its collisions can be reduced very significantly,” says Ibrahim Sabek, a postdoc in the MIT Data Systems Group of the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    Their research, which will be presented at the 2023 International Conference on Very Large Databases, demonstrates how a hash function can be designed to significantly speed up searches in a huge database. For instance, their technique could accelerate computational systems that scientists use to store and analyze DNA, amino acid sequences, or other biological information.

    Sabek is the co-lead author of the paper with Department of Electrical Engineering and Computer Science (EECS) graduate student Kapil Vaidya. They are joined by co-authors Dominick Horn, a graduate student at the Technical University of Munich; Andreas Kipf, an MIT postdoc; Michael Mitzenmacher, professor of computer science at the Harvard John A. Paulson School of Engineering and Applied Sciences; and senior author Tim Kraska, associate professor of EECS at MIT and co-director of the Data, Systems, and AI Lab.

    Hashing it out

    Given a data input, or key, a traditional hash function generates a random number, or code, that corresponds to the slot where that key will be stored. To use a simple example, if there are 10 keys to be put into 10 slots, the function would generate a random integer between 1 and 10 for each input. It is highly probable that two keys will end up in the same slot, causing collisions.
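
    To make “highly probable” concrete, the quick calculation below (a standard birthday-problem computation, not something from the paper) gives the chance that at least two of those 10 randomly hashed keys end up sharing a slot.

        from math import perm

        # Probability that at least two of 10 keys collide when each key is assigned
        # one of 10 slots uniformly at random (the classic birthday-problem calculation).
        n_keys = n_slots = 10
        p_no_collision = perm(n_slots, n_keys) / n_slots**n_keys   # 10!/10**10
        print(f"P(at least one collision) = {1 - p_no_collision:.4f}")   # about 0.9996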

    Perfect hash functions provide a collision-free alternative. Researchers give the function some extra knowledge, such as the number of slots the data are to be placed into. Then it can perform additional computations to figure out where to put each key to avoid collisions. However, these added computations make the function harder to create and less efficient.

    “We were wondering, if we know more about the data — that it will come from a particular distribution — can we use learned models to build a hash function that can actually reduce collisions?” Vaidya says.

    A data distribution shows all possible values in a dataset, and how often each value occurs. The distribution can be used to calculate the probability that a particular value is in a data sample.

    The researchers took a small sample from a dataset and used machine learning to approximate the shape of the data’s distribution, or how the data are spread out. The learned model then uses the approximation to predict the location of a key in the dataset.
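
    The sketch below illustrates the idea (it is not the team’s implementation): a piecewise-linear approximation of the cumulative distribution function (CDF), fit to a small sample, maps each key to a slot in proportion to its predicted CDF value, and the resulting collision ratio is compared with random slot assignment. The dataset, sample size, and table size are arbitrary choices made for illustration.

        import numpy as np

        rng = np.random.default_rng(0)
        num_keys = num_slots = 100_000
        keys = rng.lognormal(mean=10, sigma=1, size=num_keys)   # skewed but smoothly distributed data

        # "Learn" the distribution from a small sample: here, a piecewise-linear CDF
        # approximation (a stand-in for the paper's learned models).
        sample = np.sort(rng.choice(keys, size=1_000, replace=False))
        sample_cdf = np.linspace(0.0, 1.0, sample.size)

        def collision_ratio(slots):
            """Fraction of keys that landed in an already-occupied slot."""
            return 1.0 - np.unique(slots).size / num_keys

        # Learned hash: the predicted CDF value, scaled to the table size.
        predicted_cdf = np.interp(keys, sample, sample_cdf)
        learned_slots = np.clip((predicted_cdf * num_slots).astype(int), 0, num_slots - 1)

        # Traditional hash: effectively a random slot for each key.
        random_slots = rng.integers(0, num_slots, size=num_keys)

        print(f"random-hash collision ratio:  {collision_ratio(random_slots):.2f}")
        print(f"learned-hash collision ratio: {collision_ratio(learned_slots):.2f}")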

    They found that learned models were easier to build and faster to run than perfect hash functions and that they led to fewer collisions than traditional hash functions if data are distributed in a predictable way. But if the data are not predictably distributed because gaps between data points vary too widely, using learned models might cause more collisions.

    “We may have a huge number of data inputs, and the gaps between consecutive inputs are very different, so learning a model to capture the data distribution of these inputs is quite difficult,” Sabek explains.

    Fewer collisions, faster results

    When data were predictably distributed, learned models could reduce the ratio of colliding keys in a dataset from 30 percent to 15 percent, compared with traditional hash functions. They were also able to achieve better throughput than perfect hash functions. In the best cases, learned models reduced the runtime by nearly 30 percent.

    As they explored the use of learned models for hashing, the researchers also found that throughput was impacted most by the number of sub-models. Each learned model is composed of smaller linear models that approximate the data distribution for different parts of the data. With more sub-models, the learned model produces a more accurate approximation, but it takes more time.

    “At a certain threshold of sub-models, you get enough information to build the approximation that you need for the hash function. But after that, it won’t lead to more improvement in collision reduction,” Sabek says.

    Building off this analysis, the researchers want to use learned models to design hash functions for other types of data. They also plan to explore learned hashing for databases in which data can be inserted or deleted. When data are updated in this way, the model needs to change accordingly, but changing the model while maintaining accuracy is a difficult problem.

    “We want to encourage the community to use machine learning inside more fundamental data structures and algorithms. Any kind of core data structure presents us with an opportunity to use machine learning to capture data properties and get better performance. There is still a lot we can explore,” Sabek says.

    “Hashing and indexing functions are core to a lot of database functionality. Given the variety of users and use cases, there is no one size fits all hashing, and learned models help adapt the database to a specific user. This paper is a great balanced analysis of the feasibility of these new techniques and does a good job of talking rigorously about the pros and cons, and helps us build our understanding of when such methods can be expected to work well,” says Murali Narayanaswamy, a principal machine learning scientist at Amazon, who was not involved with this work. “Exploring these kinds of enhancements is an exciting area of research both in academia and industry, and the kind of rigor shown in this work is critical for these methods to have large impact.”

    This work was supported, in part, by Google, Intel, Microsoft, the U.S. National Science Foundation, the U.S. Air Force Research Laboratory, and the U.S. Air Force Artificial Intelligence Accelerator.

  • Large language models are biased. Can logic help save them?

    Turns out, even language models “think” they’re biased. When ChatGPT was prompted on the subject, the response was as follows: “Yes, language models can have biases, because the training data reflects the biases present in society from which that data was collected. For example, gender and racial biases are prevalent in many real-world datasets, and if a language model is trained on that, it can perpetuate and amplify these biases in its predictions.” A well-known but dangerous problem.

    Humans (typically) can dabble with both logical and stereotypical reasoning when learning. Still, language models mainly mimic the latter, an unfortunate narrative we’ve seen play out ad nauseam when the ability to employ reasoning and critical thinking is absent. So would injecting logic into the fray be enough to mitigate such behavior? 

    Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) had an inkling that it might, so they set off to examine whether logic-aware language models could significantly reduce harmful stereotypes. They trained a language model to predict the relationship between two sentences, based on context and semantic meaning, using a dataset with labels for text snippets detailing whether a second phrase “entails,” “contradicts,” or is neutral with respect to the first one. Using this natural language inference dataset, they found that the newly trained models were significantly less biased than other baselines, without any extra data, data editing, or additional training algorithms.

    For example, with the premise “the person is a doctor” and the hypothesis “the person is masculine,” using these logic-trained models, the relationship would be classified as “neutral,” since there’s no logic that says the person is a man. With more common language models, two sentences might seem to be correlated due to some bias in training data, like “doctor” might be pinged with “masculine,” even when there’s no evidence that the statement is true. 
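
    The snippet below sketches that premise/hypothesis setup with an off-the-shelf natural language inference model from the Hugging Face transformers library (“roberta-large-mnli” here, not the authors’ 350-million-parameter model), simply to illustrate the entailment/neutral/contradiction interface; an ordinary pretrained NLI model may still carry the biases discussed above.

        import torch
        from transformers import AutoModelForSequenceClassification, AutoTokenizer

        # An off-the-shelf NLI model, used only to illustrate the interface described above.
        model_name = "roberta-large-mnli"
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForSequenceClassification.from_pretrained(model_name)

        premise = "The person is a doctor."
        hypothesis = "The person is masculine."

        inputs = tokenizer(premise, hypothesis, return_tensors="pt")
        with torch.no_grad():
            logits = model(**inputs).logits

        probs = logits.softmax(dim=-1).squeeze()
        for idx, label in model.config.id2label.items():
            print(f"{label:>13}: {probs[idx].item():.3f}")  # an unbiased model should favor "neutral" here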

    At this point, the omnipresent nature of language models is well-known: Applications in natural language processing, speech recognition, conversational AI, and generative tasks abound. While not a nascent field of research, growing pains can take a front seat as they increase in complexity and capability. 

    “Current language models suffer from issues with fairness, computational resources, and privacy,” says MIT CSAIL postdoc Hongyin Luo, the lead author of a new paper about the work. “Many estimates say that the CO2 emission of training a language model can be higher than the lifelong emission of a car. Running these large language models is also very expensive because of the amount of parameters and the computational resources they need. With privacy, state-of-the-art language models developed by places like ChatGPT or GPT-3 have their APIs where you must upload your language, but there’s no place for sensitive information regarding things like health care or finance. To solve these challenges, we proposed a logical language model that we qualitatively measured as fair, is 500 times smaller than the state-of-the-art models, can be deployed locally, and with no human-annotated training samples for downstream tasks. Our model uses 1/400 the parameters compared with the largest language models, has better performance on some tasks, and significantly saves computation resources.” 

    This model, which has 350 million parameters, outperformed some very large-scale language models with 100 billion parameters on logic-language understanding tasks. The team compared, for example, popular BERT pretrained language models with their “textual entailment” models on stereotype, profession, and emotion bias tests. The latter outperformed other models with significantly lower bias, while preserving the language modeling ability. “Fairness” was evaluated with something called ideal context association (iCAT) tests, where higher iCAT scores mean fewer stereotypes. The model had higher than 90 percent iCAT scores, while other strong language understanding models ranged between 40 and 80 percent.

    Luo wrote the paper alongside MIT Senior Research Scientist James Glass. They will present the work at the Conference of the European Chapter of the Association for Computational Linguistics in Croatia. 

    Unsurprisingly, the original pretrained language models the team examined were teeming with bias, confirmed by a slew of reasoning tests demonstrating how profession and emotion terms are significantly biased toward the feminine or masculine words in the gender vocabulary.

    With professions, a language model (which is biased) thinks that “flight attendant,” “secretary,” and “physician’s assistant” are feminine jobs, while “fisherman,” “lawyer,” and “judge” are masculine. Concerning emotions, a language model thinks that “anxious,” “depressed,” and “devastated” are feminine.

    While we may still be far away from a neutral language model utopia, this research is ongoing in that pursuit. Currently, the model is just for language understanding, so it’s based on reasoning among existing sentences. Unfortunately, it can’t generate sentences for now, so the next step for the researchers would be targeting the uber-popular generative models built with logical learning to ensure more fairness with computational efficiency. 

    “Although stereotypical reasoning is a natural part of human recognition, fairness-aware people conduct reasoning with logic rather than stereotypes when necessary,” says Luo. “We show that language models have similar properties. A language model without explicit logic learning makes plenty of biased reasoning, but adding logic learning can significantly mitigate such behavior. Furthermore, with demonstrated robust zero-shot adaptation ability, the model can be directly deployed to different tasks with more fairness, privacy, and better speed.”

  • Report: CHIPS Act just the first step in addressing threats to US leadership in advanced computing

    When Liu He, a Chinese economist, politician, and “chip czar,” was tapped to lead the charge in a chipmaking arms race with the United States, his message lingered in the air, leaving behind a dewy glaze of tension: “For our country, technology is not just for growth… it is a matter of survival.”

    Once upon a time, the United States’ early technological prowess positioned the nation to outpace foreign rivals and cultivate a competitive advantage for domestic businesses. Yet, 30 years later, America’s lead in advanced computing is continuing to wane. What happened?

    A new report from an MIT researcher and two colleagues sheds light on the decline in U.S. leadership. The scientists looked at high-level measures to examine the shrinkage: overall capabilities, supercomputers, applied algorithms, and semiconductor manufacturing. Through their analysis, they found that not only has China closed the computing gap with the U.S., but nearly 80 percent of American leaders in the field believe that their Chinese competitors are improving capabilities faster — which, the team says, suggests a “broad threat to U.S. competitiveness.”

    To delve deeply into the fray, the scientists conducted the Advanced Computing Users Survey, sampling 120 top-tier organizations, including universities, national labs, federal agencies, and industry. The team estimates that this group comprises between one-third and one-half of all the most significant computing users in the United States.

    “Advanced computing is crucial to scientific improvement, economic growth and the competitiveness of U.S. companies,” says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), who helped lead the study.

    Thompson, who is also a principal investigator at MIT’s Initiative on the Digital Economy, wrote the paper with Chad Evans, executive vice president and secretary and treasurer to the board at the Council on Competitiveness, and Daniel Armbrust, who is the co-founder, initial CEO, and member of the board of directors at Silicon Catalyst and former president of SEMATECH, the semiconductor consortium that developed industry roadmaps.

    The semiconductor, supercomputer, and algorithm bonanza

    Supercomputers — the room-sized, “giant calculators” of the hardware world — are an industry no longer dominated by the United States. Through 2015, about half of the most powerful computers were sitting firmly in the U.S., while China was growing slowly from a very small base. But in the past six years, China has swiftly caught up, reaching near parity with America.

    This disappearing lead matters. Eighty-four percent of U.S. survey respondents said they’re computationally constrained in running essential programs. “This result was telling, given who our respondents are: the vanguard of American research enterprises and academic institutions with privileged access to advanced national supercomputing resources,” says Thompson. 

    With regard to advanced algorithms, the U.S. has historically led the charge, with two-thirds of all significant improvements coming from U.S.-born inventors. But in recent decades, U.S. dominance in algorithms has relied on bringing in foreign talent to work in the U.S., which the researchers say is now in jeopardy. China has outpaced the U.S. and many other countries in churning out PhDs in STEM fields since 2007, with one report projecting that by 2025 China will be home to nearly twice as many PhDs as the U.S. China’s rise in algorithms can also be seen in the Gordon Bell Prize, an award for outstanding work in harnessing the power of supercomputers in varied applications. U.S. winners historically dominated the prize, but China has equaled or surpassed Americans’ performance in the past five years.

    While the researchers note the CHIPS and Science Act of 2022 is a critical step in re-establishing the foundation of success for advanced computing, they propose recommendations to the U.S. Office of Science and Technology Policy. 

    First, they suggest democratizing access to U.S. supercomputing by building more mid-tier systems that push boundaries for many users, as well as building tools so users scaling up computations require less up-front resource investment. They also recommend increasing the pool of innovators by funding the training of many more electrical engineers and computer scientists, with longer-term U.S. residency incentives and scholarships. Finally, in addition to this new framework, the scientists urge taking advantage of what already exists, by providing the private sector access to experimentation with high-performance computing through supercomputing sites in academia and national labs.

    All that and a bag of chips

    Computing improvements depend on continuous advances in transistor density and performance, but creating robust new chips necessitates a harmonious blend of design and manufacturing.

    China has not historically been known for designing noteworthy chips; in fact, over the past five decades, the U.S. designed most of them. But this changed in the past six years, when China created the HiSilicon Kirin 9000, propelling itself to the international frontier. This success was mainly obtained through partnerships with leading global chip designers that began in the 2000s. China now has 14 companies among the world’s top 50 fabless designers; a decade ago, there was only one.

    Competitive semiconductor manufacturing has been more mixed: U.S.-led policies and internal execution issues have slowed China’s rise, but as of July 2022, the Semiconductor Manufacturing International Corporation (SMIC) has shown evidence of 7-nanometer logic, which was not expected until much later. However, with export restrictions on extreme ultraviolet lithography, progress below 7 nm will require expensive domestic technology development. Currently, China is at parity or better in only two of 12 segments of the semiconductor supply chain. Still, with government policy and investments, the team expects a whopping increase to seven segments within 10 years. So, for the moment, the U.S. retains leadership in hardware manufacturing, but with fewer dimensions of advantage.

    The authors recommend that the White House Office of Science and Technology Policy work with key national agencies, such as the U.S. Department of Defense, U.S. Department of Energy, and the National Science Foundation, to define initiatives to build the hardware and software systems needed for important computing paradigms and workloads critical for economic and security goals. “It is crucial that American enterprises can get the benefit of faster computers,” says Thompson. “With Moore’s Law slowing down, the best way to do this is to create a portfolio of specialized chips (or “accelerators”) that are customized to our needs.”

    The scientists further believe that to lead the next generation of computing, four areas must be addressed. First, by issuing grand challenges to the CHIPS Act National Semiconductor Technology Center, researchers and startups would be motivated to invest in research and development and to seek startup capital for new technologies in areas such as spintronics, neuromorphics, optical and quantum computing, and optical interconnect fabrics. Second, by supporting allies in passing similar acts, overall investment in these technologies would increase, and supply chains would become more aligned and secure. Third, establishing test beds for researchers to test algorithms on new computing architectures and hardware would provide an essential platform for innovation and discovery. Finally, planning for post-exascale systems that achieve higher levels of performance through next-generation advances would ensure that current commercial technologies don’t limit future computing systems.

    “The advanced computing landscape is in rapid flux — technologically, economically, and politically, with both new opportunities for innovation and rising global rivalries,” says Daniel Reed, Presidential Professor and professor of computer science and electrical and computer engineering at the University of Utah. “The transformational insights from both deep learning and computational modeling depend on both continued semiconductor advances and their instantiation in leading edge, large-scale computing systems — hyperscale clouds and high-performance computing systems. Although the U.S. has historically led the world in both advanced semiconductors and high-performance computing, other nations have recognized that these capabilities are integral to 21st century economic competitiveness and national security, and they are investing heavily.”

    The research was funded, in part, through Thompson’s grant from Good Ventures, which supports his FutureTech Research Group. The paper is being published by the Georgetown Public Policy Review.

  • Improving health outcomes by targeting climate and air pollution simultaneously

    Climate policies are typically designed to reduce greenhouse gas emissions that result from human activities and drive climate change. The largest source of these emissions is the combustion of fossil fuels, which increases atmospheric concentrations of ozone, fine particulate matter (PM2.5) and other air pollutants that pose public health risks. While climate policies may result in lower concentrations of health-damaging air pollutants as a “co-benefit” of reducing greenhouse gas emissions-intensive activities, they are most effective at improving health outcomes when deployed in tandem with geographically targeted air-quality regulations.

    Yet the computer models typically used to assess the likely air quality/health impacts of proposed climate/air-quality policy combinations come with drawbacks for decision-makers. Atmospheric chemistry/climate models can produce high-resolution results, but they are expensive and time-consuming to run. Integrated assessment models are far cheaper and faster to run, but they produce results only at global and regional scales, rendering them insufficiently precise for accurate assessments of air quality/health impacts at the subnational level.

    To overcome these drawbacks, a team of researchers at MIT and the University of California at Davis has developed a climate/air-quality policy assessment tool that is both computationally efficient and location-specific. Described in a new study in the journal ACS Environmental Au, the tool could enable users to obtain rapid estimates of combined policy impacts on air quality/health at more than 1,500 locations around the globe — estimates precise enough to reveal the equity implications of proposed policy combinations within a particular region.
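
    As a rough illustration of how a reduced-form tool can return location-specific estimates in seconds (this is a generic sketch, not the authors’ model), precomputed sensitivities linking emissions changes to pollutant concentrations at each location can be combined with population counts and a concentration-response factor. Every number below is a hypothetical placeholder.

        import numpy as np

        # Hypothetical precomputed sensitivities: change in annual-average PM2.5 (ug/m3)
        # at each location per megaton of emissions reduced in each sector.
        sensitivity = np.array([[0.08, 0.03],    # location A: [power, transport]
                                [0.02, 0.06],    # location B
                                [0.05, 0.05]])   # location C
        emission_cuts = np.array([1.5, 0.8])     # Mt reduced per sector under a candidate policy mix

        population = np.array([2.0e6, 5.0e5, 1.2e6])   # people at each location
        beta = 1.0e-4                                  # avoided deaths per person per ug/m3 (placeholder)

        delta_pm25 = sensitivity @ emission_cuts       # concentration change at each location
        avoided_deaths = population * beta * delta_pm25

        for name, value in zip(["A", "B", "C"], avoided_deaths):
            print(f"location {name}: ~{value:.0f} avoided premature deaths per year (illustrative)")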

    “The modeling approach described in this study may ultimately allow decision-makers to assess the efficacy of multiple combinations of climate and air-quality policies in reducing the health impacts of air pollution, and to design more effective policies,” says Sebastian Eastham, the study’s lead author and a principal research scientist at the MIT Joint Program on the Science and Policy of Global Change. “It may also be used to determine if a given policy combination would result in equitable health outcomes across a geographical area of interest.”

    To demonstrate the efficiency and accuracy of their policy assessment tool, the researchers showed that outcomes projected by the tool within seconds were consistent with region-specific results from detailed chemistry/climate models that took days or even months to run. While continuing to refine and develop their approaches, they are now working to embed the new tool into integrated assessment models for direct use by policymakers.

    “As decision-makers implement climate policies in the context of other sustainability challenges like air pollution, efficient modeling tools are important for assessment — and new computational techniques allow us to build faster and more accurate tools to provide credible, relevant information to a broader range of users,” says Noelle Selin, a professor at MIT’s Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences, and supervising author of the study. “We are looking forward to further developing such approaches, and to working with stakeholders to ensure that they provide timely, targeted and useful assessments.”

    The study was funded, in part, by the U.S. Environmental Protection Agency and the Biogen Foundation.

  • A new chip for decoding data transmissions demonstrates record-breaking energy efficiency

    Imagine using an online banking app to deposit money into your account. Like all information sent over the internet, those communications could be corrupted by noise that inserts errors into the data.

    To overcome this problem, senders encode data before they are transmitted, and then a receiver uses a decoding algorithm to correct errors and recover the original message. In some instances, data are received with reliability information that helps the decoder figure out which parts of a transmission are likely errors.

    Researchers at MIT and elsewhere have developed a decoder chip that employs a new statistical model to use this reliability information in a way that is much simpler and faster than conventional techniques.

    Their chip uses a universal decoding algorithm the team previously developed, which can unravel any error correcting code. Typically, decoding hardware can only process one particular type of code. This new, universal decoder chip has broken the record for energy-efficient decoding, performing between 10 and 100 times better than other hardware.

    This advance could enable mobile devices with fewer chips, since they would no longer need separate hardware for multiple codes. This would reduce the amount of material needed for fabrication, cutting costs and improving sustainability. By making the decoding process less energy intensive, the chip could also improve device performance and lengthen battery life. It could be especially useful for demanding applications like augmented and virtual reality and 5G networks.

    “This is the first time anyone has broken below the 1 picojoule-per-bit barrier for decoding. That is roughly the same amount of energy you need to transmit a bit inside the system. It had been a big symbolic threshold, but it also changes the balance in the receiver of what might be the most pressing part from an energy perspective — we can move that away from the decoder to other elements,” says Muriel Médard, the School of Science NEC Professor of Software Science and Engineering, a professor in the Department of Electrical Engineering and Computer Science, and a co-author of a paper presenting the new chip.

    Médard’s co-authors include lead author Arslan Riaz, a graduate student at Boston University (BU); Rabia Tugce Yazicigil, assistant professor of electrical and computer engineering at BU; and Ken R. Duffy, then director of the Hamilton Institute at Maynooth University and now a professor at Northeastern University, as well as others from MIT, BU, and Maynooth University. The work is being presented at the International Solid-State Circuits Conference.

    Smarter sorting

    Digital data are transmitted over a network in the form of bits (0s and 1s). A sender encodes data by adding an error-correcting code, which is a redundant string of 0s and 1s that can be viewed as a hash. Information about this hash is held in a specific code book. A decoding algorithm at the receiver, designed for this particular code, uses its code book and the hash structure to retrieve the original information, which may have been jumbled by noise. Since each algorithm is code-specific, and most require dedicated hardware, a device would need many chips to decode different codes.

    The researchers previously demonstrated GRAND (Guessing Random Additive Noise Decoding), a universal decoding algorithm that can crack any code. GRAND works by guessing the noise that affected the transmission, subtracting that noise pattern from the received data, and then checking what remains in a code book. It guesses a series of noise patterns in the order they are likely to occur.
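
    The toy example below illustrates that guess-and-check loop on a small (7,4) Hamming code. It captures the idea of testing noise patterns from most to least likely, not the team’s hardware implementation, and it uses hard decisions only (no reliability information yet).

        from itertools import combinations

        import numpy as np

        # Toy (7,4) Hamming code; the set of valid codewords serves as the "code book".
        G = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        codebook = {tuple(int(b) for b in (np.array(msg) @ G) % 2) for msg in np.ndindex(2, 2, 2, 2)}

        def grand_decode(received, max_flips=3):
            """Guess noise patterns from most to least likely (fewest flipped bits first),
            remove each guess from the received word, and stop at the first code-book hit."""
            n = len(received)
            for weight in range(max_flips + 1):          # weight 0 = "no noise" is checked first
                for flips in combinations(range(n), weight):
                    guess = list(received)
                    for i in flips:
                        guess[i] ^= 1                    # undo the guessed bit flips
                    if tuple(guess) in codebook:
                        return tuple(guess)
            return None                                  # gave up: noise heavier than max_flips

        sent = next(iter(codebook))
        received = list(sent)
        received[2] ^= 1                                 # channel noise flips one bit
        print(grand_decode(received) == sent)            # True: the original codeword is recovered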

    Data are often received with reliability information, also called soft information, that helps a decoder figure out which pieces are errors. The new decoding chip, called ORBGRAND (Ordered Reliability Bits GRAND), uses this reliability information to sort data based on how likely each bit is to be an error.

    But it isn’t as simple as ordering single bits. While the most unreliable bit might be the likeliest error, perhaps the third and fourth most unreliable bits together are as likely to be an error as the seventh-most unreliable bit. ORBGRAND uses a new statistical model that can sort bits in this fashion, considering that multiple bits together are as likely to be an error as some single bits.
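
    One simplified way to realize that ordering (a sketch of the idea, not the chip’s implementation) is to rank bits from least to most reliable and enumerate flip patterns by the sum of the ranks involved, so that, as in the example above, flipping the third- and fourth-least-reliable bits (3 + 4 = 7) carries the same weight as flipping the seventh-least-reliable bit alone.

        from itertools import combinations

        def orbgrand_style_order(reliabilities, max_patterns=12):
            """Return candidate sets of bit positions to flip, ordered by the sum of the
            reliability ranks they involve (rank 1 = least reliable bit); lower sums first."""
            n = len(reliabilities)
            ranked = sorted(range(n), key=lambda i: reliabilities[i])   # positions, least reliable first
            patterns = []
            for size in range(n + 1):                                   # size 0 = flip nothing
                for ranks in combinations(range(1, n + 1), size):
                    positions = tuple(sorted(ranked[r - 1] for r in ranks))
                    patterns.append((sum(ranks), positions))
            patterns.sort(key=lambda item: item[0])
            return [positions for _, positions in patterns[:max_patterns]]

        # Six received bits with per-bit reliability scores (higher = more trustworthy).
        reliability = [0.9, 0.2, 0.7, 0.4, 0.95, 0.6]
        for pattern in orbgrand_style_order(reliability):
            print(pattern)   # () first, then single low-rank flips, then small combinations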

    “If your car isn’t working, soft information might tell you that it is probably the battery. But if it isn’t the battery alone, maybe it is the battery and the alternator together that are causing the problem. This is how a rational person would troubleshoot — you’d say that it could actually be these two things together before going down the list to something that is much less likely,” Médard says.

    This is a much more efficient approach than that of traditional decoders, which instead look at the code structure and have performance that is generally designed for the worst case.

    “With a traditional decoder, you’d pull out the blueprint of the car and examine each and every piece. You’ll find the problem, but it will take you a long time and you’ll get very frustrated,” Médard explains.

    ORBGRAND stops sorting as soon as a code word is found, which is often very soon. The chip also employs parallelization, generating and testing multiple noise patterns simultaneously so it finds the code word faster. Because the decoder stops working once it finds the code word, its energy consumption stays low even though it runs multiple processes simultaneously.

    Record-breaking efficiency

    When they compared their approach to other chips, ORBGRAND decoded with maximum accuracy while consuming only 0.76 picojoules of energy per bit, breaking the previous performance record. ORBGRAND consumes between 10 and 100 times less energy than other devices.

    One of the biggest challenges of developing the new chip came from this reduced energy consumption, Médard says. With ORBGRAND, generating noise sequences is now so energy-efficient that other processes the researchers hadn’t focused on before, like checking the code word in a code book, consume most of the effort.

    “Now, this checking process, which is like turning on the car to see if it works, is the hardest part. So, we need to find more efficient ways to do that,” she says.

    The team is also exploring ways to change the modulation of transmissions so they can take advantage of the improved efficiency of the ORBGRAND chip. They also plan to see how their technique could be utilized to more efficiently manage multiple transmissions that overlap.

    The research is funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and Science Foundation Ireland.

  • Study: Carbon-neutral pavements are possible by 2050, but rapid policy and industry action are needed

    Almost 2.8 million lane-miles, or about 4.6 million lane-kilometers, of roads in the United States are paved.

    Roads and streets form the backbone of our built environment. They take us to work or school, take goods to their destinations, and much more.

    However, a new study by MIT Concrete Sustainability Hub (CSHub) researchers shows that the annual greenhouse gas (GHG) emissions of all construction materials used in the U.S. pavement network are 11.9 to 13.3 megatons. This is equivalent to the emissions of a gasoline-powered passenger vehicle driving about 30 billion miles in a year.

    As roads are built, repaved, and expanded, new approaches and thoughtful material choices are necessary to dampen their carbon footprint. 

    The CSHub researchers found that, by 2050, mixtures for pavements can be made carbon-neutral if industry and governmental actors help to apply a range of solutions — like carbon capture — to reduce, avoid, and neutralize embodied impacts. (A neutralization solution is any compensation mechanism in a product’s value chain that permanently removes the global warming impact of the emissions that remain after avoidance and reduction measures.) Furthermore, nearly half of pavement-related greenhouse gas (GHG) savings can be achieved in the short term at a negative or nearly net-zero cost.

    The research team, led by Hessam AzariJafari, MIT CSHub’s deputy director, closed gaps in our understanding of the impacts of pavement decisions by developing a dynamic model quantifying the embodied impact of future pavement materials demand for the U.S. road network.

    The team first split the U.S. road network into 10-mile (about 16 kilometer) segments, forecasting the condition and performance of each. They then developed a pavement management system model to create benchmarks helping to understand the current level of emissions and the efficacy of different decarbonization strategies. 

    This model considered factors such as annual traffic volume and surface conditions, budget constraints, regional variation in pavement treatment choices, and pavement deterioration. The researchers also used a life-cycle assessment to calculate annual state-level emissions from acquiring pavement construction materials, considering future energy supply and materials procurement.
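
    The bottom-up accounting behind such a model comes down to summing, over road segments, the material quantities demanded by each treatment multiplied by emission factors for producing and delivering those materials. The toy calculation below shows only that arithmetic; every quantity and factor is a hypothetical placeholder, not a value from the study.

        # (lane_miles_treated, material, tons_of_material_per_lane_mile) -- placeholder values
        segments = [
            (120.0, "asphalt", 900.0),
            (45.0, "concrete", 1100.0),
        ]

        # kg CO2-eq per ton of material produced and delivered -- placeholder values
        emission_factor = {"asphalt": 55.0, "concrete": 120.0}

        total_kg = sum(lane_miles * tons_per_lane_mile * emission_factor[material]
                       for lane_miles, material, tons_per_lane_mile in segments)

        print(f"embodied GHG for this hypothetical program: {total_kg / 1e6:.2f} kilotons CO2-eq")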

    The team considered three scenarios for the U.S. pavement network: A business-as-usual scenario in which technology remains static, a projected improvement scenario aligned with stated industry and national goals, and an ambitious improvement scenario that intensifies or accelerates projected strategies to achieve carbon neutrality. 

    If no steps are taken to decarbonize pavement mixtures, the team projected that GHG emissions of construction materials used in the U.S. pavement network would increase by 19.5 percent by 2050. Under the projected scenario, there was an estimated 38 percent embodied impact reduction for concrete and 14 percent embodied impact reduction for asphalt by 2050.

    The keys to making the pavement network carbon neutral by 2050 lie in multiple places. Fully renewable energy sources should be used for pavement materials production, transportation, and other processes. The federal government must contribute to the development of these low-carbon energy sources and carbon capture technologies, as it would be nearly impossible to achieve carbon neutrality for pavements without them. 

    Additionally, increasing pavements’ recycled content and improving their design and production efficiency can lower GHG emissions to an extent. Still, neutralization is needed to achieve carbon neutrality.

    Making the right pavement construction and repair choices would also contribute to the carbon neutrality of the network. For instance, concrete pavements can offer GHG savings across the whole life cycle as they are stiffer and stay smoother for longer, meaning they require less maintenance and have a lesser impact on the fuel efficiency of vehicles. 

    Concrete pavements have other use-phase benefits, including a cooling effect from their intrinsically high albedo, meaning they reflect more sunlight than regular pavements. They can therefore help combat extreme heat and beneficially affect the earth’s energy balance through the resulting radiative forcing, making albedo a potential neutralization mechanism.

    At the same time, a mix of fixes, including using concrete and asphalt in different contexts and proportions, could produce significant GHG savings for the pavement network; decision-makers must consider scenarios on a case-by-case basis to identify optimal solutions. 

    In addition, it may appear as though the GHG emissions of materials used in local roads are dwarfed by the emissions of interstate highway materials. However, the study found that the two road types have a similar impact. In fact, all road types contribute heavily to the total GHG emissions of pavement materials in general. Therefore, stakeholders at the federal, state, and local levels must be involved if our roads are to become carbon neutral. 

    The path to pavement network carbon-neutrality is, therefore, somewhat of a winding road. It demands regionally specific policies and widespread investment to help implement decarbonization solutions, just as renewable energy initiatives have been supported. Providing subsidies and covering the costs of premiums, too, are vital to avoid shifts in the market that would derail environmental savings.

    When planning for these shifts, we must recall that pavements have impacts not just in their production, but across their entire life cycle. As pavements are used, maintained, and eventually decommissioned, they have significant impacts on the surrounding environment.

    If we are to meet climate goals such as the Paris Agreement, which demands that we reach carbon-neutrality by 2050 to avoid the worst impacts of climate change, we — as well as industry and governmental stakeholders — must come together to take a hard look at the roads we use every day and work to reduce their life cycle emissions. 

    The study was published in the International Journal of Life Cycle Assessment. In addition to AzariJafari, the authors include Fengdi Guo of the MIT Department of Civil and Environmental Engineering; Jeremy Gregory, executive director of the MIT Climate and Sustainability Consortium; and Randolph Kirchain, director of the MIT CSHub.

  • A new way for quantum computing systems to keep their cool

    Heat causes errors in the qubits that are the building blocks of a quantum computer, so quantum systems are typically kept inside refrigerators that keep the temperature just above absolute zero (-459 degrees Fahrenheit).

    But quantum computers need to communicate with electronics outside the refrigerator, in a room-temperature environment. The metal cables that connect these electronics bring heat into the refrigerator, which has to work even harder and draw extra power to keep the system cold. Plus, more qubits require more cables, so the size of a quantum system is limited by how much heat the fridge can remove.

    To overcome this challenge, an interdisciplinary team of MIT researchers has developed a wireless communication system that enables a quantum computer to send and receive data to and from electronics outside the refrigerator using high-speed terahertz waves.

    A transceiver chip placed inside the fridge can receive and transmit data. Terahertz waves generated outside the refrigerator are beamed in through a glass window. Data encoded onto these waves can be received by the chip. The chip also acts as a mirror, encoding data from the qubits onto the terahertz waves it reflects back to their source.

    This reflection process also bounces back much of the power sent into the fridge, so the process generates only a minimal amount of heat. The contactless communication system consumes up to 10 times less power than systems with metal cables.

    “By having this reflection mode, you really save the power consumption inside the fridge and leave all those dirty jobs on the outside. While this is still just a preliminary prototype and we have some room to improve, even at this point, we have shown low power consumption inside the fridge that is already better than metallic cables. I believe this could be a way to build large-scale quantum systems,” says senior author Ruonan Han, an associate professor in the Department of Electrical Engineering and Computer Science (EECS) who leads the Terahertz Integrated Electronics Group.

    Han and his team, with expertise in terahertz waves and electronic devices, joined forces with associate professor Dirk Englund and the Quantum Photonics Laboratory team, who provided quantum engineering expertise and joined in conducting the cryogenic experiments.

    Joining Han and Englund on the paper are first author and EECS graduate student Jinchen Wang; Mohamed Ibrahim PhD ’21; Isaac Harris, a graduate student in the Quantum Photonics Laboratory; Nathan M. Monroe PhD ’22; Wasiq Khan PhD ’22; and Xiang Yi, a former postdoc who is now a professor at the South China University of Technology. The paper will be presented at the International Solid-State Circuits Conference.

    Tiny mirrors

    The researchers’ square transceiver chip, measuring about 2 millimeters on each side, is placed on a quantum computer inside the refrigerator, which is called a cryostat because it maintains cryogenic temperatures. These super-cold temperatures don’t damage the chip; in fact, they enable it to run more efficiently than it would at room temperature.

    The chip sends and receives data from a terahertz wave source outside the cryostat using a passive communication process known as backscatter, which involves reflections. An array of antennas sits on top of the chip; each antenna, only about 200 micrometers in size, acts as a tiny mirror. These mirrors can be “turned on” to reflect waves or “turned off.”

    The terahertz wave generation source encodes data onto the waves it sends into the cryostat, and the antennas in their “off” state can receive those waves and the data they carry.

    When the tiny mirrors are turned on, they can be set so they either reflect a wave in its current form or invert its phase before bouncing it back. If the reflected wave has the same phase, that represents a 0, but if the phase is inverted, that represents a 1. Electronics outside the cryostat can interpret those binary signals to decode the data.
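
    The sketch below plays out that phase-encoding scheme in a toy simulation (binary phase-shift keying recovered by correlating against the reference carrier). The carrier frequency, noise level, and bit length are arbitrary stand-ins, and nothing here models the actual terahertz hardware.

        import numpy as np

        rng = np.random.default_rng(0)
        f_carrier = 10e9                  # stand-in carrier; the real link uses terahertz frequencies
        fs = 8 * f_carrier                # simulation sample rate
        samples_per_bit = 400

        bits = rng.integers(0, 2, size=16)
        t = np.arange(samples_per_bit) / fs
        carrier = np.cos(2 * np.pi * f_carrier * t)

        # "Mirror": reflect the carrier unchanged for a 0, phase-inverted for a 1.
        reflected = np.concatenate([carrier * (1 - 2 * b) for b in bits])
        reflected += 0.3 * rng.standard_normal(reflected.size)        # channel noise

        # Receiver: correlate each bit period with the reference carrier; the sign gives the bit.
        reference = np.tile(carrier, bits.size)
        correlations = (reflected * reference).reshape(bits.size, samples_per_bit).sum(axis=1)
        decoded = (correlations < 0).astype(int)

        print("sent bits:   ", bits)
        print("decoded bits:", decoded)                               # matches the sent bits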

    “This backscatter technology is not new. For instance, RFIDs are based on backscatter communication. We borrow that idea and bring it into this very unique scenario, and I think this leads to a good combination of all these technologies,” Han says.

    Terahertz advantages

    The data are transmitted using high-speed terahertz waves, which are located on the electromagnetic spectrum between radio waves and infrared light.

    Because terahertz waves have much shorter wavelengths than radio waves, the chip and its antennas can be smaller, too, which would make the device easier to manufacture at scale. Terahertz waves also have higher frequencies than radio waves, so they can transmit data much faster and move larger amounts of information.

    But because terahertz waves have lower frequencies than the light waves used in photonic systems, the terahertz waves carry less quantum noise, which leads to less interference with quantum processors.

    Importantly, the transceiver chip and terahertz link can be fully constructed with standard fabrication processes on a CMOS chip, so they can be integrated into many current systems and techniques.

    “CMOS compatibility is important. For example, one terahertz link could deliver a large amount of data and feed it to another cryo-CMOS controller, which can split the signal to control multiple qubits simultaneously, so we can reduce the quantity of RF cables dramatically. This is very promising,” Wang says.

    The researchers were able to transmit data at 4 gigabits per second with their prototype, but Han says the sky is nearly the limit when it comes to boosting that speed. The downlink of the contactless system posed about 10 times less heat load than a system with metallic cables, and the temperature of the cryostat fluctuated up to a few millidegrees during experiments.

    Now that the researchers have demonstrated this wireless technology, they want to improve the system’s speed and efficiency using special terahertz fibers, which are only a few hundred micrometers wide. Han’s group has shown that these plastic wires can transmit data at a rate of 100 gigabits per second and have much better thermal insulation than fatter, metal cables.

    The researchers also want to refine the design of their transceiver to improve scalability and continue boosting its energy efficiency. Generating terahertz waves requires a lot of power, but Han’s group is studying more efficient methods that utilize low-cost chips. Incorporating this technology into the system could make the device more cost-effective.

    The transceiver chip was fabricated through the Intel University Shuttle Program.

  • New chip for mobile devices knocks out unwanted signals

    Imagine sitting in a packed stadium for a pivotal football game — tens of thousands of people are using mobile phones at the same time, perhaps video chatting with friends or posting photos on social media. The radio frequency signals being sent and received by all these devices could cause interference, which slows device performance and drains batteries.

    Designing devices that can efficiently block unwanted signals is no easy task, especially as 5G networks become more universal and future generations of wireless communication systems are developed. Conventional techniques utilize many filters to block a range of signals, but filters are bulky, expensive, and drive up production costs.

    MIT researchers have developed a circuit architecture that targets and blocks unwanted signals at a receiver’s input without hurting its performance. They borrowed a technique from digital signal processing and used a few tricks that enable it to work effectively in a radio frequency system across a wide frequency range.

    Their receiver blocked even high-power unwanted signals without introducing more noise, or inaccuracies, into the signal processing operations. The chip, which performed about 40 times better than other wideband receivers at blocking a special type of interference, does not require any additional hardware or circuitry. This would make the chip easier to manufacture at scale.

    “We are interested in developing electronic circuits and systems that meet the demands of 5G and future generations of wireless communication systems. In designing our circuits, we look for inspirations from other domains, such as digital signal processing and applied electromagnetics. We believe in circuit elegance and simplicity and try to come up with multifunctional hardware that doesn’t require additional power and chip area,” says senior author Negar Reiskarimian, the X-Window Consortium Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science (EECS) and a core faculty member of the Microsystems Technology Laboratories.

    Reiskarimian wrote the paper with EECS graduate students Soroush Araei, who is the lead author, and Shahabeddin Mohin. The work is being presented at the International Solid-State Circuits Conference.

    Harmonic interference

    The researchers developed the receiver chip using what is known as a mixer-first architecture. This means that when a radio frequency signal is received by the device, it is immediately converted to a lower-frequency signal before being passed on to the analog-to-digital converter to extract the digital bits that it is carrying. This approach enables the radio to cover a wide frequency range while filtering out interference located close to the operation frequency.

    While effective, mixer-first receivers are susceptible to a particular kind of interference known as harmonic interference. Harmonic interference comes from signals that have frequencies which are multiples of a device’s operating frequency. For instance, if a device operates at 1 gigahertz, then signals at 2 gigahertz, 3 gigahertz, 5 gigahertz, etc., will cause harmonic interference. These harmonics can be indistinguishable from the original signal during the frequency conversion process.    
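
    The short simulation below (a generic illustration, not the researchers’ circuit) shows why this happens in a mixer-first front end: a hard-switching mixer behaves like multiplication by a square-wave local oscillator, and the square wave’s own harmonics fold a blocker near 3 gigahertz onto the same intermediate frequency as a desired signal near 1 gigahertz.

        import numpy as np

        fs = 64e9                                    # simulation sample rate
        t = np.arange(0, 2e-6, 1 / fs)
        f_lo = 1e9                                   # receiver operating (local oscillator) frequency
        f_if = 10e6                                  # intermediate frequency after downconversion

        # A hard-switching mixer acts like multiplication by a square wave at f_lo.
        lo = np.where(np.cos(2 * np.pi * f_lo * t) >= 0, 1.0, -1.0)

        def downconverted_amplitude(f_rf):
            """Amplitude appearing at the IF after mixing a unit-amplitude tone at f_rf with the LO."""
            mixed = np.cos(2 * np.pi * f_rf * t) * lo
            # Correlating against the expected IF tone acts as a narrow low-pass filter.
            return 2 * abs(np.mean(mixed * np.exp(-2j * np.pi * f_if * t)))

        print("desired signal at 1.01 GHz:      ", round(downconverted_amplitude(f_lo + f_if), 3))
        print("3rd-harmonic blocker at 3.01 GHz:", round(downconverted_amplitude(3 * f_lo + f_if), 3))
        # The blocker is attenuated only by about a factor of 3, not removed.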

    “A lot of other wideband receivers don’t do anything about the harmonics until it is time to see what the bits mean. They do it later in the chain, but this doesn’t work well if you have high-power signals at the harmonic frequencies. Instead, we want to remove harmonics as soon as possible to avoid losing information,” Araei says.

    To do this, the researchers were inspired by a concept from digital signal processing known as block digital filtering. They adapted this technique to the analog domain using capacitors, which hold electric charges. The capacitors are charged up at different times as the signal is received, then they are switched off so that charge can be held and used later for processing the data.  

    These capacitors can be connected to each other in various ways, including connecting them in parallel, which enables the capacitors to exchange the stored charges. While this technique can target harmonic interference, the process results in significant signal loss. Stacking capacitors is another possibility, but this method alone is not enough to provide harmonic resilience.

    Most radio receivers already use switched-capacitor circuits to perform frequency conversion. This frequency conversion circuitry can be combined with block filtering to target harmonic interference.

    A precise arrangement

    The researchers found that arranging capacitors in a specific layout, by connecting some of them in series and then performing charge sharing, enabled the device to block harmonic interference without losing any information.

    “People have used these techniques, charge sharing and capacitor stacking, separately before, but never together. We found that both techniques must be done simultaneously to get this benefit. Moreover, we have found out how to do this in a passive way within the mixer without using any additional hardware while maintaining signal integrity and keeping the costs down,” he says.

    They tested the device by simultaneously sending a desired signal and harmonic interference. Their chip was able to block the harmonic signals effectively with only a slight reduction in signal strength, handling interfering signals 40 times more powerful than those previous state-of-the-art wideband receivers could manage.