More stories

  • Nonsense can make sense to machine-learning models

    For all that neural networks can accomplish, we still don’t really understand how they operate. We can program them to learn, but making sense of a machine’s decision-making process remains much like a complex puzzle in which many integral pieces have yet to be fitted. 

    If a model were trying to classify an image of said puzzle, for example, it could encounter well-known but annoying adversarial attacks, or even more run-of-the-mill data or processing issues. 

    This could be particularly worrisome for high-stakes environments, like split-second decisions for self-driving cars and medical diagnostics for diseases that need more immediate attention. Autonomous vehicles in particular rely heavily on systems that can accurately understand their surroundings and then make quick, safe decisions. In the study, networks used specific backgrounds, edges, or particular patterns of the sky to classify traffic lights and street signs — irrespective of what else was in the image. 

    The team found that neural networks trained on popular datasets like CIFAR-10 and ImageNet suffered from overinterpretation. Models trained on CIFAR-10, for example, made confident predictions even when 95 percent of each input image was missing and the remainder was senseless to humans. 

    “Overinterpretation is a dataset problem that’s caused by these nonsensical signals in datasets. Not only are these high-confidence images unrecognizable, but they contain less than 10 percent of the original image in unimportant areas, such as borders. We found that these images were meaningless to humans, yet models can still classify them with high confidence,” says Brandon Carter, MIT Computer Science and Artificial Intelligence Laboratory PhD student and lead author on a paper about the research. 

    Deep-image classifiers are widely used. In addition to medical diagnosis and boosting autonomous vehicle technology, there are use cases in security, gaming, and even an app that tells you if something is or isn’t a hot dog, because sometimes we need reassurance. The technology in question works by processing individual pixels from tons of pre-labeled images for the network to “learn.” 

    Image classification is hard because machine-learning models can latch onto these subtle, nonsensical signals. Then, when image classifiers are trained on datasets such as ImageNet, they can make seemingly reliable predictions based on those signals. 

    Although these nonsensical signals can lead to model fragility in the real world, the signals are actually valid in the datasets, meaning overinterpretation can’t be diagnosed using typical evaluation methods based on test accuracy. 

    To find the rationale for a model’s prediction on a particular input, the methods in the present study start with the full image and repeatedly ask: what can I remove from this image? Essentially, they keep covering up the image until they are left with the smallest piece on which the model still makes a confident decision. 
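
    The sketch below illustrates that covering-up idea in a greedy, patch-based form. It is a simplification for intuition only, not the authors' exact procedure; the classifier interface `predict_proba`, the patch size, and the confidence threshold are all assumptions introduced here for illustration.

    ```python
    import numpy as np

    def smallest_confident_subset(image, predict_proba, target_class,
                                  threshold=0.9, patch=4):
        """Greedily cover up patches of `image` while the classifier's confidence
        in `target_class` stays above `threshold`; return the masked image whose
        remaining pixels still support a confident prediction.

        `predict_proba(img)` is a hypothetical callable returning a vector of
        class probabilities for one image array of shape (H, W, C).
        Brute-force for clarity, not efficiency.
        """
        masked = image.copy()
        h, w = image.shape[:2]
        patches = [(y, x) for y in range(0, h, patch) for x in range(0, w, patch)]
        removed = set()
        while True:
            best = None
            for idx, (y, x) in enumerate(patches):
                if idx in removed:
                    continue
                trial = masked.copy()
                trial[y:y + patch, x:x + patch] = 0.0  # cover this patch
                conf = predict_proba(trial)[target_class]
                # keep the removal that hurts confidence the least, if any survives
                if conf >= threshold and (best is None or conf > best[0]):
                    best = (conf, idx, trial)
            if best is None:  # any further removal would drop below the threshold
                return masked
            _, idx, masked = best
            removed.add(idx)
    ```

    In the overinterpretation setting, the surprise is that the surviving pixels are often meaningless to a human, such as scattered border patches, yet still yield a high-confidence prediction.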

    To that end, it could also be possible to use these methods as a form of validation. For example, if you have an autonomous car that uses a trained machine-learning method for recognizing stop signs, you could test that method by identifying the smallest input subset that constitutes a stop sign. If that subset consists of a tree branch, a particular time of day, or something that’s not a stop sign, you could be concerned that the car might come to a stop at a place it’s not supposed to.

    While it may seem that the model is the likely culprit here, the datasets are more likely to blame. “There’s the question of how we can modify the datasets in a way that would enable models to be trained to more closely mimic how a human would think about classifying images and therefore, hopefully, generalize better in these real-world scenarios, like autonomous driving and medical diagnosis, so that the models don’t have this nonsensical behavior,” says Carter. 

    This may mean creating datasets in more controlled environments. Currently, training images are simply pulled from public domains and then classified. But if you want to do object identification, for example, it might be necessary to train models with objects set against an uninformative background. 

    This work was supported by Schmidt Futures and the National Institutes of Health. Carter wrote the paper alongside Siddhartha Jain and Jonas Mueller, scientists at Amazon, and MIT Professor David Gifford. They are presenting the work at the 2021 Conference on Neural Information Processing Systems.

  • Systems scientists find clues to why false news snowballs on social media

    The spread of misinformation on social media is a pressing societal problem that tech companies and policymakers continue to grapple with, yet those who study this issue still don’t have a deep understanding of why and how false news spreads.

    To shed some light on this murky topic, researchers at MIT developed a theoretical model of a Twitter-like social network to study how news is shared and explore situations where a non-credible news item will spread more widely than the truth. Agents in the model are driven by a desire to persuade others to take on their point of view: The key assumption in the model is that people bother to share something with their followers if they think it is persuasive and likely to move others closer to their mindset. Otherwise they won’t share.

    The researchers found that in such a setting, when a network is highly connected or the views of its members are sharply polarized, news that is likely to be false will spread more widely and travel deeper into the network than news with higher credibility.

    This theoretical work could inform empirical studies of the relationship between news credibility and the size of its spread, which might help social media companies adapt networks to limit the spread of false information.

    “We show that, even if people are rational in how they decide to share the news, this could still lead to the amplification of information with low credibility. With this persuasion motive, no matter how extreme my beliefs are — given that the more extreme they are the more I gain by moving others’ opinions — there is always someone who would amplify [the information],” says senior author Ali Jadbabaie, professor and head of the Department of Civil and Environmental Engineering and a core faculty member of the Institute for Data, Systems, and Society (IDSS) and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).

    Joining Jadbabaie on the paper are first author Chin-Chia Hsu, a graduate student in the Social and Engineering Systems program in IDSS, and Amir Ajorlou, a LIDS research scientist. The research will be presented this week at the IEEE Conference on Decision and Control.

    Pondering persuasion

    This research draws on a 2018 study by Sinan Aral, the David Austin Professor of Management at the MIT Sloan School of Management; Deb Roy, an associate professor of media arts and sciences at the Media Lab; and former postdoc Soroush Vosoughi (now an assistant professor of computer science at Dartmouth College). Their empirical study of data from Twitter found that false news spreads wider, faster, and deeper than real news.

    Jadbabaie and his collaborators wanted to drill down on why this occurs.

    They hypothesized that persuasion might be a strong motive for sharing news — perhaps agents in the network want to persuade others to take on their point of view — and decided to build a theoretical model that would let them explore this possibility.

    In their model, agents have some prior belief about a policy, and their goal is to persuade followers to move their beliefs closer to the agent’s side of the spectrum.

    A news item is initially released to a small, random subgroup of agents, who must each decide whether to share it with their followers. An agent weighs the newsworthiness of the item and its credibility, and updates its belief based on how surprising or convincing the news is. 

    “They will make a cost-benefit analysis to see if, on average, this piece of news will move people closer to what they think or move them away. And we include a nominal cost for sharing. For instance, taking some action, if you are scrolling on social media, you have to stop to do that. Think of that as a cost. Or a reputation cost might come if I share something that is embarrassing. Everyone has this cost, so the more extreme and the more interesting the news is, the more you want to share it,” Jadbabaie says.

    If the news affirms the agent’s perspective and has persuasive power that outweighs the nominal cost, the agent will always share the news. But if an agent thinks the news item is something others may have already seen, the agent is disincentivized to share it.

    Since an agent’s willingness to share news is a product of its perspective and how persuasive the news is, the more extreme an agent’s perspective or the more surprising the news, the more likely the agent will share it.

    The researchers used this model to study how information spreads during a news cascade, which is an unbroken sharing chain that rapidly permeates the network.
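
    As a rough illustration of that mechanism, the toy simulation below has each agent share an item only when its persuasive benefit, which grows with the agent's belief extremity and the item's newsworthiness, exceeds a nominal sharing cost. It is a drastic simplification of the authors' model, and the network structure, the benefit rule, and every parameter here are assumptions made for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def cascade_size(n_agents=2000, avg_followers=8, polarization=1.0,
                     newsworthiness=1.5, cost=1.0, seeds=10):
        """Toy persuasion-driven sharing cascade (illustrative only).

        Agents hold prior beliefs drawn from a distribution whose spread stands in
        for polarization; an agent reshares when |belief| * newsworthiness > cost.
        (Credibility and the belief updating in the full model are omitted here.)
        """
        beliefs = rng.normal(0.0, polarization, n_agents)
        shared = np.zeros(n_agents, dtype=bool)
        frontier = list(rng.choice(n_agents, size=seeds, replace=False))
        while frontier:
            agent = frontier.pop()
            if shared[agent]:
                continue
            if abs(beliefs[agent]) * newsworthiness <= cost:
                continue  # not persuasive enough to be worth the sharing cost
            shared[agent] = True
            n_follow = rng.poisson(avg_followers)
            frontier.extend(rng.integers(0, n_agents, size=n_follow).tolist())
        return int(shared.sum())

    # More polarization or a lower sharing cost yields larger cascades in this toy.
    print(cascade_size(polarization=0.5), cascade_size(polarization=2.0))
    print(cascade_size(cost=1.0), cascade_size(cost=0.25))
    ```

    In the paper's full model, credibility and surprise also enter the agents' belief updates and cost-benefit calculations, which is what drives the connectivity and polarization results described next.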

    Connectivity and polarization

    The team found that when a network has high connectivity and the news is surprising, the credibility threshold for starting a news cascade is lower. High connectivity means that there are multiple connections between many users in the network.

    Likewise, when the network is largely polarized, there are plenty of agents with extreme views who want to share the news item, starting a news cascade. In both these instances, news with low credibility creates the largest cascades.

    “For any piece of news, there is a natural network speed limit, a range of connectivity, that facilitates good transmission of information where the size of the cascade is maximized by true news. But if you exceed that speed limit, you will get into situations where inaccurate news or news with low credibility has a larger cascade size,” Jadbabaie says.

    If the views of users in the network become more diverse, it is less likely that a piece of news with low credibility will spread more widely than the truth.

    Jadbabaie and his colleagues designed the agents in the network to behave rationally, so the model would better capture actions real humans might take if they want to persuade others.

    “Someone might say that is not why people share, and that is valid. Why people do certain things is a subject of intense debate in cognitive science, social psychology, neuroscience, economics, and political science,” he says. “Depending on your assumptions, you end up getting different results. But I feel like this assumption of persuasion being the motive is a natural assumption.”

    Their model also shows how costs can be manipulated to reduce the spread of false information. Agents make a cost-benefit analysis and won’t share news if the cost to do so outweighs the benefit of sharing.

    “We don’t make any policy prescriptions, but one thing this work suggests is that, perhaps, having some cost associated with sharing news is not a bad idea. The reason you get lots of these cascades is because the cost of sharing the news is actually very low,” he says.

    This work was supported by an Army Research Office Multidisciplinary University Research Initiative grant and a Vannevar Bush Fellowship from the Office of the Secretary of Defense.

  • Q&A: Can the world change course on climate?

    In this ongoing series on climate issues, MIT faculty, students, and alumni in the humanistic fields share perspectives that are significant for solving climate change and mitigating its myriad social and ecological impacts. Nazli Choucri is a professor of political science and an expert on climate issues, who also focuses on international relations and cyberpolitics. She is the architect and director of the Global System for Sustainable Development, an evolving knowledge networking system centered on sustainability problems and solution strategies. The author and/or editor of 12 books, she is also the founding editor of the MIT Press book series “Global Environmental Accord: Strategies for Sustainability and Institutional Innovation.”

    Q: The impacts of climate change — including storms, floods, wildfires, and droughts — have the potential to destabilize nations, yet they are not constrained by borders. What international developments most concern you in terms of addressing climate change and its myriad ecological and social impacts?

    A: Climate change is a global issue. By definition, and a long history of practice, countries focus on their own priorities and challenges. Over time, we have seen the gradual development of norms reflecting shared interests, and the institutional arrangements to support and pursue the global good.

    What concerns me most is that general responses to the climate crisis are being framed in broad terms; the overall pace of change remains perilously slow; and uncertainty remains about operational action and implementation of stated intent. We have just seen the completion of the 26th meeting of states devoted to climate change, the United Nations Climate Change Conference (COP26). In some ways this is positive. Yet, past commitments remain unfulfilled, creating added stress in an already stressful political situation.

    Industrial countries are uneven in their recognition of, and responses to, climate change. This may signal uncertainty about whether climate matters are sufficiently compelling to call for immediate action. Alternatively, the push for changing course may seem too costly at a time when other imperatives — such as employment, economic growth, or protecting borders — inevitably dominate discourse and decisions. Whatever the cause, the result has been an unwillingness to take strong action.

    Unfortunately, climate change remains within the domain of “low politics,” although there are signs the issue is making a slow but steady shift to “high politics” — those issues deemed vital to the existence of the state. This means that short-term priorities, such as those noted above, continue to shape national politics and international positions and, by extension, to obscure the existential threat revealed by scientific evidence.

    As for developing countries, these are overwhelmed by internal challenges, and managing the difficulties of daily life always takes priority over other challenges, however compelling. Long-term thinking is a luxury, but daily bread is a necessity.

    Non-state actors — including registered nongovernmental organizations, climate organizations, sustainability support groups, activists of various sorts, and in some cases much of civil society — have been left with a large share of the responsibility for educating and convincing diverse constituencies of the consequences of inaction on climate change. But many of these institutions carry their own burdens and struggle to manage current pressures.

    The international community, through its formal and informal institutions, continues to articulate the perils of climate change and to search for a powerful consensus that can prove effective both in form and in function. The general contours are agreed upon — more or less. But leadership of, for, and by the global collective is elusive and difficult to shape.

    Most concerning of all is the clear reluctance to address head-on the challenge of planning for changes that we know will occur. The reality that we are all being affected — in different ways and to different degrees — has yet to be sufficiently appreciated by everyone, everywhere. Yet, in many parts of the world, major shifts in climate will create pressures on human settlements, spur forced migrations, or generate social dislocations. Some small island states, for example, may not survive a sea-level surge. Everywhere there is a need to cut emissions, and this means adaptation and/or major changes in economic activity and in lifestyle.

    The discourse and debate at COP26 reflect all of these persistent features in the international system. So far, the largest achievements center on the common consensus that more must be done to prevent the rise in temperature from creating a global catastrophe. This is not enough, however. Differences remain, and countries have yet to specify what cuts in emissions they are willing to make. Echoes of who is responsible for what remain strong. The thorny matter of the unfulfilled pledge of $100 billion once promised by rich countries to help countries reduce their emissions remained unresolved.

    At the same time, however, some important agreements were reached. The United States and China announced they would make greater efforts to cut methane, a powerful greenhouse gas. More than 100 countries agreed to end deforestation. India joined the countries committed to attain zero emissions by 2070. And on matters of finance, countries agreed to a two-year plan to determine how to meet the needs of the most-vulnerable countries.

    Q: You teach a class on “Sustainability Development: Theory and Practice.” Broadly speaking, what are the goals of this class? What lessons do you hope students will carry with them into the future?

    A: The goal of 17.181, my class on sustainability, is to frame as clearly as possible the concept of sustainable development (sustainability) with attention to conceptual, empirical, institutional, and policy issues.

    The course centers on human activities. Individuals are embedded in complex interactive systems: the social system, the natural environment, and the constructed cyber domain — each with distinct temporal, spatial, and dynamic features. Sustainability issues intersect with, but cannot be folded into, the impacts of climate change. Sustainability places human beings in social systems at the core of what must be done to respect the imperatives of a highly complex natural environment.

    We consider sustainability an evolving knowledge domain with attendant policy implications. It is driven by events on the ground, not by revolution in academic or theoretical concerns per se. Overall, sustainable development refers to the process of meeting the needs of current and future generations, without undermining the resilience of the life-supporting properties, the integrity of social systems, or the supports of the human-constructed cyberspace.

    More specifically, we differentiate among four fundamental dimensions and their necessary conditions:

    (a) ecological systems — exhibiting balance and resilience;
    (b) economic production and consumption — with equity and efficiency;
    (c) governance and politics — with participation and responsiveness; and
    (d) institutional performance — demonstrating adaptation and incorporating feedback.

    The core proposition is this: If all conditions hold, then the system is (or can be) sustainable. Then, we must examine the critical drivers — people, resources, technology, and their interactions — followed by a review and assessment of evolving policy responses. Then we ask: What are new opportunities?

    I would like students to carry forward these ideas and issues: what has been deemed “normal” in modern Western societies and in developing societies seeking to emulate the Western model is damaging humans in many ways — all well-known. Yet only recently have alternatives begun to be considered to the traditional economic growth model based on industrialization and high levels of energy use. To make changes, we must first understand the underlying incentives, realities, and choices that shape a whole set of dysfunctional behaviors and outcomes. We then need to delve deep into the driving sources and consequences, and to consider the many ways in which our known “normal” can be adjusted — in theory and in practice.

    Q: In confronting an issue as formidable as global climate change, what gives you hope?

    A: I see a few hopeful signs; among them:

    The scientific evidence is clear and compelling. We are no longer discussing whether there is climate change, or if we will face major challenges of unprecedented proportions, or even how to bring about an international consensus on the salience of such threats.

    Climate change has been recognized as a global phenomenon. Imperatives for cooperation are necessary. No one can go it alone. Major efforts have been, and are being, made in world politics to forge action agendas with specific targets.

    The issue appears to be on the verge of becoming one of “high politics” in the United States.

    Younger generations are more sensitive to the reality that we are altering the life-supporting properties of our planet. They are generally more educated, skilled, and open to addressing such challenges than their elders.

    However disappointing the results of COP26 might seem, the global community is moving in the right direction.

    None of the above points, individually or jointly, translates into an effective response to the known impacts of climate change — let alone the unknown. But this is what gives me hope.

    Interview prepared by MIT SHASS Communications
    Editorial, design, and series director: Emily Hiestand
    Senior writer: Kathryn O’Neill

  • Machine learning speeds up vehicle routing

    Waiting for a holiday package to be delivered? There’s a tricky math problem that needs to be solved before the delivery truck pulls up to your door, and MIT researchers have a strategy that could speed up the solution.

    The approach applies to vehicle routing problems such as last-mile delivery, where the goal is to deliver goods from a central depot to multiple cities while keeping travel costs down. While there are algorithms designed to solve this problem for a few hundred cities, these solutions become too slow when applied to a larger set of cities.

    To remedy this, Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in Civil and Environmental Engineering and the Institute for Data, Systems, and Society, and her students have come up with a machine-learning strategy that accelerates some of the strongest algorithmic solvers by 10 to 100 times.

    The solver algorithms work by breaking up the problem of delivery into smaller subproblems to solve — say, 200 subproblems for routing vehicles between 2,000 cities. Wu and her colleagues augment this process with a new machine-learning algorithm that identifies the most useful subproblems to solve, instead of solving all the subproblems, to increase the quality of the solution while using orders of magnitude less compute.

    Their approach, which they call “learning-to-delegate,” can be used across a variety of solvers and a variety of similar problems, including scheduling and pathfinding for warehouse robots, the researchers say.

    The work pushes the boundaries on rapidly solving large-scale vehicle routing problems, says Marc Kuo, founder and CEO of Routific, a smart logistics platform for optimizing delivery routes. Some of Routific’s recent algorithmic advances were inspired by Wu’s work, he notes.

    “Most of the academic body of research tends to focus on specialized algorithms for small problems, trying to find better solutions at the cost of processing times. But in the real-world, businesses don’t care about finding better solutions, especially if they take too long for compute,” Kuo explains. “In the world of last-mile logistics, time is money, and you cannot have your entire warehouse operations wait for a slow algorithm to return the routes. An algorithm needs to be hyper-fast for it to be practical.”

    Wu, social and engineering systems doctoral student Sirui Li, and electrical engineering and computer science doctoral student Zhongxia Yan presented their research this week at the 2021 NeurIPS conference.

    Selecting good problems

    Vehicle routing problems are a class of combinatorial optimization problems, which are generally tackled with heuristic algorithms that find “good-enough” solutions. It’s typically not possible to come up with the one “best” answer to these problems, because the number of possible solutions is far too huge.

    “The name of the game for these types of problems is to design efficient algorithms … that are optimal within some factor,” Wu explains. “But the goal is not to find optimal solutions. That’s too hard. Rather, we want to find as good of solutions as possible. Even a 0.5% improvement in solutions can translate to a huge revenue increase for a company.”

    Over the past several decades, researchers have developed a variety of heuristics to yield quick solutions to combinatorial problems. They usually do this by starting with a poor but valid initial solution and then gradually improving the solution — by trying small tweaks to improve the routing between nearby cities, for example. For a large problem like a 2,000-plus city routing challenge, however, this approach just takes too much time.

    More recently, machine-learning methods have been developed to solve the problem, but while faster, they tend to be less accurate, even at the scale of a few dozen cities. Wu and her colleagues decided to see if there was a beneficial way to combine the two methods to find speedy but high-quality solutions.

    “For us, this is where machine learning comes in,” Wu says. “Can we predict which of these subproblems, that if we were to solve them, would lead to more improvement in the solution, saving computing time and expense?”

    Traditionally, a large-scale vehicle routing heuristic might choose which subproblems to solve, and in which order, either randomly or by applying yet another carefully devised heuristic. In this case, the MIT researchers ran sets of subproblems through a neural network they created to automatically find the subproblems that, when solved, would lead to the greatest gain in quality of the solutions. This sped up subproblem selection by 1.5 to 2 times, Wu and colleagues found.
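
    A minimal sketch of that selection loop is below. It is not the authors' code: the contiguous-segment partitioning, the 2-opt local solver, and the stand-in scorer are all simplifications assumed here, whereas in the actual work the scorer is a trained neural network that predicts the improvement from solving each subproblem.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def tour_length(route, pts):
        return sum(np.linalg.norm(pts[route[i]] - pts[route[i - 1]]) for i in range(len(route)))

    def two_opt(route, pts, iters=200):
        """Cheap stand-in local solver: random 2-opt moves kept only if they shorten the tour."""
        route = list(route)
        for _ in range(iters):
            i, j = sorted(rng.integers(1, len(route), size=2))
            candidate = route[:i] + route[i:j][::-1] + route[j:]
            if tour_length(candidate, pts) < tour_length(route, pts):
                route = candidate
        return route

    def delegate_step(route, pts, scorer, n_subs=20, k=5):
        """One learning-to-delegate style step (illustrative): split the tour into
        subproblems, let `scorer` predict which are most worth re-optimizing, and
        delegate only the top-k of them to the local solver."""
        span = max(2, len(route) // n_subs)
        subs = [route[i:i + span] for i in range(0, len(route), span)]
        scores = [scorer(sub, pts) for sub in subs]  # predicted gain per subproblem
        for idx in np.argsort(scores)[-k:]:
            subs[idx] = two_opt(subs[idx], pts)
        return [city for sub in subs for city in sub]

    # Stand-in scorer: current length of the segment (longer segments presumably have
    # more slack); the paper instead trains a neural network on past solver runs.
    pts = rng.random((500, 2))
    tour = delegate_step(list(range(500)), pts, scorer=lambda sub, p: tour_length(sub, p))
    print(round(tour_length(list(range(500)), pts), 2), "->", round(tour_length(tour, pts), 2))
    ```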

    “We don’t know why these subproblems are better than other subproblems,” Wu notes. “It’s actually an interesting line of future work. If we did have some insights here, these could lead to designing even better algorithms.”

    Surprising speed-up

    Wu and colleagues were surprised by how well the approach worked. In machine learning, the idea of garbage-in, garbage-out applies — that is, the quality of a machine-learning approach relies heavily on the quality of the data. A combinatorial problem is so difficult that even its subproblems can’t be optimally solved. A neural network trained on the “medium-quality” subproblem solutions available as the input data “would typically give medium-quality results,” says Wu. In this case, however, the researchers were able to leverage the medium-quality solutions to achieve high-quality results, significantly faster than state-of-the-art methods.

    For vehicle routing and similar problems, users often must design very specialized algorithms to solve their specific problem. Some of these heuristics have been in development for decades.

    The learning-to-delegate method offers an automatic way to accelerate these heuristics for large problems, no matter what the heuristic or — potentially — what the problem.

    Since the method can work with a variety of solvers, it may be useful for a variety of resource allocation problems, says Wu. “We may unlock new applications that now will be possible because the cost of solving the problem is 10 to 100 times less.”

    The research was supported by the MIT Indonesia Seed Fund, the U.S. Department of Transportation Dwight David Eisenhower Transportation Fellowship Program, and the MIT-IBM Watson AI Lab.

  • Q&A: More-sustainable concrete with machine learning

    As a building material, concrete withstands the test of time. Its use dates back to early civilizations, and today it is the most popular composite choice in the world. However, it’s not without its faults. Production of its key ingredient, cement, contributes 8-9 percent of global anthropogenic CO2 emissions and 2-3 percent of energy consumption, which is only projected to increase in the coming years. With United States infrastructure aging, the federal government recently passed a milestone bill to revitalize and upgrade it, along with a push to reduce greenhouse gas emissions where possible, putting concrete in the crosshairs for modernization, too.

    Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in the MIT Department of Materials Science and Engineering, and Jie Chen, MIT-IBM Watson AI Lab research scientist and manager, think artificial intelligence can help meet this need by designing and formulating new, more sustainable concrete mixtures, with lower costs and carbon dioxide emissions, while improving material performance and reusing manufacturing byproducts in the material itself. Olivetti’s research improves environmental and economic sustainability of materials, and Chen develops and optimizes machine learning and computational techniques, which he can apply to materials reformulation. Olivetti and Chen, along with their collaborators, have recently teamed up for an MIT-IBM Watson AI Lab project to make concrete more sustainable for the benefit of society, the climate, and the economy.

    Q: What applications does concrete have, and what properties make it a preferred building material?

    Olivetti: Concrete is the dominant building material globally with an annual consumption of 30 billion metric tons. That is over 20 times the next most produced material, steel, and the scale of its use leads to considerable environmental impact, approximately 5-8 percent of global greenhouse gas (GHG) emissions. It can be made locally, has a broad range of structural applications, and is cost-effective. Concrete is a mixture of fine and coarse aggregate, water, cement binder (the glue), and other additives.

    Q: Why isn’t it sustainable, and what research problems are you trying to tackle with this project?

    Olivetti: The community is working on several ways to reduce the impact of this material, including the use of alternative fuels for heating the cement mixture, increased energy and materials efficiency, and carbon sequestration at production facilities. But one important opportunity is to develop an alternative to the cement binder.

    While cement is 10 percent of the concrete mass, it accounts for 80 percent of the GHG footprint. This impact is derived from the fuel burned to heat and run the chemical reaction required in manufacturing, but also the chemical reaction itself releases CO2 from the calcination of limestone. Therefore, partially replacing the input ingredients to cement (traditionally ordinary Portland cement or OPC) with alternative materials from waste and byproducts can reduce the GHG footprint. But use of these alternatives is not inherently more sustainable because wastes might have to travel long distances, which adds to fuel emissions and cost, or might require pretreatment processes. The optimal way to make use of these alternate materials will be situation-dependent. But because of the vast scale, we also need solutions that account for the huge volumes of concrete needed. This project is trying to develop novel concrete mixtures that will decrease the GHG impact of the cement and concrete, moving away from the trial-and-error processes towards those that are more predictive.

    Chen: If we want to fight climate change and make our environment better, are there alternative ingredients or a reformulation we could use so that less greenhouse gas is emitted? We hope that through this project using machine learning we’ll be able to find a good answer.

    Q: Why is this problem important to address now, at this point in history?

    Olivetti: There is urgent need to address greenhouse gas emissions as aggressively as possible, and the road to doing so isn’t necessarily straightforward for all areas of industry. For transportation and electricity generation, there are paths that have been identified to decarbonize those sectors. We need to move much more aggressively to achieve those in the time needed; further, the technological approaches to achieve that are more clear. However, for tough-to-decarbonize sectors, such as industrial materials production, the pathways to decarbonization are not as mapped out.

    Q: How are you planning to address this problem to produce better concrete?

    Olivetti: The goal is to predict mixtures that will both meet performance criteria, such as strength and durability, with those that also balance economic and environmental impact. A key to this is to use industrial wastes in blended cements and concretes. To do this, we need to understand the glass and mineral reactivity of constituent materials. This reactivity not only determines the limit of the possible use in cement systems but also controls concrete processing, and the development of strength and pore structure, which ultimately control concrete durability and life-cycle CO2 emissions.

    Chen: We investigate using waste materials to replace part of the cement component. This is something that we’ve hypothesized would be more sustainable and economic — actually waste materials are common, and they cost less. Because of the reduction in the use of cement, the final concrete product would be responsible for much less carbon dioxide production. Figuring out the right concrete mixture proportion that makes durable concretes while achieving other goals is a very challenging problem. Machine learning is giving us an opportunity to explore the advancement of predictive modeling, uncertainty quantification, and optimization to solve the issue. What we are doing is exploring options using deep learning as well as multi-objective optimization techniques to find an answer. These efforts are now more feasible to carry out, and they will produce results with reliability estimates that we need to understand what makes a good concrete.

    Q: What kinds of AI and computational techniques are you employing for this?

    Olivetti: We use AI techniques to collect data on individual concrete ingredients, mix proportions, and concrete performance from the literature through natural language processing. We also add data obtained from industry and/or high throughput atomistic modeling and experiments to optimize the design of concrete mixtures. Then we use this information to develop insight into the reactivity of possible waste and byproduct materials as alternatives to cement materials for low-CO2 concrete. By incorporating generic information on concrete ingredients, the resulting concrete performance predictors are expected to be more reliable and transformative than existing AI models.

    Chen: The final objective is to figure out what constituents, and how much of each, to put into the recipe for producing the concrete that optimizes the various factors: strength, cost, environmental impact, performance, etc. For each of the objectives, we need certain models: We need a model to predict the performance of the concrete (like, how long does it last and how much weight does it sustain?), a model to estimate the cost, and a model to estimate how much carbon dioxide is generated. We will need to build these models by using data from literature, from industry, and from lab experiments.

    We are exploring Gaussian process models to predict the concrete strength, going forward into days and weeks. This model can give us an uncertainty estimate of the prediction as well. Such a model needs specification of parameters, for which we will use another model to calculate. At the same time, we also explore neural network models because we can inject domain knowledge from human experience into them. Some models are as simple as multi-layer perceptrons, while some are more complex, like graph neural networks. The goal here is that we want to have a model that is not only accurate but also robust — the input data is noisy, and the model must embrace the noise, so that its prediction is still accurate and reliable for the multi-objective optimization.
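
    As an illustration of the Gaussian-process idea, the sketch below fits a GP with a noise-absorbing kernel to synthetic mixture data and returns strength predictions with uncertainty bands. The feature set, the synthetic data, and the kernel choice are assumptions for illustration, not the lab's actual model or dataset.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    # Hypothetical features per mix: [cement fraction, waste-replacement fraction,
    # water/binder ratio, curing age scaled to 0-1]; target: strength in MPa.
    X = rng.random((80, 4))
    y = 45 * X[:, 0] - 12 * X[:, 1] - 20 * X[:, 2] + 15 * X[:, 3] + rng.normal(0, 2, 80)

    # RBF captures smooth composition-strength trends; WhiteKernel absorbs the
    # measurement noise that plagues data pulled from literature and industry.
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                                  normalize_y=True)
    gp.fit(X, y)

    mean, std = gp.predict(rng.random((3, 4)), return_std=True)
    for m, s in zip(mean, std):
        print(f"predicted strength: {m:5.1f} MPa  (± {2 * s:.1f}, ~95% band)")
    ```

    Predictions and uncertainty estimates like these are what would then feed into the multi-objective optimization described next.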

    Once we have built models that we are confident with, we will inject their predictions and uncertainty estimates into the optimization of multiple objectives, under constraints and under uncertainties.

    Q: How do you balance cost-benefit trade-offs?

    Chen: The multiple objectives we consider are not necessarily consistent, and sometimes they are at odds with each other. The goal is to identify scenarios where the values for our objectives cannot be further pushed simultaneously without compromising one or a few. For example, if you want to further reduce the cost, you probably have to suffer the performance or suffer the environmental impact. Eventually, we will give the results to policymakers and they will look into the results and weigh the options. For example, they may be able to tolerate a slightly higher cost under a significant reduction in greenhouse gas. Alternatively, if the cost varies little but the concrete performance changes drastically, say, doubles or triples, then this is definitely a favorable outcome.
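
    The trade-off Chen describes is essentially a Pareto frontier. The snippet below shows a basic non-dominated filter on a handful of candidate mixes; the columns, units, and numbers are made up for illustration.

    ```python
    import numpy as np

    def pareto_front(points):
        """Return the non-dominated rows of `points`, where every column is a cost
        to be minimized (a mix is dominated if some other mix is at least as good
        on every objective and strictly better on at least one)."""
        points = np.asarray(points, dtype=float)
        keep = []
        for i, p in enumerate(points):
            dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
            if not dominated:
                keep.append(i)
        return points[keep]

    # Columns: cost ($/m^3), embodied CO2 (kg/m^3), negative 28-day strength (MPa),
    # so that "smaller is better" holds for every column. All values are hypothetical.
    candidate_mixes = [
        [ 90, 300, -40],
        [110, 220, -42],
        [ 95, 260, -35],
        [130, 200, -45],
        [120, 310, -38],   # dominated by the first mix: costlier, dirtier, weaker
    ]
    print(pareto_front(candidate_mixes))
    ```

    Decision-makers would then weigh points along that frontier, as Chen describes, rather than being handed a single "best" mix.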

    Q: What kinds of challenges do you face in this work?

    Chen: The data we get either from industry or from literature are very noisy; the concrete measurements can vary a lot, depending on where and when they are taken. There are also substantial missing data when we integrate them from different sources, so, we need to spend a lot of effort to organize and make the data usable for building and training machine learning models. We also explore imputation techniques that substitute missing features, as well as models that tolerate missing features, in our predictive modeling and uncertainty estimate.

    Q: What do you hope to achieve through this work?

    Chen: In the end, we are suggesting either one or a few concrete recipes, or a continuum of recipes, to manufacturers and policymakers. We hope that this will provide invaluable information for both the construction industry and for the effort of protecting our beloved Earth.

    Olivetti: We’d like to develop a robust way to design cements that make use of waste materials to lower their CO2 footprint. Nobody is trying to make waste, so we can’t rely on one stream as a feedstock if we want this to be massively scalable. We have to be flexible and robust to shift with feedstock changes, and for that we need improved understanding. Our approach to develop local, dynamic, and flexible alternatives is to learn what makes these wastes reactive, so we know how to optimize their use and do so as broadly as possible. We do that through predictive model development, using software we have developed in my group to automatically extract data from literature on over 5 million texts and patents on various topics. We link this to the creative capabilities of our IBM collaborators to design methods that predict the final impact of new cements. If we are successful, we can lower the emissions of this ubiquitous material and play our part in achieving carbon emissions mitigation goals.

    Other researchers involved with this project include Stefanie Jegelka, the X-Window Consortium Career Development Associate Professor in the MIT Department of Electrical Engineering and Computer Science; Richard Goodwin, IBM principal researcher; Soumya Ghosh, MIT-IBM Watson AI Lab research staff member; and Kristen Severson, former research staff member. Collaborators included Nghia Hoang, former research staff member with MIT-IBM Watson AI Lab and IBM Research; and Jeremy Gregory, research scientist in the MIT Department of Civil and Environmental Engineering and executive director of the MIT Concrete Sustainability Hub.

    This research is supported by the MIT-IBM Watson AI Lab.

  • Community policing in the Global South

    Community policing is meant to combat citizen mistrust of the police force. The concept was developed in the mid-20th century to help officers become part of the communities they are responsible for. The hope was that such presence would create a partnership between citizens and the police force, leading to reduced crime and increased trust. Studies in the 1990s from the United States, United Kingdom, and Australia showed that these goals can be achieved in certain circumstances. Many metropolitan areas in the Global North have since included community policing in their techniques.

    But a recently published study of six different sites in the Global South showed no significant positive effect associated with community policing across a range of countries.

    “We found no reduction in crime or insecurity in these communities, and no increase in trust in the police,” says Fotini Christia, an author of the paper, which was published in Science. Christia is the Ford International Professor in the Social Sciences at MIT and the director of the Sociotechnical Systems Research Center (SSRC) within the Institute for Data, Systems, and Society (IDSS). She was one of three on the steering committee for the research, which also included lead author Graeme Blair at the University of California at Los Angeles and Jeremy Weinstein at Stanford University. Fellow MIT political scientist Lily Tsai was also a co-author on the paper.

    In this study, randomized controlled trials of community policing initiatives were implemented at sites in Santa Catarina State, Brazil; Medellín, Colombia; Monrovia, Liberia; Sorsogon Province, Philippines; rural areas of Uganda; and two districts of Punjab Province, Pakistan. Each suite of interventions was developed based on the needs of the area but consisted of core elements of community policing such as officer recruitment and training, foot patrols, town hall meetings, and problem-oriented policing. The work was done by a collaboration of several social scientists in the United States and abroad. Major funding for this project was provided by the UK Foreign, Commonwealth and Development Office, awarded through the Evidence in Governance and Politics network.

    The null results were determined after interviewing 18,382 citizens and 874 police officers involved in the experiment over six years.

    The strength of these results lies in the size of the collaboration and the care taken in the research design. Input from researchers representing 22 different departments from universities around the world allowed for a broad diversity of study sites across the Global South. And the study was preregistered to establish a common approach to measurement and indicate exactly which effects the researchers were tracking, to avoid any chance of mining the data to find positive effects.

    “This is a pathbreaking study across a diverse set of sites that provides a new understanding about community policing outside of the Western world,” says Christopher Winship, the Diker-Tishman Professor of Sociology at Harvard University, who was not an author on the paper.

    Structural overhaul

    The reasons for the failure of community policing to elicit positive results were as varied as the sites themselves, but an important commonality was difficulties in implementation.

    “We saw three common problems: limited resources, a lack of prioritization of the reform, and rapid rotation of officers,” says Blair. “These challenges lead to weaker implementation of community policing than we’ve seen in ‘success stories’ in the U.S. and may explain why community policing didn’t deliver the same results in these Global South contexts.”

    Citizen attendance at community meetings was variable. And resources dedicated to following up on problems identified by citizens were scarce. Police officers in the countries represented in the study are often overstretched, leaving them unable to adequately follow up on their community policing duties.

    For example, Ugandan police stations averaged just one motorbike per entire station, and outposts averaged less than one. At the study sites in Pakistan, fewer than 25 percent of issues that arose in community meetings were followed up on. The police officers tried to push the problems through to other agencies that could assist, but those agencies were also underresourced.

    There was also significant officer turnover. “In many places, we started with and trained one group of officers and ended with a completely different set of folks,” says Christia.

    In the Philippines, only 25 percent of officers were still in the same post 11 months after the start of the study. Not only is it difficult to train new recruits in the methods of community policing with that rate of turnover, it also makes it extremely difficult to build community respect and familiarity with officers.

    Even in the Global North, the success of community policing can vary. As part of their study, the researchers conducted a review of 43 existing randomized trials conducted since the 1970s to determine the success rate of community policing endeavors already in place.

    They found that in these initiatives, problem-oriented policing reduces crime and likely improves perceptions of safety in a community, but there is mixed-to-negative evidence on the benefits of police presence on crime and perceptions of police. 

    That these initiatives struggle to achieve consistently positive results in countries with better resources indicates there is significant work to be done before success can be achieved in the Global South. Improvements in policing in the Global South may require major structural overhauls of the systems to ensure resource availability, encourage community engagement, and enhance officers’ abilities to follow up on issues of concern.

    “Issues of crime and violence are at the top of the policy agenda in the Global South, and this research demonstrates how universities and government partners can work together to identify the most effective strategies for improving people’s sense of safety,” says Weinstein. “While community policing strategies didn’t deliver the anticipated results on their own, the challenges in implementation point to the need for more systemic reforms that provide the necessary resources and align incentives for police to respond to citizens’ primary concerns.”

  • The reasons behind lithium-ion batteries’ rapid cost decline

    Lithium-ion batteries, those marvels of lightweight power that have made possible today’s age of handheld electronics and electric vehicles, have plunged in cost since their introduction three decades ago at a rate similar to the drop in solar panel prices, as documented by a study published last March. But what brought about such an astonishing cost decline, of about 97 percent?

    Some of the researchers behind that earlier study have now analyzed what accounted for the extraordinary savings. They found that by far the biggest factor was work on research and development, particularly in chemistry and materials science. This outweighed the gains achieved through economies of scale, though that turned out to be the second-largest category of reductions.

    The new findings are being published today in the journal Energy and Environmental Science, in a paper by MIT postdoc Micah Ziegler, recent graduate student Juhyun Song PhD ’19, and Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society.

    The findings could be useful for policymakers and planners to help guide spending priorities in order to continue the pathway toward ever-lower costs for this and other crucial energy storage technologies, according to Trancik. Their work suggests that there is still considerable room for further improvement in electrochemical battery technologies, she says.

    The analysis required digging through a variety of sources, since much of the relevant information consists of closely held proprietary business data. “The data collection effort was extensive,” Ziegler says. “We looked at academic articles, industry and government reports, press releases, and specification sheets. We even looked at some legal filings that came out. We had to piece together data from many different sources to get a sense of what was happening.” He says they collected “about 15,000 qualitative and quantitative data points, across 1,000 individual records from approximately 280 references.”

    Data from the earliest times are hardest to access and can have the greatest uncertainties, Trancik says, but by comparing different data sources from the same period they have attempted to account for these uncertainties.

    Overall, she says, “we estimate that the majority of the cost decline, more than 50 percent, came from research-and-development-related activities.” That included both private sector and government-funded research and development, and “the vast majority” of that cost decline within that R&D category came from chemistry and materials research.

    That was an interesting finding, she says, because “there were so many variables that people were working on through very different kinds of efforts,” including the design of the battery cells themselves, their manufacturing systems, supply chains, and so on. “The cost improvement emerged from a diverse set of efforts and many people, and not from the work of only a few individuals.”

    The findings about the importance of investment in R&D were especially significant, Ziegler says, because much of this investment happened after lithium-ion battery technology was commercialized, a stage at which some analysts thought the research contribution would become less significant. Over roughly a 20-year period starting five years after the batteries’ introduction in the early 1990s, he says, “most of the cost reduction still came from R&D. The R&D contribution didn’t end when commercialization began. In fact, it was still the biggest contributor to cost reduction.”

    The study took advantage of an analytical approach that Trancik and her team initially developed to analyze the similarly precipitous drop in costs of silicon solar panels over the last few decades. They also applied the approach to understand the rising costs of nuclear energy. “This is really getting at the fundamental mechanisms of technological change,” she says. “And we can also develop these models looking forward in time, which allows us to uncover the levers that people could use to improve the technology in the future.”

    One advantage of the methodology Trancik and her colleagues have developed, she says, is that it helps to sort out the relative importance of different factors when many variables are changing all at once, which typically happens as a technology improves. “It’s not simply adding up the cost effects of these variables,” she says, “because many of these variables affect many different cost components. There’s this kind of intricate web of dependencies.” But the team’s methodology makes it possible to “look at how that overall cost change can be attributed to those variables, by essentially mapping out that network of dependencies,” she says.
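
    To make the attribution idea concrete, here is a deliberately tiny, made-up example, not the study's model or numbers: a two-component cost expression in which several variables touch the cost through different components, and in which one-at-a-time contributions do not simply add up to the total change.

    ```python
    # Toy cost-attribution example (hypothetical numbers, not the study's data).
    # Cell cost ($/kWh) = materials price ($/kg) / energy density (kWh/kg)
    #                     + annual plant overhead ($) / annual output (kWh).
    def cell_cost(p_mat, e_dens, overhead, output):
        return p_mat / e_dens + overhead / output

    early = dict(p_mat=20.0, e_dens=0.10, overhead=5e7, output=1e6)
    late = dict(p_mat=15.0, e_dens=0.25, overhead=5e7, output=2e7)
    total_decline = cell_cost(**early) - cell_cost(**late)

    # Switch one variable at a time to its later value and record the effect.
    for name in ["p_mat", "e_dens", "output"]:
        shifted = dict(early, **{name: late[name]})
        effect = cell_cost(**early) - cell_cost(**shifted)
        print(f"{name:7s} alone: {effect:6.1f} $/kWh of {total_decline:.1f} total")

    # The one-at-a-time effects overshoot the total because the variables interact
    # through shared cost components, the "web of dependencies" described above.
    ```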

    This can help provide guidance on public spending, private investments, and other incentives. “What are all the things that different decision makers could do?” she asks. “What decisions do they have agency over so that they could improve the technology, which is important in the case of low-carbon technologies, where we’re looking for solutions to climate change and we have limited time and limited resources? The new approach allows us to potentially be a bit more intentional about where we make those investments of time and money.”

    “This paper collects data available in a systematic way to determine changes in the cost components of lithium-ion batteries between 1990-1995 and 2010-2015,” says Laura Diaz Anadon, a professor of climate change policy at Cambridge University, who was not connected to this research. “This period was an important one in the history of the technology, and understanding the evolution of cost components lays the groundwork for future work on mechanisms and could help inform research efforts in other types of batteries.”

    The research was supported by the Alfred P. Sloan Foundation, the Environmental Defense Fund, and the MIT Technology and Policy Program.

  • Design’s new frontier

    In the 1960s, the advent of computer-aided design (CAD) sparked a revolution in design. For his 1963 PhD thesis at MIT, Ivan Sutherland developed Sketchpad, a game-changing software program that enabled users to draw, move, and resize shapes on a computer. Over the course of the next few decades, CAD software reshaped how everything from consumer products to buildings and airplanes was designed.

    “CAD was part of the first wave in computing in design. The ability of researchers and practitioners to represent and model designs using computers was a major breakthrough and still is one of the biggest outcomes of design research, in my opinion,” says Maria Yang, Gail E. Kendall Professor and director of MIT’s Ideation Lab.

    Innovations in 3D printing during the 1980s and 1990s expanded CAD’s capabilities beyond traditional injection molding and casting methods, providing designers even more flexibility. Designers could sketch, ideate, and develop prototypes or models faster and more efficiently. Meanwhile, with the push of a button, software like that developed by Professor Emeritus David Gossard of MIT’s CAD Lab could solve equations simultaneously to produce a new geometry on the fly.

    In recent years, mechanical engineers have expanded the computing tools they use to ideate, design, and prototype. More sophisticated algorithms and the explosion of machine learning and artificial intelligence technologies have sparked a second revolution in design engineering.

    Researchers and faculty at MIT’s Department of Mechanical Engineering are utilizing these technologies to re-imagine how the products, systems, and infrastructures we use are designed. These researchers are at the forefront of the new frontier in design.

    Computational design

    Faez Ahmed wants to reinvent the wheel, or at least the bicycle wheel. He and his team at MIT’s Design Computation & Digital Engineering Lab (DeCoDE) use an artificial intelligence-driven design method that can generate entirely novel and improved designs for a range of products — including the traditional bicycle. They create advanced computational methods to blend human-driven design with simulation-based design.

    “The focus of our DeCoDE lab is computational design. We are looking at how we can create machine learning and AI algorithms to help us discover new designs that are optimized based on specific performance parameters,” says Ahmed, an assistant professor of mechanical engineering at MIT.

    For their work using AI-driven design for bicycles, Ahmed and his collaborator Professor Daniel Frey wanted to make it easier to design customizable bicycles, and by extension, encourage more people to use bicycles over transportation methods that emit greenhouse gases.

    To start, the group gathered a dataset of 4,500 bicycle designs. Using this dataset, they tested the limits of what machine learning could do. First, they developed algorithms to group similar-looking bicycles together and explore the design space. They then created machine learning models that could successfully predict which components are key to identifying a bicycle style, such as a road bike versus a mountain bike.
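    The clustering and style-prediction steps can be pictured with the short scikit-learn sketch below. The feature columns, labels, and data are invented placeholders standing in for the team’s 4,500-design dataset.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical parametric features per design (invented columns):
# [wheel_diameter_mm, handlebar_drop_mm, seat_tube_angle_deg, tire_width_mm]
X = rng.normal(loc=[660, 120, 73, 28], scale=[20, 40, 2, 8], size=(4500, 4))
styles = rng.choice(["road", "mountain", "city"], size=4500)  # placeholder labels

# Step 1: group similar-looking designs to explore the design space.
clusters = KMeans(n_clusters=8, random_state=0).fit_predict(X)

# Step 2: learn which components matter most for identifying a style.
clf = RandomForestClassifier(random_state=0).fit(X, styles)
print("cluster sizes:", np.bincount(clusters))
print("feature importances:", clf.feature_importances_)
```

    With real labeled designs, the feature importances indicate which components most strongly distinguish, say, a road bike from a mountain bike.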

    Once the algorithms were good enough at identifying bicycle designs and parts, the team proposed novel machine learning tools that could use this data to create a unique and creative design for a bicycle based on certain performance parameters and rider dimensions.

    Ahmed used a generative adversarial network, or GAN, as the basis of this model. GAN models utilize neural networks that can create new designs based on vast amounts of data. However, using GAN models alone would result in homogeneous designs that lack novelty and can’t be assessed in terms of performance. To address these issues in design problems, Ahmed has developed a new method he calls “PaDGAN,” a performance-augmented diverse GAN.
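    The published PaDGAN work augments the standard GAN objective with a quality-weighted diversity term built on determinantal point processes. The PyTorch snippet below is a minimal sketch of that general idea, not the authors’ implementation; the kernel, weighting, and performance predictor are stand-ins.

```python
import torch

def quality_diversity_penalty(designs, quality, sigma=1.0, eps=1e-6):
    """DPP-style penalty that is small when a batch of generated designs is
    both diverse and predicted to perform well. designs: (B, D); quality: (B,)."""
    sq_dists = torch.cdist(designs, designs) ** 2
    similarity = torch.exp(-sq_dists / (2 * sigma ** 2))   # RBF similarity kernel
    q = quality.clamp_min(eps)
    kernel = q.unsqueeze(1) * similarity * q.unsqueeze(0)  # quality-weighted kernel
    identity = torch.eye(designs.shape[0], device=designs.device)
    return -torch.logdet(kernel + eps * identity)          # reward a large log-determinant

# Hypothetical use inside a generator update:
#   fake = generator(noise)
#   g_loss = adversarial_loss(discriminator(fake)) \
#            + gamma * quality_diversity_penalty(fake, performance_model(fake))
demo = quality_diversity_penalty(torch.randn(8, 4), torch.rand(8))
print(demo)
```

    Minimizing the penalty pushes the generator toward batches that are spread out in design space while still scoring well on the chosen performance metric.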

    “When we apply this type of model, what we see is that we can get large improvements in the diversity, quality, as well as novelty of the designs,” Ahmed explains.

    Using this approach, Ahmed’s team developed an open-source computational design tool for bicycles freely available on their lab website. They hope to further develop a set of generalizable tools that can be used across industries and products.

    Longer term, Ahmed has his sights set on loftier goals. He hopes the computational design tools he develops could lead to “design democratization,” putting more power in the hands of the end user.

    “With these algorithms, you can have more individualization where the algorithm assists a customer in understanding their needs and helps them create a product that satisfies their exact requirements,” he adds.

    Using algorithms to democratize the design process is a goal shared by Stefanie Mueller, an associate professor in electrical engineering and computer science and mechanical engineering.

    Personal fabrication

    Platforms like Instagram give users the freedom to instantly edit their photographs or videos using filters. In one click, users can alter the palette, tone, and brightness of their content by applying filters that range from bold colors to sepia-toned or black-and-white. Mueller, X-Window Consortium Career Development Professor, wants to bring this concept of the Instagram filter to the physical world.

    “We want to explore how digital capabilities can be applied to tangible objects. Our goal is to bring reprogrammable appearance to the physical world,” explains Mueller, director of the HCI Engineering Group based out of MIT’s Computer Science and Artificial Intelligence Laboratory.

    Mueller’s team utilizes a combination of smart materials, optics, and computation to advance personal fabrication technologies that would allow end users to alter the design and appearance of the products they own. They tested this concept in a project they dubbed “Photo-Chromeleon.”

    First, a mix of photochromic cyan, magenta, and yellow dyes is airbrushed onto an object — in this instance, a 3D sculpture of a chameleon. Using software they developed, the team sketches the exact color pattern they want to achieve on the object itself. An ultraviolet light shines on the object to activate the dyes.

    To actually create the physical pattern on the object, Mueller has developed an optimization algorithm to use alongside a normal office projector outfitted with red, green, and blue LED lights. These lights shine on specific pixels on the object for a given period of time to physically change the makeup of the photochromic pigments.

    “This fancy algorithm tells us exactly how long we have to shine the red, green, and blue light on every single pixel of an object to get the exact pattern we’ve programmed in our software,” says Mueller.
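    A toy version of that per-pixel computation, under an invented linear desaturation model (the real system is calibrated against measured dye response curves), might look like this:

```python
import numpy as np
from scipy.optimize import nnls

# Invented response matrix: how much one second of red, green, or blue light
# reduces the saturation of the cyan, magenta, and yellow photochromic dyes.
RESPONSE = np.array([
    [0.90, 0.10, 0.05],   # cyan dye reacts mostly to red light
    [0.10, 0.80, 0.10],   # magenta dye reacts mostly to green light
    [0.05, 0.10, 0.85],   # yellow dye reacts mostly to blue light
])

def exposure_times(current_cmy, target_cmy):
    """Nonnegative R, G, B exposure times (seconds) that move one pixel's dye
    saturations from current toward target, assuming RESPONSE @ times = desaturation."""
    desaturation = np.clip(np.asarray(current_cmy) - np.asarray(target_cmy), 0, None)
    times, _residual = nnls(RESPONSE, desaturation)
    return times

print(exposure_times([1.0, 1.0, 1.0], [0.2, 0.7, 1.0]))  # mostly red, a little green
```

    Repeating a calculation like this for every pixel yields the exposure schedule that the projector then plays onto the object.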

    Giving this freedom to the end user enables limitless possibilities. Mueller’s team has applied this technology to iPhone cases, shoes, and even cars. In the case of shoes, Mueller envisions a shoebox embedded with UV and LED light projectors. Users could put their shoes in the box overnight and the next day have a pair of shoes in a completely new pattern.

    Mueller wants to expand her personal fabrication methods to the clothes we wear. Rather than utilize the light projection technique developed in the Photo-Chromeleon project, her team is exploring the possibility of weaving LEDs directly into clothing fibers, allowing people to change their shirt’s appearance as they wear it. These personal fabrication technologies could completely alter consumer habits.

    “It’s very interesting for me to think about how these computational techniques will change product design on a high level,” adds Mueller. “In the future, a consumer could buy a blank iPhone case and update the design on a weekly or daily basis.”

    Computational fluid dynamics and participatory design

    Another team of mechanical engineers, including Sili Deng, the Brit (1961) & Alex (1949) d’Arbeloff Career Development Professor, is developing a different kind of design tool that could have a large impact on individuals in low- and middle-income countries across the world.

    As Deng walked down the hallway of Building 1 on MIT’s campus, a monitor playing a video caught her eye. The video featured work done by mechanical engineers and MIT D-Lab on developing cleaner burning briquettes for cookstoves in Uganda. Deng immediately knew she wanted to get involved.

    “As a combustion scientist, I’ve always wanted to work on such a tangible real-world problem, but the field of combustion tends to focus more heavily on the academic side of things,” explains Deng.

    After reaching out to colleagues in MIT D-Lab, Deng joined a collaborative effort to develop a new cookstove design tool for the 3 billion people across the world who burn solid fuels to cook and heat their homes. These stoves often emit soot and carbon monoxide, contributing not only to millions of deaths each year but also to the world’s greenhouse gas emissions.

    The team is taking a three-pronged approach to developing this solution, using a combination of participatory design, physical modeling, and experimental validation to create a tool that will lead to the production of high-performing, low-cost energy products.

    Deng and her team in the Deng Energy and Nanotechnology Group use physics-based modeling for the combustion and emission process in cookstoves.

    “My team is focused on computational fluid dynamics. We use computational and numerical studies to understand the flow field where the fuel is burned and releases heat,” says Deng.

    These flow mechanics are crucial to understanding how to minimize heat loss and make cookstoves more efficient, as well as learning how dangerous pollutants are formed and released in the process.

    Using computational methods, Deng’s team performs three-dimensional simulations of the complex chemistry and transport coupling at play in the combustion and emission processes. They then use these simulations to build a combustion model for how fuel is burned and a pollution model that predicts carbon monoxide emissions.
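    As a cartoon of how a chemistry model yields a carbon monoxide prediction, the toy below integrates a well-stirred, two-step surrogate (fuel burns to CO, CO oxidizes to CO2). The rate constants are invented, and it stands in for, rather than resembles, the team’s transport-coupled 3D simulations.

```python
import numpy as np
from scipy.integrate import solve_ivp

K1, K2 = 5.0, 1.5   # invented rate constants (1/s): fuel -> CO, CO -> CO2

def two_step(t, y):
    fuel, co, co2 = y
    r1 = K1 * fuel            # fuel oxidation producing CO
    r2 = K2 * co              # CO burnout to CO2
    return [-r1, r1 - r2, r2]

sol = solve_ivp(two_step, (0.0, 2.0), [1.0, 0.0, 0.0], dense_output=True)
for t in np.linspace(0.0, 2.0, 5):
    fuel, co, co2 = sol.sol(t)
    print(f"t={t:.1f} s   fuel={fuel:.3f}   CO={co:.3f}   CO2={co2:.3f}")
```

    Even in this stripped-down form, the model shows the trade-off stove designs must manage: CO accumulates whenever the burnout step is slow relative to fuel consumption.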

    Deng’s models are used by a group led by Daniel Sweeney in MIT D-Lab to carry out experimental validation on stove prototypes. Finally, Professor Maria Yang uses participatory design methods to integrate user feedback, ensuring the design tool can actually be used by people across the world.

    The end goal for this collaborative team is to not only provide local manufacturers with a prototype they could produce themselves, but to also provide them with a tool that can tweak the design based on local needs and available materials.

    Deng sees wide-ranging applications for the computational fluid dynamics her team is developing.

    “We see an opportunity to use physics-based modeling, augmented with a machine learning approach, to come up with chemical models for practical fuels that help us better understand combustion. Therefore, we can design new methods to minimize carbon emissions,” she adds.

    While Deng is utilizing simulations and machine learning at the molecular level to improve designs, others are taking a more macro approach.

    Designing intelligent systems

    When it comes to intelligent design, Navid Azizan thinks big. He hopes to help create future intelligent systems that are capable of making decisions autonomously by using the enormous amounts of data emerging from the physical world. From smart robots and autonomous vehicles to smart power grids and smart cities, Azizan focuses on the analysis, design, and control of intelligent systems.

    Achieving such massive feats takes a truly interdisciplinary approach that draws upon various fields such as machine learning, dynamical systems, control, optimization, statistics, and network science, among others.

    “Developing intelligent systems is a multifaceted problem, and it really requires a confluence of disciplines,” says Azizan, assistant professor of mechanical engineering with a dual appointment in MIT’s Institute for Data, Systems, and Society (IDSS). “To create such systems, we need to go beyond standard approaches to machine learning, such as those commonly used in computer vision, and devise algorithms that can enable safe, efficient, real-time decision-making for physical systems.”

    For robot control to work in the complex dynamic environments that arise in the real world, real-time adaptation is key. If, for example, an autonomous vehicle is going to drive in icy conditions or a drone is operating in windy conditions, they need to be able to adapt to their new environment quickly.

    To address this challenge, Azizan and his collaborators at MIT and Stanford University have developed a new algorithm that combines adaptive control, a powerful methodology from control theory, with meta-learning, a new machine learning paradigm.

    “This ‘control-oriented’ learning approach outperforms the existing ‘regression-oriented’ methods, which are mostly focused on just fitting the data, by a wide margin,” says Azizan.
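    In spirit only (the feature map, gains, and update law below are generic stand-ins, not the authors’ published algorithm), pairing adaptive control with learned features might look like this: a meta-learned feature map models the unknown disturbance, and its last-layer weights are adapted online from the tracking error.

```python
import numpy as np

def features(x):
    """Stand-in for a meta-learned feature map (fixed random Fourier-style features)."""
    W = np.array([[1.3, -0.7], [0.4, 2.1], [-1.8, 0.9]])
    return np.sin(W @ x)

def true_disturbance(x):      # unknown to the controller, e.g., wind or ice
    return np.array([0.5 * np.sin(x[0]), -0.3 * x[1]])

K = 4.0 * np.eye(2)           # feedback gain
GAMMA = 2.0                   # adaptation gain
theta_hat = np.zeros((2, 3))  # adapted last-layer weights
x, dt = np.zeros(2), 0.01

for step in range(2000):
    x_ref = np.array([np.sin(0.01 * step), 0.0])          # slowly varying reference
    e = x - x_ref
    u = -K @ e - theta_hat @ features(x)                   # certainty-equivalent control
    theta_hat += dt * GAMMA * np.outer(e, features(x))     # online adaptation law
    x += dt * (u + true_disturbance(x))                    # toy single-integrator plant

print(f"final tracking error: {np.linalg.norm(e):.3f}")
```

    The “control-oriented” point is that the features and the adaptation are judged by how small they keep the tracking error, not by how well they fit a dataset of disturbances offline.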

    Another critical aspect of deploying machine learning algorithms in physical systems that Azizan and his team hope to address is safety. Deep neural networks are a crucial part of autonomous systems. They are used for interpreting complex visual inputs and making data-driven predictions of future behavior in real time. However, Azizan urges caution.

    “These deep neural networks are only as good as their training data, and their predictions can often be untrustworthy in scenarios not covered by their training data,” he says. Making decisions based on such untrustworthy predictions could lead to fatal accidents in autonomous vehicles or other safety-critical systems.

    To avoid these potentially catastrophic events, Azizan proposes that it is imperative to equip neural networks with a measure of their uncertainty. When the uncertainty is high, they can then be switched to a “safe policy.”

    In pursuit of this goal, Azizan and his collaborators have developed a new algorithm known as SCOD — Sketching Curvature for Out-of-Distribution Detection. This framework could be embedded within any deep neural network to equip it with a measure of its uncertainty.

    “This algorithm is model-agnostic and can be applied to neural networks used in various kinds of autonomous systems, whether it’s drones, vehicles, or robots,” says Azizan.
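    Downstream, an uncertainty measure like this is used as a gate: query the network, and if the estimated uncertainty crosses a threshold, fall back to a conservative action. The sketch below uses a simple ensemble-disagreement estimator as a stand-in for SCOD; all names are hypothetical.

```python
import numpy as np

class UncertaintyGatedPolicy:
    """Wraps a learned predictor with an uncertainty estimate and a safe fallback."""

    def __init__(self, models, safe_action, threshold=0.2):
        self.models = models            # ensemble standing in for a SCOD-style estimator
        self.safe_action = safe_action  # conservative behavior, e.g., slow down or hover
        self.threshold = threshold

    def act(self, observation):
        predictions = np.stack([m(observation) for m in self.models])
        uncertainty = predictions.std(axis=0).max()   # disagreement as an uncertainty proxy
        if uncertainty > self.threshold:
            return self.safe_action(observation)      # switch to the "safe policy"
        return predictions.mean(axis=0)

# Toy usage with made-up "models":
policy = UncertaintyGatedPolicy(
    models=[lambda obs, w=w: obs * w for w in (0.9, 1.0, 1.1)],
    safe_action=lambda obs: np.zeros_like(obs),
)
print(policy.act(np.array([0.5, -0.2])))
```

    The gate itself is agnostic to how the uncertainty is computed, which is why a model-agnostic estimator can slot into drones, vehicles, or robots alike.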

    Azizan hopes to continue working on algorithms for even larger-scale systems. He and his team are designing efficient algorithms to better control supply and demand in smart energy grids. According to Azizan, even if we create the most efficient solar panels and batteries, we can never achieve a sustainable grid powered by renewable resources without the right control mechanisms.

    Mechanical engineers like Ahmed, Mueller, Deng, and Azizan are key to realizing the next revolution of computing in design.

    “MechE is in a unique position at the intersection of the computational and physical worlds,” Azizan says. “Mechanical engineers build a bridge between theoretical, algorithmic tools and real, physical world applications.”

    Sophisticated computational tools, coupled with the ground truth mechanical engineers have in the physical world, could unlock limitless possibilities for design engineering, well beyond what could have been imagined in those early days of CAD.