More stories

  • Q&A: Global challenges surrounding the deployment of AI

    The AI Policy Forum (AIPF) is an initiative of the MIT Schwarzman College of Computing to move the global conversation about the impact of artificial intelligence from principles to practical policy implementation. Formed in late 2020, AIPF brings together leaders in government, business, and academia to develop approaches to address the societal challenges posed by the rapid advances and increasing applicability of AI.

    The co-chairs of the AI Policy Forum are Aleksander Madry, the Cadence Design Systems Professor; Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science; and Luis Videgaray, senior lecturer at MIT Sloan School of Management and director of MIT AI Policy for the World Project. Here, they discuss some of the key issues facing the AI policy landscape today and the challenges surrounding the deployment of AI. The three are co-organizers of the upcoming AI Policy Forum Summit on Sept. 28, which will further explore the issues discussed here.

    Q: Can you talk about the ongoing work of the AI Policy Forum and the AI policy landscape generally?

    Ozdaglar: There is no shortage of discussion about AI at different venues, but conversations are often high-level, focused on questions of ethics and principles, or on policy problems alone. The approach the AIPF takes to its work is to target specific questions with actionable policy solutions and engage with the stakeholders working directly in these areas. We work “behind the scenes” with smaller focus groups to tackle these challenges and aim to bring visibility to some potential solutions alongside the players working directly on them through larger gatherings.

    Q: AI impacts many sectors, which makes us naturally worry about its trustworthiness. Are there any emerging best practices for development and deployment of trustworthy AI?

    Madry: The most important thing to understand regarding deploying trustworthy AI is that AI technology isn’t some natural, preordained phenomenon. It is something built by people, people who are making certain design decisions.

    We thus need to advance research that can guide these decisions as well as provide more desirable solutions. But we also need to be deliberate and think carefully about the incentives that drive these decisions. 

    Now, these incentives stem largely from business considerations, but not exclusively so. That is, we should also recognize that proper laws and regulations, as well as thoughtful industry standards, have a big role to play here too.

    Indeed, governments can put in place rules that prioritize the value of deploying AI while being keenly aware of the corresponding downsides, pitfalls, and impossibilities. The design of such rules will be an ongoing and evolving process as the technology continues to improve and change, and we need to adapt to socio-political realities as well.

    Q: Perhaps one of the most rapidly evolving domains in AI deployment is in the financial sector. From a policy perspective, how should governments, regulators, and lawmakers make AI work best for consumers in finance?

    Videgaray: The financial sector is seeing a number of trends that present policy challenges at the intersection of AI systems. For one, there is the issue of explainability. By law (in the U.S. and in many other countries), lenders need to provide explanations to customers when they take actions that are in any way deleterious to a customer’s interest, such as denying a loan. However, as financial services increasingly rely on automated systems and machine learning models, the capacity of banks to unpack the “black box” of machine learning to provide that level of mandated explanation becomes tenuous. So how should the finance industry and its regulators adapt to this advance in technology? Perhaps we need new standards and expectations, as well as tools to meet these legal requirements.

    Meanwhile, economies of scale and data network effects are leading to a proliferation of AI outsourcing, and more broadly, AI-as-a-service is becoming increasingly common in the finance industry. In particular, we are seeing fintech companies provide the tools for underwriting to other financial institutions — be it large banks or small, local credit unions. What does this segmentation of the supply chain mean for the industry? Who is accountable for the potential problems in AI systems deployed through several layers of outsourcing? How can regulators adapt to guarantee their mandates of financial stability, fairness, and other societal standards?

    Q: Social media is one of the most controversial sectors of the economy, resulting in many societal shifts and disruptions around the world. What policies or reforms might be needed to best ensure social media is a force for public good and not public harm?

    Ozdaglar: The role of social media in society is of growing concern to many, but the nature of these concerns can vary quite a bit — with some seeing social media as not doing enough to prevent, for example, misinformation and extremism, and others seeing it as unduly silencing certain viewpoints. This lack of a unified view on what the problem is impacts the capacity to enact any change. All of that is additionally coupled with the complexities of the legal framework in the U.S. spanning the First Amendment, Section 230 of the Communications Decency Act, and trade laws.

    However, these difficulties in regulating social media do not mean that there is nothing to be done. Indeed, regulators have begun to tighten their control over social media companies, both in the United States and abroad, be it through antitrust procedures or other means. In particular, Ofcom in the U.K. and the European Union are already introducing new layers of oversight to platforms. Additionally, some have proposed taxes on online advertising to address the negative externalities caused by the current social media business model. So, the policy tools are there, if the political will and proper guidance exist to implement them.

  • Companies use MIT research to identify and respond to supply chain risks

    In February 2020, MIT professor David Simchi-Levi predicted the future. In an article in Harvard Business Review, he and his colleague warned that the new coronavirus outbreak would throttle supply chains and shutter tens of thousands of businesses across North America and Europe by mid-March.

    For Simchi-Levi, who had developed new models of supply chain resiliency and advised major companies on how best to shield themselves from supply chain woes, the signs of disruption were plain to see. Two years later, the professor of engineering systems at the MIT Schwarzman College of Computing and the Department of Civil and Environmental Engineering, and director of the MIT Data Science Lab, has found a “flood of interest” from companies anxious to apply his Risk Exposure Index (REI) research to identify and respond to hidden risks in their own supply chains.

    His work on “stress tests” for critical supply chains and ways to guide global supply chain recovery were included in the 2022 Economic Report of the President presented to the U.S. Congress in April.

    It is rare that data science research can influence policy at the highest levels, Simchi-Levi says, but his models address something that business needs now: a way to plan for a world of continuing global crisis without relying on historical precedent.

    “What the last two years showed is that you cannot plan just based on what happened last year or the last two years,” Simchi-Levi says.

    He recalled the famous quote, sometimes attributed to hockey great Wayne Gretzky, that good players don’t skate to where the puck is, but where the puck is going to be. “We are not focusing on the state of the supply chain right now, but what may happen six weeks from now, eight weeks from now, to prepare ourselves today to prevent the problems of the future.”

    Finding hidden risks

    At the heart of REI is a mathematical model of the supply chain that focuses on potential failures at different supply chain nodes — a flood at a supplier’s factory, or a shortage of raw materials at another factory, for instance. By calculating variables such as “time-to-recover” (TTR), which measures how long it will take a particular node to be back at full function, and time-to-survive (TTS), which identifies the maximum duration that the supply chain can match supply with demand after a disruption, the model focuses on the impact of disruption on the supply chain, rather than the cause of disruption.
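
    To make this concrete, the comparison can be sketched in a few lines of code. The following is a minimal illustration of the TTR/TTS logic described above, with hypothetical node data and field names; it is a sketch of the idea, not Simchi-Levi’s actual implementation:

    ```python
    # A sketch of REI-style risk screening (hypothetical data; illustrative only).
    from dataclasses import dataclass

    @dataclass
    class Node:
        name: str
        ttr_weeks: float     # time-to-recover: weeks until the node is back at full function
        tts_weeks: float     # time-to-survive: weeks supply can match demand after a disruption
        annual_spend: float  # yearly spend at this supplier, in dollars

    def exposure(node: Node) -> float:
        """Weeks of unmet demand if this node fails: recovery time minus survival buffer."""
        return max(0.0, node.ttr_weeks - node.tts_weeks)

    nodes = [
        Node("strategic_supplier", ttr_weeks=4, tts_weeks=8, annual_spend=5e8),
        Node("ten_cent_component", ttr_weeks=10, tts_weeks=2, annual_spend=1e5),
    ]

    # Rank nodes by the impact of a disruption, not by spend or cause.
    for n in sorted(nodes, key=exposure, reverse=True):
        print(f"{n.name}: exposed for {exposure(n):.0f} weeks")
    ```

    Ranking on impact rather than cause or supplier size is what lets the approach surface risk in places a spend-based review would overlook, as the Ford analysis below illustrates.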

    Even before the pandemic, catastrophic events such as the 2010 Iceland volcanic eruption and the 2011 Tohoku earthquake and tsunami in Japan were threatening these nodes. “For many years, companies from a variety of industries focused mostly on efficiency, cutting costs as much as possible, using strategies like outsourcing and offshoring,” Simchi-Levi says. “They were very successful doing this, but it has dramatically increased their exposure to risk.”

    Using their model, Simchi-Levi and colleagues began working with Ford Motor Company in 2013 to improve the company’s supply chain resiliency. The partnership uncovered some surprising hidden risks.

    To begin with, the researchers found that Ford’s “strategic suppliers” — the nodes of the supply chain where the company spent large amounts of money each year — had only moderate exposure to risk. Instead, the biggest risk “tended to come from tiny suppliers that provide Ford with components that cost about 10 cents,” says Simchi-Levi.

    The analysis also found that risky suppliers are everywhere across the globe. “There is this idea that if you just move suppliers closer to market, to demand, to North America or to Mexico, you increase the resiliency of your supply chain. That is not supported by our data,” he says.

    Rewards of resiliency

    By creating a virtual representation, or “digital twin,” of the Ford supply chain, the researchers were able to test out strategies at each node to see what would increase supply chain resiliency. Should the company invest in more warehouses to store a key component? Should it shift production of a component to another factory?
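
    As a toy illustration of that what-if loop (hypothetical numbers, not Ford’s actual model), a digital twin lets analysts raise a node’s time-to-survive, for example by pre-positioning inventory, and immediately recompute its exposure:

    ```python
    # Toy what-if analysis on a single node of a digital twin (illustrative numbers).
    def exposure(ttr_weeks: float, tts_weeks: float) -> float:
        """Weeks of unmet demand if the node fails."""
        return max(0.0, ttr_weeks - tts_weeks)

    ttr, tts = 10.0, 2.0  # a fragile, low-cost supplier
    for buffer_weeks in (0, 4, 8):  # candidate inventory buffers
        print(f"+{buffer_weeks} weeks of stock -> exposed for {exposure(ttr, tts + buffer_weeks):.0f} weeks")
    ```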

    Companies are sometimes reluctant to invest in supply chain resiliency, Simchi-Levi says, but the analysis isn’t just about risk. “It’s also going to help you identify savings opportunities. The company may be building a lot of misplaced, costly inventory, for instance, and our method helps them to identify these inefficiencies and cut costs.”

    Since working with Ford, Simchi-Levi and colleagues have collaborated with many other companies, including a partnership with Accenture, to scale the REI technology to a variety of industries including high-tech, industrial equipment, home improvement retailers, fashion retailers, and consumer packaged goods.

    Annette Clayton, the CEO of Schneider Electric North America and previously its chief supply chain officer, has worked with Simchi-Levi for 17 years. “When I first went to work for Schneider, I asked David and his team to help us look at resiliency and inventory positioning in order to make the best cost, delivery, flexibility, and speed trade-offs for the North American supply chain,” she says. “As the pandemic unfolded, the very learnings in supply chain resiliency we had worked on before became even more important and we partnered with David and his team again.”

    “We have used TTR and TTS to determine places where we need to develop and duplicate supplier capability, from raw materials to assembled parts. We increased inventories where our time-to-recover, because of extended logistics times, exceeded our time-to-survive,” Clayton adds. “We have used TTR and TTS to prioritize our workload in supplier development, procurement, and expanding our own manufacturing capacity.”

    The REI approach can even be applied to an entire country’s economy, as the U.N. Office for Disaster Risk Reduction has done for developing countries such as Thailand in the wake of disastrous flooding in 2011.

    Simchi-Levi and colleagues have been motivated by the pandemic to enhance the REI model with new features. “Because we have started collaborating with more companies, we have realized some interesting, company-specific business constraints,” he says, which are leading to more efficient ways of calculating hidden risk.

  • End-to-end supply chain transparency

    For years, companies have managed their extended supply chains with intermittent audits and certifications while attempting to persuade their suppliers to adhere to certain standards and codes of conduct. But they’ve lacked the concrete data necessary to prove their supply chains were working as they should. They most likely had baseline data about their suppliers — what they bought and who they bought it from — but knew little else about the rest of the supply chain.

    With Sourcemap, companies can now trace their supply chains from raw material to finished good with certainty, keeping track of the mines and farms that produce the commodities they rely on to take their goods to market. This unprecedented level of transparency provides Sourcemap’s customers with the assurance that the entire end-to-end supply chain operates within their standards while living up to social and environmental targets.

    And they’re doing it at scale for large multinationals across the food, agricultural, automotive, tech, and apparel industries. Thanks to Sourcemap founder and CEO Leonardo Bonanni MA ’03, SM ’05, PhD ’10, companies like VF Corporation (owner of brands such as Timberland and The North Face), Mars, Hershey, and Ferrero now have enough data to confidently tell the story of how they’re sourcing their raw materials.

    “Coming from the Media Lab, we recognized early on the power of the cloud, the power of social networking-type databases and smartphone diffusion around the world,” says Bonanni of his company’s MIT roots. Rather than providing intermittent glances at the supply chain via an auditor, Sourcemap collects data continuously, in real time, every step of the way, flagging anything that could indicate counterfeiting, adulteration, fraud, waste, or abuse.

    “We’ve taken our customers from a situation where they had very little control to a world where they have direct visibility over their entire global operations, even allowing them to see ahead of time — before a container reaches the port — whether there is any indication that there might be something wrong with it,” says Bonanni.

    The key problem Sourcemap addresses is a lack of data in companies’ supply chain management databases. According to Bonanni, most Sourcemap customers have invested millions of dollars in enterprise resource planning (ERP) databases, which provide information about internal operations and direct suppliers, but fall short when it comes to global operations, where their secondary and tertiary suppliers operate. Built on relational databases, ERP systems have been around for more than 40 years and work well for simple, static data structures. But they aren’t agile enough to handle big data and rapidly evolving, complex data structures.

    Sourcemap, on the other hand, uses NoSQL (non-relational) database technology, which is more flexible, cost-efficient, and scalable. “Our platform is like a LinkedIn for the supply chain,” explains Bonanni. Customers provide information about where they buy their raw materials, the suppliers get invited to the network and provide information to validate those relationships, right down to the farms and the mines where the raw materials are extracted — which is often where the biggest risks lie.
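
    A minimal sketch of that network idea, using plain Python dictionaries as stand-ins for NoSQL documents (the schema and supplier names are hypothetical; Sourcemap’s actual data model is not public), might look like this:

    ```python
    # Hypothetical multi-tier supplier graph, traced down to raw-material origins.
    suppliers = {
        "brand":       {"tier": 0, "buys_from": ["processor_a"]},
        "processor_a": {"tier": 1, "buys_from": ["coop_gh", "coop_ci"]},
        "coop_gh":     {"tier": 2, "buys_from": ["farm_17"]},
        "coop_ci":     {"tier": 2, "buys_from": []},  # an origin: no upstream suppliers
        "farm_17":     {"tier": 3, "buys_from": []},  # raw-material origin
    }

    def trace(node: str, path=()):
        """Walk purchase relationships upstream, yielding every path to an origin."""
        path = path + (node,)
        children = suppliers[node]["buys_from"]
        if not children:
            yield path
            return
        for child in children:
            yield from trace(child, path)

    for route in trace("brand"):
        print(" -> ".join(route))
    ```

    Each supplier validates its own purchase relationships, so the map extends tier by tier until every path bottoms out at a farm or mine.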

    Initially, the entire supply chain database of a Sourcemap customer might amount to a few megabytes of spreadsheets listing their purchase orders and the names of their suppliers. Sourcemap delivers terabytes of data that paint a detailed picture of the supply chain, capturing everything from the moment a farmer in West Africa delivers cocoa beans to a warehouse, through the truck to the port and the factory, all the way to the finished goods.

    “We’ve seen the amount of data collected grow by a factor of 1 million, which tells us that the world is finally ready for full visibility of supply chains,” says Bonanni. “The fact is that we’ve seen supply chain transparency go from a fringe concern to a broad-based requirement as a license to operate in most of Europe and North America.”

    These days, disruptions in supply chains, combined with price volatility and new laws requiring companies to prove that the goods they import were not made illegally (such as by causing deforestation or involving forced or child labor), mean that companies are often required to know where they source their raw materials from, even if they only import the materials through an intermediary.

    Sourcemap uses its full suite of tools to walk customers through a step-by-step process that maps their suppliers while measuring performance, ultimately verifying the entire supply chain and providing them with the confidence to import goods while being customs-compliant. At the end of the day, Sourcemap customers can communicate to their stakeholders and the end consumer exactly where their commodities come from while ensuring that social, environmental, and compliance standards are met.

    The company was recently named to the newest cohort of firms honored by the MIT Startup Exchange (STEX) as STEX25 startups. Bonanni is quick to point out the benefits of STEX and of MIT’s Industrial Liaison Program (ILP): “Our best feedback and our most constructive relationships have been with companies that sponsored our research early on at the Media Lab and ILP,” he says. “The innovative exchange of ideas inherent in the MIT startup ecosystem has helped to build up Sourcemap as a company and to grow supply chain transparency as a future-facing technology that more and more companies are now scrambling to adopt.”

  • Last-mile routing research challenge awards $175,000 to three winning teams

    Routing is one of the most studied problems in operations research; even small improvements in routing efficiency can save companies money and result in energy savings and reduced environmental impacts. Now, three teams of researchers from universities around the world have received prize money totaling $175,000 for their innovative route optimization models.

    The three teams were the winners of the Amazon Last-Mile Routing Research Challenge, through which the MIT Center for Transportation & Logistics (MIT CTL) and Amazon engaged with a global community of researchers across a range of disciplines, from computer science to business operations to supply chain management, challenging them to build data-driven route optimization models leveraging massive historical route execution data.

    First announced in February, the research challenge attracted more than 2,000 participants from around the world. Two hundred twenty-nine researcher teams formed during the spring to independently develop solutions that incorporated driver know-how into route optimization models with the intent that they would outperform traditional optimization approaches. Out of the 48 teams whose models qualified for the final round of the challenge, three teams’ work stood out above the rest. Amazon provided real operational training data for the models and evaluated submissions, with technical support from MIT CTL scientists.

    In real life, drivers frequently deviate from planned and mathematically optimized route sequences. Drivers carry information about which roads are hard to navigate when traffic is bad, when and where they can easily find parking, which stops can be conveniently served together, and many other factors that existing optimization models simply don’t capture.

    Each model addressed the challenge data in a unique way. The methodological approaches chosen by the participants frequently combined traditional exact and heuristic optimization approaches with nontraditional machine learning methods. On the machine learning side, the most commonly adopted methods were different variants of artificial neural networks, as well as inverse reinforcement learning approaches.
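
    As a toy illustration of that hybrid approach (invented distances and scores; not a reconstruction of any winning model), a classic greedy heuristic can be biased by transition scores that a model might learn from historical driver sequences:

    ```python
    # Greedy routing with a learned "driver preference" adjustment (toy data).
    distances = {  # symmetric travel times between stops, in minutes
        ("depot", "a"): 10, ("depot", "b"): 12, ("depot", "c"): 15,
        ("a", "b"): 3, ("a", "c"): 8, ("b", "c"): 4,
    }
    learned_bonus = {("a", "b"): 2.5}  # drivers historically serve b right after a

    def cost(u: str, v: str) -> float:
        d = distances.get((u, v), distances.get((v, u)))
        bonus = learned_bonus.get((u, v), 0) + learned_bonus.get((v, u), 0)
        return d - bonus  # preferred transitions look "cheaper" to the heuristic

    def greedy_route(start: str, stops: list[str]) -> list[str]:
        route, todo = [start], set(stops)
        while todo:
            nxt = min(todo, key=lambda s: cost(route[-1], s))
            route.append(nxt)
            todo.remove(nxt)
        return route

    print(greedy_route("depot", ["a", "b", "c"]))  # -> ['depot', 'a', 'b', 'c']
    ```

    In the actual entries, the adjustment came from trained models such as neural networks or inverse reinforcement learning rather than a hand-set table, but the principle of folding driver know-how into an optimization objective is the same.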

    There were 45 submissions that reached the finalist phase, with team members hailing from 29 countries. Entrants spanned all levels of higher education from final-year undergraduate students to retired faculty. Entries were assessed in a double-blind review process so that the judges would not know what team was attached to each entry.

    The third-place prize of $25,000 was awarded to Okan Arslan and Rasit Abay. Okan is a professor at HEC Montréal, and Rasit is a doctoral student at the University of New South Wales in Australia. The runner-up prize of $50,000 was awarded to MIT’s own Xiaotong Guo, Qingyi Wang, and Baichuan Mo, all doctoral students. The top prize of $100,000 was awarded to Professor William Cook of the University of Waterloo in Canada, Professor Stephan Held of the University of Bonn in Germany, and Professor Emeritus Keld Helsgaun of Roskilde University in Denmark. Winners and contestants were congratulated in a webinar held on July 30.

    Top-performing teams may be interviewed by Amazon for research roles in the company’s Last Mile organization. MIT CTL will publish and promote short technical papers written by all finalists and might invite top-performing teams to present at MIT. Further, a team led by Matthias Winkenbach, director of the MIT Megacity Logistics Lab, will guest-edit a special issue of Transportation Science, one of the most renowned academic journals in this field, featuring academic papers on topics related to the problem tackled by the research challenge.

  • Smarter regulation of global shipping emissions could improve air quality and health outcomes

    Emissions from shipping activities around the world account for nearly 3 percent of total human-caused greenhouse gas emissions, and could increase by up to 50 percent by 2050, making them an important and often overlooked target for global climate mitigation. At the same time, shipping-related emissions of additional pollutants, particularly nitrogen and sulfur oxides, pose a significant threat to global health, as they degrade air quality enough to cause premature deaths.

    The main source of shipping emissions is the combustion of heavy fuel oil in large diesel engines, which disperses pollutants into the air over coastal areas. The nitrogen and sulfur oxides emitted from these engines contribute to the formation of PM2.5, airborne particulates with diameters of up to 2.5 micrometers that are linked to respiratory and cardiovascular diseases. Previous studies have estimated that PM2.5 from shipping emissions contributes to about 60,000 cardiopulmonary and lung cancer deaths each year, and that IMO 2020, an international policy that caps engine fuel sulfur content at 0.5 percent, could reduce PM2.5 concentrations enough to lower annual premature mortality by 34 percent.

    Global shipping emissions arise from both domestic (between ports in the same country) and international (between ports of different countries) shipping activities, and are governed by national and international policies, respectively. Consequently, effective mitigation of the air quality and health impacts of global shipping emissions will require that policymakers quantify the relative contributions of domestic and international shipping activities to these adverse impacts in an integrated global analysis.

    A new study in the journal Environmental Research Letters provides that kind of analysis for the first time. To that end, the study’s co-authors — researchers from MIT and the Hong Kong University of Science and Technology — implement a three-step process. First, they create global shipping emission inventories for domestic and international vessels based on ship activity records of the year 2015 from the Automatic Identification System (AIS). Second, they apply an atmospheric chemistry and transport model to this data to calculate PM2.5 concentrations generated by that year’s domestic and international shipping activities. Finally, they apply a model that estimates mortalities attributable to these pollutant concentrations.
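
    Schematically, and with toy numbers standing in for the actual models (the real step 2 is a full atmospheric chemistry and transport simulation, not simple arithmetic), the three-step pipeline can be sketched as follows:

    ```python
    # Toy stand-in for the study's three-step pipeline (illustrative only).
    def build_inventories(total_activity: float) -> dict[str, float]:
        """Step 1: split AIS ship-activity records into domestic vs. international emissions."""
        return {"domestic": 0.17 * total_activity, "international": 0.83 * total_activity}

    def simulate_pm25(emissions: float) -> float:
        """Step 2: placeholder for the chemistry-and-transport model (emissions -> PM2.5)."""
        return 1.0 * emissions  # dummy transfer coefficient

    def estimate_mortality(pm25: float) -> float:
        """Step 3: placeholder exposure-response relation (PM2.5 -> premature deaths)."""
        return pm25  # dummy linear response

    # With these placeholders, the output simply reproduces the study's 83/17 split.
    for scope, emissions in build_inventories(total_activity=94_000).items():
        print(scope, round(estimate_mortality(simulate_pm25(emissions))))
    ```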

    The researchers find that approximately 94,000 premature deaths were associated with PM2.5 exposure due to maritime shipping in 2015 — 83 percent international and 17 percent domestic. While international shipping accounted for the vast majority of the global health impact, some regions experienced significant health burdens from domestic shipping operations. This is especially true in East Asia: In China, 44 percent of shipping-related premature deaths were attributable to domestic shipping activities.

    “By comparing the health impacts from international and domestic shipping at the global level, our study could help inform decision-makers’ efforts to coordinate shipping emissions policies across multiple scales, and thereby reduce the air quality and health impacts of these emissions more effectively,” says Yiqi Zhang, a researcher at the Hong Kong University of Science and Technology who led the study as a visiting student supported by the MIT Joint Program on the Science and Policy of Global Change.

    In addition to estimating the air-quality and health impacts of domestic and international shipping, the researchers evaluate potential health outcomes under different shipping emissions-control policies that are either currently in effect or likely to be implemented in different regions in the near future.

    They estimate about 30,000 avoided deaths per year under a scenario consistent with IMO 2020, an international regulation limiting the sulfur content in shipping fuel oil to 0.5 percent — a finding that tracks with previous studies. Further strengthening regulations on sulfur content would yield only slight improvement; limiting sulfur content to 0.1 percent reduces annual shipping-attributable PM2.5-related premature deaths by an additional 5,000. In contrast, regulating nitrogen oxides as well, through a Tier III NOx standard, would produce far greater benefits than a 0.1 percent sulfur cap, with 33,000 further avoided deaths.

    “Areas with high proportions of mortalities contributed by domestic shipping could effectively use domestic regulations to implement controls,” says study co-author Noelle Selin, a professor at MIT’s Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences, and a faculty affiliate of the MIT Joint Program. “For other regions where much damage comes from international vessels, further international cooperation is required to mitigate impacts.”