More stories

  • Companies use MIT research to identify and respond to supply chain risks

    In February 2020, MIT professor David Simchi-Levi predicted the future. In an article in Harvard Business Review, he and his colleague warned that the new coronavirus outbreak would throttle supply chains and shutter tens of thousands of businesses across North America and Europe by mid-March.

    For Simchi-Levi, who had developed new models of supply chain resiliency and advised major companies on how best to shield themselves from supply chain woes, the signs of disruption were plain to see. Two years later, the professor of engineering systems at the MIT Schwarzman College of Computing and the Department of Civil and Environmental Engineering, and director of the MIT Data Science Lab, has found a “flood of interest” from companies anxious to apply his Risk Exposure Index (REI) research to identify and respond to hidden risks in their own supply chains.

    His work on “stress tests” for critical supply chains and on ways to guide global supply chain recovery was included in the 2022 Economic Report of the President, presented to the U.S. Congress in April.

    It is rare that data science research can influence policy at the highest levels, Simchi-Levi says, but his models offer something that business needs now: a way to plan for a world of continuing global crisis without relying on historical precedent.

    “What the last two years showed is that you cannot plan just based on what happened last year or the last two years,” Simchi-Levi says.

    He recalled the famous quote, sometimes attributed to hockey great Wayne Gretzky, that good players don’t skate to where the puck is, but to where the puck is going to be. “We are not focusing on the state of the supply chain right now, but what may happen six weeks from now, eight weeks from now, to prepare ourselves today to prevent the problems of the future.”

    Finding hidden risks

    At the heart of REI is a mathematical model of the supply chain that focuses on potential failures at different supply chain nodes — a flood at a supplier’s factory, or a shortage of raw materials at another factory, for instance. By calculating variables such as “time-to-recover” (TTR), which measures how long it will take a particular node to be back at full function, and time-to-survive (TTS), which identifies the maximum duration that the supply chain can match supply with demand after a disruption, the model focuses on the impact of disruption on the supply chain, rather than the cause of disruption.
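    To make the two measures concrete, here is a minimal, hypothetical sketch in Python (not Simchi-Levi’s actual model) of how a firm might flag exposed nodes: any node whose time-to-recover exceeds its time-to-survive leaves a window in which demand cannot be met, no matter how small the spend at that node.

        from dataclasses import dataclass

        @dataclass
        class Node:
            name: str
            time_to_recover_weeks: float   # TTR: how long until the node is back at full function
            time_to_survive_weeks: float   # TTS: how long the chain can keep matching supply with demand
            annual_spend_musd: float       # annual spend at the node; note this is not a risk measure

        def exposed_nodes(nodes):
            """Flag nodes whose disruption outlasts the buffer the chain can absorb (TTR > TTS)."""
            return [n.name for n in nodes if n.time_to_recover_weeks > n.time_to_survive_weeks]

        supply_chain = [
            Node("strategic engine supplier", time_to_recover_weeks=2,
                 time_to_survive_weeks=6, annual_spend_musd=400),
            Node("10-cent fastener supplier", time_to_recover_weeks=8,
                 time_to_survive_weeks=1, annual_spend_musd=0.5),
        ]
        print(exposed_nodes(supply_chain))  # -> ['10-cent fastener supplier']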

    Even before the pandemic, catastrophic events such as the 2010 Iceland volcanic eruption and the 2011 Tohoku earthquake and tsunami in Japan were threatening these nodes. “For many years, companies from a variety of industries focused mostly on efficiency, cutting costs as much as possible, using strategies like outsourcing and offshoring,” Simchi-Levi says. “They were very successful doing this, but it has dramatically increased their exposure to risk.”

    Using their model, Simchi-Levi and colleagues began working with Ford Motor Company in 2013 to improve the company’s supply chain resiliency. The partnership uncovered some surprising hidden risks.

    To begin with, the researchers found that Ford’s “strategic suppliers” — the nodes of the supply chain where the company spent large amounts of money each year — had only moderate exposure to risk. Instead, the biggest risk “tended to come from tiny suppliers that provide Ford with components that cost about 10 cents,” says Simchi-Levi.

    The analysis also found that risky suppliers are everywhere across the globe. “There is this idea that if you just move suppliers closer to market, to demand, to North America or to Mexico, you increase the resiliency of your supply chain. That is not supported by our data,” he says.

    Rewards of resiliency

    By creating a virtual representation, or “digital twin,” of the Ford supply chain, the researchers were able to test out strategies at each node to see what would increase supply chain resiliency. Should the company invest in more warehouses to store a key component? Should it shift production of a component to another factory?

    Companies are sometimes reluctant to invest in supply chain resiliency, Simchi-Levi says, but the analysis isn’t just about risk. “It’s also going to help you identify savings opportunities. The company may be building a lot of misplaced, costly inventory, for instance, and our method helps them to identify these inefficiencies and cut costs.”

    Since working with Ford, Simchi-Levi and colleagues have collaborated with many other companies, including a partnership with Accenture, to scale the REI technology to a variety of industries including high-tech, industrial equipment, home improvement retailers, fashion retailers, and consumer packaged goods.

    Annette Clayton, the CEO of Schneider Electric North America and previously its chief supply chain officer, has worked with Simchi-Levi for 17 years. “When I first went to work for Schneider, I asked David and his team to help us look at resiliency and inventory positioning in order to make the best cost, delivery, flexibility, and speed trade-offs for the North American supply chain,” she says. “As the pandemic unfolded, the very learnings in supply chain resiliency we had worked on before became even more important and we partnered with David and his team again.”

    “We have used TTR and TTS to determine places where we need to develop and duplicate supplier capability, from raw materials to assembled parts. We increased inventories where our time-to-recover, because of extended logistics times, exceeded our time-to-survive,” Clayton adds. “We have used TTR and TTS to prioritize our workload in supplier development, procurement, and expanding our own manufacturing capacity.”

    The REI approach can even be applied to an entire country’s economy, as the U.N. Office for Disaster Risk Reduction has done for developing countries such as Thailand in the wake of disastrous flooding in 2011.

    Simchi-Levi and colleagues have been motivated by the pandemic to enhance the REI model with new features. “Because we have started collaborating with more companies, we have realized some interesting, company-specific business constraints,” he says, and these constraints are leading to more efficient ways of calculating hidden risk.

  • Zero-trust architecture may hold the answer to cybersecurity insider threats

    For years, organizations have taken a defensive “castle-and-moat” approach to cybersecurity, seeking to secure the perimeters of their networks to block out any malicious actors. Individuals with the right credentials were assumed to be trustworthy and allowed access to a network’s systems and data without having to reauthorize themselves at each access attempt. However, organizations today increasingly store data in the cloud and allow employees to connect to the network remotely, both of which create vulnerabilities to this traditional approach. A more secure future may require a “zero-trust architecture,” in which users must prove their authenticity each time they access a network application or data.

    In May 2021, President Joe Biden’s Executive Order on Improving the Nation’s Cybersecurity outlined a goal for federal agencies to implement zero-trust security. Since then, MIT Lincoln Laboratory has been performing a study on zero-trust architectures, with the goals of reviewing their implementation in government and industry, identifying technical gaps and opportunities, and developing a set of recommendations for the United States’ approach to a zero-trust system.

    The study team’s first step was to define the term “zero trust” and understand the misperceptions in the field surrounding the concept. Some of these misperceptions suggest that a zero-trust architecture requires entirely new equipment to implement, or that it makes systems so “locked down” they’re not usable. 

    “Part of the reason why there is a lot of confusion about what zero trust is, is because it takes what the cybersecurity world has known about for many years and applies it in a different way,” says Jeffrey Gottschalk, the assistant head of Lincoln Laboratory’s Cyber Security and Information Sciences Division and the study’s co-lead. “It is a paradigm shift in terms of how to think about security, but holistically it takes a lot of things that we already know how to do — such as multi-factor authentication, encryption, and software-defined networking — and combines them in different ways.”

    Video presentation: Overview of Zero Trust Architectures

    Recent high-profile cybersecurity incidents — such as those involving the National Security Agency, the U.S. Office of Personnel Management, Colonial Pipeline, SolarWinds, and Sony Pictures — highlight the vulnerability of systems and the need to rethink cybersecurity approaches.

    The study team reviewed recent, impactful cybersecurity incidents to identify which security principles were most responsible for the scale and impact of the attack. “We noticed that while a number of these attacks exploited previously unknown implementation vulnerabilities (also known as ‘zero-days’), the vast majority actually were due to the exploitation of operational security principles,” says Christopher Roeser, study co-lead and the assistant head of the Homeland Protection and Air Traffic Control Division, “that is, the gaining of individuals’ credentials, and the movement within a well-connected network that allows users to gather a significant amount of information or have very widespread effects.”

    In other words, the malicious actor had “breached the moat” and effectively became an insider.

    Zero-trust security principles could protect against this type of insider threat by treating every component, service, and user of a system as continuously exposed to and potentially compromised by a malicious actor. A user’s identity is verified each time that they request to access a new resource, and every access is mediated, logged, and analyzed. It’s like putting trip wires all over the inside of a network system, says Gottschalk. “So, when an adversary trips over that trip wire, you’ll get a signal and can validate that signal and see what’s going on.”
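    As a rough illustration of that principle, the hypothetical Python sketch below (not Lincoln Laboratory’s design) re-verifies identity, checks policy, and logs every single access attempt, rather than trusting a session once it is inside the perimeter.

        import logging
        from datetime import datetime, timezone

        logging.basicConfig(level=logging.INFO, format="%(message)s")

        def verify_identity(token):
            """Placeholder identity check; a real deployment would validate a signed token
            against an identity provider on every request, not once per session."""
            known_tokens = {"token-alice": "alice"}        # hypothetical token store
            return known_tokens.get(token)                 # None means the check failed

        def access_resource(token, resource, policy):
            """Every access attempt is re-verified, evaluated against policy, and logged."""
            user = verify_identity(token)
            allowed = user is not None and resource in policy.get(user, set())
            logging.info("%s user=%s resource=%s decision=%s",
                         datetime.now(timezone.utc).isoformat(), user, resource,
                         "allow" if allowed else "deny")
            return allowed

        policy = {"alice": {"payroll-db"}}                 # hypothetical per-resource policy
        access_resource("token-alice", "payroll-db", policy)   # allowed, and logged
        access_resource("token-alice", "hr-records", policy)   # denied, and logged
        access_resource("stolen-token", "payroll-db", policy)  # denied, and logged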

    In practice, a zero-trust approach could look like replacing a single-sign-on system, which lets users sign in just once for access to multiple applications, with a cloud-based identity that is known and verified. “Today, a lot of organizations have different ways that people authenticate and log onto systems, and many of those have been aggregated for expediency into single-sign-on capabilities, just to make it easier for people to log onto their systems. But we envision a future state that embraces zero trust, where identity verification is enabled by cloud-based identity that’s portable and ubiquitous, and very secure itself.”

    While conducting their study, the team spoke to approximately 10 companies and government organizations that have adopted zero-trust implementations — either through cloud services, in-house management, or a combination of both. They found the hybrid approach to be a good model for government organizations to adopt. They also found that the implementation could take from three to five years. “We talked to organizations that have actually done implementations of zero trust, and all of them have indicated that significant organizational commitment and change was required to be able to implement them,” Gottschalk says.

    But a key takeaway from the study is that there isn’t a one-size-fits-all approach to zero trust. “It’s why we think that having test-bed and pilot efforts are going to be very important to balance out zero-trust security with the mission needs of those systems,” Gottschalk says. The team also recognizes the importance of conducting ongoing research and development beyond initial zero-trust implementations, to continue to address evolving threats.

    Lincoln Laboratory will present further findings from the study at its upcoming Cyber Technology for National Security conference, which will be held June 28-29. The conference will also offer a short course for attendees to learn more about the benefits and implementations of zero-trust architectures.

  • MIT announces five flagship projects in first-ever Climate Grand Challenges competition

    MIT today announced the five flagship projects selected in its first-ever Climate Grand Challenges competition. These multiyear projects will define a dynamic research agenda focused on unraveling some of the toughest unsolved climate problems and bringing high-impact, science-based solutions to the world on an accelerated basis.

    Representing the most promising concepts to emerge from the two-year competition, the five flagship projects will receive additional funding and resources from MIT and others to develop their ideas and swiftly transform them into practical solutions at scale.

    “Climate Grand Challenges represents a whole-of-MIT drive to develop game-changing advances to confront the escalating climate crisis, in time to make a difference,” says MIT President L. Rafael Reif. “We are inspired by the creativity and boldness of the flagship ideas and by their potential to make a significant contribution to the global climate response. But given the planet-wide scale of the challenge, success depends on partnership. We are eager to work with visionary leaders in every sector to accelerate this impact-oriented research, implement serious solutions at scale, and inspire others to join us in confronting this urgent challenge for humankind.”

    Brief descriptions of the five Climate Grand Challenges flagship projects are provided below.

    Bringing Computation to the Climate Challenge

    This project leverages advances in artificial intelligence, machine learning, and data sciences to improve the accuracy of climate models and make them more useful to a variety of stakeholders — from communities to industry. The team is developing a digital twin of the Earth that harnesses more data than ever before to reduce and quantify uncertainties in climate projections.

    Research leads: Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in the Department of Earth, Atmospheric and Planetary Sciences, and director of the Program in Atmospheres, Oceans, and Climate; and Noelle Eckley Selin, director of the Technology and Policy Program and professor with a joint appointment in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences

    Center for Electrification and Decarbonization of Industry

    This project seeks to reinvent and electrify the processes and materials behind hard-to-decarbonize industries like steel, cement, ammonia, and ethylene production. A new innovation hub will perform targeted fundamental research and engineering with urgency, pushing the technological envelope on electricity-driven chemical transformations.

    Research leads: Yet-Ming Chiang, the Kyocera Professor of Materials Science and Engineering, and Bilge Yıldız, the Breene M. Kerr Professor in the Department of Nuclear Science and Engineering and professor in the Department of Materials Science and Engineering

    Preparing for a new world of weather and climate extremes

    This project addresses key gaps in knowledge about intensifying extreme events such as floods, hurricanes, and heat waves, and quantifies their long-term risk in a changing climate. The team is developing a scalable climate-change adaptation toolkit to help vulnerable communities and low-carbon energy providers prepare for these extreme weather events.

    Research leads: Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science in the Department of Earth, Atmospheric and Planetary Sciences and co-director of the MIT Lorenz Center; Miho Mazereeuw, associate professor of architecture and urbanism in the Department of Architecture and director of the Urban Risk Lab; and Paul O’Gorman, professor in the Program in Atmospheres, Oceans, and Climate in the Department of Earth, Atmospheric and Planetary Sciences

    The Climate Resilience Early Warning System

    The CREWSnet project seeks to reinvent climate change adaptation with a novel forecasting system that empowers underserved communities to interpret local climate risk, proactively plan for their futures incorporating resilience strategies, and minimize losses. CREWSnet will initially be demonstrated in southwestern Bangladesh, serving as a model for similarly threatened regions around the world.

    Research leads: John Aldridge, assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory, and Elfatih Eltahir, the H.M. King Bhumibol Professor of Hydrology and Climate in the Department of Civil and Environmental Engineering

    Revolutionizing agriculture with low-emissions, resilient crops

    This project works to revolutionize the agricultural sector with climate-resilient crops and fertilizers that have the ability to dramatically reduce greenhouse gas emissions from food production.

    Research lead: Christopher Voigt, the Daniel I.C. Wang Professor in the Department of Biological Engineering

    “As one of the world’s leading institutions of research and innovation, it is incumbent upon MIT to draw on our depth of knowledge, ingenuity, and ambition to tackle the hard climate problems now confronting the world,” says Richard Lester, MIT associate provost for international activities. “Together with collaborators across industry, finance, community, and government, the Climate Grand Challenges teams are looking to develop and implement high-impact, path-breaking climate solutions rapidly and at a grand scale.”

    The initial call for ideas in 2020 yielded nearly 100 letters of interest from almost 400 faculty members and senior researchers, representing 90 percent of MIT departments. After an extensive evaluation, 27 finalist teams received a total of $2.7 million to develop comprehensive research and innovation plans spanning four broad research themes.

    To select the winning projects, research plans were reviewed by panels of international experts representing relevant scientific and technical domains as well as experts in processes and policies for innovation and scalability.

    “In response to climate change, the world really needs to do two things quickly: deploy the solutions we already have much more widely, and develop new solutions that are urgently needed to tackle this intensifying threat,” says Maria Zuber, MIT vice president for research. “These five flagship projects exemplify MIT’s strong determination to bring its knowledge and expertise to bear in generating new ideas and solutions that will help solve the climate problem.”

    “The Climate Grand Challenges flagship projects set a new standard for inclusive climate solutions that can be adapted and implemented across the globe,” says MIT Chancellor Melissa Nobles. “This competition propels the entire MIT research community — faculty, students, postdocs, and staff — to act with urgency around a worsening climate crisis, and I look forward to seeing the difference these projects can make.”

    “MIT’s efforts on climate research amid the climate crisis were a primary reason that I chose to attend MIT, and remain a reason that I view the Institute favorably. MIT has a clear opportunity to be a thought leader in the climate space in our own MIT way, which is why CGC fits in so well,” says senior Megan Xu, who served on the Climate Grand Challenges student committee and is studying ways to make the food system more sustainable.

    The Climate Grand Challenges competition is a key initiative of “Fast Forward: MIT’s Climate Action Plan for the Decade,” which the Institute published in May 2021. Fast Forward outlines MIT’s comprehensive plan for helping the world address the climate crisis. It consists of five broad areas of action: sparking innovation, educating future generations, informing and leveraging government action, reducing MIT’s own climate impact, and uniting and coordinating all of MIT’s climate efforts.

  • Transforming the travel experience for the Hong Kong airport

    The MIT Hong Kong Innovation Node welcomed 33 students to its flagship program, the MIT Entrepreneurship and Maker Skills Integrator (MEMSI). Designed to develop entrepreneurial prowess through exposure to industry-driven challenges, the two-week hybrid bootcamp brought MIT students together with Hong Kong peers to develop unique proposals for the Airport Authority of Hong Kong.

    Many airports across the world continue to be affected by the broader impact of Covid-19, with reduced air travel prompting airlines to cut capacity. The result is a need for new business opportunities to propel economic development. For Hong Kong, expansion toward non-aeronautical activities to boost regional consumption is therefore crucial, and it is included in the blueprint to transform the city’s airport into an airport city — characterized by capacity expansion, commercial developments, air cargo leadership, an autonomous transport system, connectivity to neighboring cities in mainland China, and evolution into a smart airport guided by sustainable practices. To enhance the customer experience, a key focus is capturing business opportunities at the nexus of digital and physical interactions.

    These challenges “bring ideas and talent together to tackle real-world problems in the areas of digital service creation for the airport and engaging regional customers to experience the new airport city,” says Charles Sodini, the LeBel Professor of Electrical Engineering at MIT and faculty director at the Node. 

    The new travel standard

    Businesses are exploring new digital technologies, both to drive bookings and to facilitate safe travel. Developments such as the Hong Kong airport’s Flight Token, a biometric technology that uses facial recognition to enable contactless check-in and boarding, unlock enormous potential to speed up the departure journey for passengers. Seamless virtual experiences are not going to disappear.

    “What we may see could be a strong rebounce especially for travelers after the travel ban lifts … an opportunity to make travel easier, flying as simple as riding the bus,” says Chris Au Young, general manager of smart airport and general manager of data analytics at the Airport Authority of Hong Kong. 

    The passenger experience of the future will be “enabled by mobile technology, internet of things, and digital platforms,” he explains, adding that in the aviation community, “international organizations have already stipulated that biometric technology will be the new standard for the future … the next question is how this can be connected across airports.”  

    This extends beyond travel, as Au Young illustrates: “If you go to a concert at Asia World Expo, which is the airport’s new arena in the future, you might just simply show your face rather than queue up in a long line waiting to show your tickets.”

    Accelerating the learning curve with industry support

    Working closely with industry mentors involved in the airport city’s development, students dived deep into discussions on the future of adapted travel, interviewed and surveyed travelers, and plowed through a range of airport data to uncover business insights.

    “With the large amount of data provided, my teammates and I worked hard to identify modeling opportunities that were both theoretically feasible and valuable in a business sense,” says Sean Mann, a junior at MIT studying computer science.

    Mann and his team applied geolocation data to inform machine learning predictions about a passenger’s journey once they enter the airside area. Coupled with biometric technology, the approach could deliver personalized recommendations with improved accuracy via the airport’s bespoke passenger app, powered by data collected through thousands of iBeacons dispersed across the vicinity. Armed with these insights, the aim is to enhance the user experience by driving meaningful footfall to retail shops, restaurants, and other airport amenities.
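    The details of the team’s model are not public, but a toy Python sketch of the general idea might look like the following: learn transition counts between beacon zones from past journeys, predict the most likely next zone for the current passenger, and surface the amenities located there. All zone names and data below are hypothetical.

        from collections import Counter, defaultdict

        def fit_transitions(journeys):
            """First-order transition counts between beacon zones, learned from past journeys."""
            counts = defaultdict(Counter)
            for journey in journeys:
                for here, there in zip(journey, journey[1:]):
                    counts[here][there] += 1
            return counts

        def recommend(counts, current_zone, amenities):
            """Most likely next zone for the passenger, plus the amenities located there."""
            if not counts[current_zone]:
                return None, []
            next_zone, _ = counts[current_zone].most_common(1)[0]
            return next_zone, amenities.get(next_zone, [])

        past_journeys = [
            ["security", "duty_free", "gate_20"],
            ["security", "duty_free", "food_court", "gate_20"],
            ["security", "lounge", "gate_35"],
        ]
        amenities = {"duty_free": ["perfume shop", "electronics"], "lounge": ["coffee bar"]}
        print(recommend(fit_transitions(past_journeys), "security", amenities))
        # -> ('duty_free', ['perfume shop', 'electronics'])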

    The support of industry partners inspired his team “with their deep understanding of the aviation industry,” he added. “In a short period of two weeks, we built a proof-of-concept and a rudimentary business plan — the latter of which was very new to me.”

    Collaborating across time zones, Rumen Dangovski, a PhD candidate in electrical engineering and computer science at MIT, joined MEMSI from his home in Bulgaria. For him, learning “how to continually revisit ideas to discover important problems and meaningful solutions for a large and complex real-world system” was a key takeaway. The iterative process helped his team overcome the obstacle of narrowing down the scope of their proposal, with the help of industry mentors and advisors. 

    “Without the feedback from industry partners, we would not have been able to formulate a concrete solution that is actually helpful to the airport,” says Dangovski.  

    Beyond valuable mentorship, he adds, “there was incredible energy in our team, consisting of diverse talent, grit, discipline and organization. I was positively surprised how MEMSI can form quickly and give continual support to our team. The overall experience was very fun.”

    A sustainable future

    Mrigi Munjal, a PhD candidate studying materials science and engineering at MIT, had just taken a long-haul flight from Boston to Delhi prior to the program, and “was beginning to fully appreciate the scale of carbon emissions from aviation.” For her, “that one journey basically overshadowed all of my conscious pro-sustainability lifestyle changes,” she says.

    Knowing that international flights constitute the largest part of an individual’s carbon footprint, Munjal and her team wanted “to make flying more sustainable with an idea that is economically viable for all of the stakeholders involved.” 

    They proposed a carbon offset API that integrates into an airline’s ticket payment system, empowering individuals to take action to offset their carbon footprint, track their personal carbon history, and pick and monitor green projects. The advocacy extends to a digital display of interactive art featured in physical installations across the airport city. The intent is to raise community awareness about one’s impact on the environment and to make carbon offsetting accessible.
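    As an illustration only (the emission factor, offset price, and interface below are hypothetical, not the team’s actual design), such an API might quote the emissions for a booked distance, record an offset purchase at checkout, and keep a per-passenger history:

        from dataclasses import dataclass, field

        EMISSION_FACTOR_KG_PER_KM = 0.115   # rough long-haul economy figure; illustrative only
        OFFSET_PRICE_USD_PER_TONNE = 15.0   # hypothetical offset-project price

        @dataclass
        class OffsetLedger:
            history: list = field(default_factory=list)

            def quote(self, distance_km):
                kg = distance_km * EMISSION_FACTOR_KG_PER_KM
                return {"emissions_kg": round(kg, 1),
                        "offset_cost_usd": round(kg / 1000 * OFFSET_PRICE_USD_PER_TONNE, 2)}

            def purchase(self, passenger_id, distance_km, project):
                record = {"passenger": passenger_id, "project": project, **self.quote(distance_km)}
                self.history.append(record)   # lets travelers track their personal carbon history
                return record

        ledger = OffsetLedger()
        print(ledger.quote(11700))   # roughly a Boston-to-Delhi flight distance, in km
        print(ledger.purchase("p-001", 11700, project="mangrove-restoration"))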

    Shaping the travel narrative

    Six teams of students created innovative solutions for the Hong Kong airport, which they presented in hybrid format to a panel of judges on Showcase Day. The diverse ideas included app-based airport retail recommendations supported by iBeacons; a platform that empowers customers to offset their carbon footprint; an app that connects fellow travelers for social and incentive-driven retail experiences; a travel membership exchange platform offering added flexibility to earn and redeem loyalty rewards; an interactive and gamified location-based retail experience using augmented reality; and a digital companion avatar to increase adoption of the airport’s Flight Token and improve the airside passenger experience.

    Among the judges was Julian Lee ’97, former president of the MIT Club of Hong Kong and current executive director of finance at the Airport Authority of Hong Kong, who commended the students for having “worked very thoroughly” and thought through the specific challenges, addressing the real pain points that the airport is experiencing.

    “The ideas were very thoughtful and very unique to us. Some of you defined transit passengers as a sub-segment of the market that works. It only happens at the airport and you’ve been able to leverage this transit time in between,” remarked Lee. 

    Strong solutions include an implementation plan that shows a path to execution and a viable future. Among the solutions proposed, Au Young was impressed by teams for “paying a lot of attention to the business model … a very important aspect in all the ideas generated.”

    Addressing the students, Au Young says, “What we love is the way you reinvent the airport business and partnerships, presenting a new way of attracting people to engage more in new services and experiences — not just returning for a flight or just shopping with us, but innovating beyond the airport and using emerging technologies, using location data, using the retailer’s capability and adding some social activities in your solutions.”

    Despite today’s rapidly evolving travel industry, what remains unchanged is a focus on the customer. In the end, “it’s still about the passengers,” added Au Young.

  • Unlocking new doors to artificial intelligence

    Artificial intelligence research is constantly developing new hypotheses that have the potential to benefit society and industry; however, sometimes these benefits are not fully realized due to a lack of engineering tools. To help bridge this gap, graduate students in the MIT Department of Electrical Engineering and Computer Science’s 6-A Master of Engineering (MEng) Thesis Program work with some of the most innovative companies in the world and collaborate on cutting-edge projects, while contributing to and completing their MEng thesis.

    During a portion of the last year, four 6-A MEng students teamed up and completed an internship with IBM Research’s advanced prototyping team through the MIT-IBM Watson AI Lab on AI projects, often developing web applications to solve a real-world issue or business use cases. Here, the students worked alongside AI engineers, user experience engineers, full-stack researchers, and generalists to accommodate project requests and receive thesis advice, says Lee Martie, IBM research staff member and 6-A manager. The students’ projects ranged from generating synthetic data to allow for privacy-sensitive data analysis to using computer vision to identify actions in video that allows for monitoring human safety and tracking build progress on a construction site.

    “I appreciated all of the expertise from the team and the feedback,” says 6-A graduate Violetta Jusiega ’21, who participated in the program. “I think that working in industry gives the lens of making sure that the project’s needs are satisfied and [provides the opportunity] to ground research and make sure that it is helpful for some use case in the future.”

    Jusiega’s research intersected the fields of computer vision and design to focus on data visualization and user interfaces for the medical field. Working with IBM, she built an application programming interface (API) that let clinicians interact with a medical treatment strategy AI model, which was deployed in the cloud. Her interface provided a medical decision tree, as well as some prescribed treatment plans. After receiving feedback on her design from physicians at a local hospital, Jusiega developed iterations of the API and of how the results were displayed visually, so that it would be user-friendly and understandable for clinicians, who don’t usually code. She says that “these tools are often not acquired into the field because they lack some of these API principles which become more important in an industry where everything is already very fast paced, so there’s little time to incorporate a new technology.” But this project might eventually allow for industry deployment. “I think this application has a bunch of potential, whether it does get picked up by clinicians or whether it’s simply used in research. It’s very promising and very exciting to see how technology can help us modify, or improve, the health-care field to be even more custom-tailored towards patients and giving them the best care possible,” she says.

    Another 6-A graduate student, Spencer Compton, was also considering aiding professionals to make more informed decisions, for use in settings including health care, but he was tackling it from a causal perspective. When given a set of related variables, Compton was investigating whether there was a way to determine not just correlation, but the cause-and-effect relationship between them (the direction of the interaction) from the data alone. For this, he and his collaborators from IBM Research and Purdue University turned to a field of math called information theory. With the goal of designing an algorithm to learn complex networks of causal relationships, Compton used ideas relating to entropy, the randomness in a system, to help determine if a causal relationship is present and how variables might be interacting. “When judging an explanation, people often default to Occam’s razor,” says Compton. “We’re more inclined to believe a simpler explanation than a more complex one.” In many cases, he says, it seemed to perform well. For instance, they were able to consider variables such as lung cancer, pollution, and X-ray findings. He was pleased that his research allowed him to help create a framework of “entropic causal inference” that could aid in safe and smart decisions in the future, in a satisfying way. “The math is really surprisingly deep, interesting, and complex,” says Compton. “We’re basically asking, ‘when is the simplest explanation correct?’ but as a math question.”
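    A rough Python sketch of the flavor of this idea follows; it is a simplified proxy, not Compton’s published algorithm. For each candidate direction it estimates the conditional distributions from samples, uses a greedy coupling to approximate the entropy of a single exogenous variable E that could generate them, and prefers the direction that needs the lower-entropy (simpler) E.

        import numpy as np
        from collections import Counter

        def entropy_bits(p):
            """Shannon entropy in bits of a probability vector (zero entries ignored)."""
            p = np.asarray(p, dtype=float)
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())

        def greedy_coupling_entropy(conditionals):
            """Greedy approximation of the entropy of one exogenous variable E that could
            generate every conditional P(effect | cause = c) through effect = f(cause, E)."""
            dists = [np.array(d, dtype=float) for d in conditionals]
            e_probs = []
            while max(d.sum() for d in dists) > 1e-12:
                step = min(d.max() for d in dists)   # mass removable from every distribution at once
                e_probs.append(step)
                for d in dists:
                    d[d.argmax()] -= step
            return entropy_bits(e_probs)

        def exogenous_entropy(cause, effect):
            """Estimate P(effect | cause = c) from samples, then score the direction cause -> effect."""
            cause_vals, effect_vals = sorted(set(cause)), sorted(set(effect))
            conditionals = []
            for c in cause_vals:
                counts = Counter(e for ci, e in zip(cause, effect) if ci == c)
                total = sum(counts.values())
                conditionals.append([counts.get(e, 0) / total for e in effect_vals])
            return greedy_coupling_entropy(conditionals)

        # Toy data: Y is a noisy many-to-one function of X, so explaining X -> Y should
        # need a much lower-entropy exogenous variable than explaining Y -> X.
        rng = np.random.default_rng(0)
        x = rng.integers(0, 10, size=20000)
        y = np.where(rng.random(20000) < 0.1, rng.integers(0, 3, size=20000), x % 3)
        print("H(E) assuming X -> Y:", exogenous_entropy(x.tolist(), y.tolist()))
        print("H(E) assuming Y -> X:", exogenous_entropy(y.tolist(), x.tolist()))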

    Determining relationships within data can sometimes require large volumes of it to suss out patterns, but for data that may contain sensitive information, this may not be available. For her master’s work, Ivy Huang worked with IBM Research to generate synthetic tabular data using a natural language processing tool called a transformer model, which can learn and predict future values from past values. Trained on real data, the model can produce new data with similar patterns, properties, and relationships without restrictions like privacy, availability, and access that might come with real data in financial transactions and electronic medical records. Further, she created an API and deployed the model in an IBM cluster, which allowed users increased access to the model and the ability to query it without compromising the original data.

    Working with the advanced prototyping team, MEng candidate Brandon Perez also considered how to gather and investigate data with restrictions, but in his case it was to use computer vision frameworks, centered on an action recognition model, to identify construction site happenings. The team based their work on the Moments in Time dataset, which contains over a million three-second video clips with about 300 attached classification labels, and has performed well during AI training. However, the group needed more construction-based video data. For this, they used YouTube-8M. Perez built a framework for testing and fine-tuning existing object detection models and action recognition models that could plug into an automatic spatial and temporal localization tool — how they would identify and label particular actions in a video timeline. “I was satisfied that I was able to explore what made me curious, and I was grateful for the autonomy that I was given with this project,” says Perez. “I felt like I was always supported, and my mentor was a great support to the project.”

    “The kind of collaborations that we have seen between our MEng students and IBM researchers are exactly what the 6-A MEng Thesis program at MIT is all about,” says Tomas Palacios, professor of electrical engineering and faculty director of the MIT 6-A MEng Thesis program. “For more than 100 years, 6-A has been connecting MIT students with industry to solve together some of the most important problems in the world.”

  • Meet the 2021-22 Accenture Fellows

    Launched in October of 2020, the MIT and Accenture Convergence Initiative for Industry and Technology underscores the ways in which industry and technology come together to spur innovation. The five-year initiative aims to achieve its mission through research, education, and fellowships. To that end, Accenture has once again awarded five annual fellowships to MIT graduate students working on research in industry and technology convergence who are underrepresented, including by race, ethnicity, and gender.

    This year’s Accenture Fellows work across disciplines including robotics, manufacturing, artificial intelligence, and biomedicine. Their research covers a wide array of subjects, including: advancing manufacturing through computational design, with the potential to benefit global vaccine production; designing low-energy robotics for both consumer electronics and the aerospace industry; developing robotics and machine learning systems that may aid the elderly in their homes; and creating ingestible biomedical devices that can help gather medical data from inside a patient’s body.

    Student nominations from each unit within the School of Engineering, as well as from the four other MIT schools and the MIT Schwarzman College of Computing, were invited as part of the application process. Five exceptional students were selected as fellows in the initiative’s second year.

    Xinming (Lily) Liu is a PhD student in operations research at MIT Sloan School of Management. Her work is focused on behavioral and data-driven operations for social good, incorporating human behaviors into traditional optimization models, designing incentives, and analyzing real-world data. Her current research looks at the convergence of social media, digital platforms, and agriculture, with particular attention to expanding technological equity and economic opportunity in developing countries. Liu earned her BS from Cornell University, with a double major in operations research and computer science.

    Caris Moses is a PhD student in electrical engineering and computer science specializing in artificial intelligence. Moses’ research focuses on using machine learning, optimization, and electromechanical engineering to build robotics systems that are robust, flexible, intelligent, and can learn on the job. The technology she is developing holds promise for industries including flexible, small-batch manufacturing; robots to assist the elderly in their households; and warehouse management and fulfillment. Moses earned her BS in mechanical engineering from Cornell University and her MS in computer science from Northeastern University.

    Sergio Rodriguez Aponte is a PhD student in biological engineering. He is working on the convergence of computational design and manufacturing practices, which have the potential to impact industries such as biopharmaceuticals, food, and wellness/nutrition. His current research aims to develop strategies for applying computational tools, such as multiscale modeling and machine learning, to the design and production of manufacturable and accessible vaccine candidates that could eventually be available globally. Rodriguez Aponte earned his BS in industrial biotechnology from the University of Puerto Rico at Mayaguez.

    Soumya Sudhakar SM ’20 is a PhD student in aeronautics and astronautics. Her work is focused on the co-design of new algorithms and integrated circuits for autonomous low-energy robotics that could have novel applications in aerospace and consumer electronics. Her contributions bring together the emerging robotics industry, integrated circuits industry, aerospace industry, and consumer electronics industry. Sudhakar earned her BSE in mechanical and aerospace engineering from Princeton University and her MS in aeronautics and astronautics from MIT.

    So-Yoon Yang is a PhD student in electrical engineering and computer science. Her work on the development of low-power, wireless, ingestible biomedical devices for health care is at the intersection of the medical device, integrated circuit, artificial intelligence, and pharmaceutical fields. Currently, the majority of wireless biomedical devices can only provide a limited range of medical data measured from outside the body. Ingestible devices hold promise for the next generation of personal health care because they do not require surgical implantation, can be useful for detecting physiological and pathophysiological signals, and can also function as therapeutic alternatives when treatment cannot be done externally. Yang earned her BS in electrical and computer engineering from Seoul National University in South Korea and her MS in electrical engineering from Caltech.

  • Q&A: Cathy Wu on developing algorithms to safely integrate robots into our world

    Cathy Wu is the Gilbert W. Winslow Assistant Professor of Civil and Environmental Engineering and a member of the MIT Institute for Data, Systems, and Society. As an undergraduate, Wu won MIT’s toughest robotics competition, and as a graduate student took the University of California at Berkeley’s first-ever course on deep reinforcement learning. Now back at MIT, she’s working to improve the flow of robots in Amazon warehouses under the Science Hub, a new collaboration between the tech giant and the MIT Schwarzman College of Computing. Outside of the lab and classroom, Wu can be found running, drawing, pouring lattes at home, and watching YouTube videos on math and infrastructure via 3Blue1Brown and Practical Engineering. She recently took a break from all of that to talk about her work.

    Q: What put you on the path to robotics and self-driving cars?

    A: My parents always wanted a doctor in the family. However, I’m bad at following instructions and became the wrong kind of doctor! Inspired by my physics and computer science classes in high school, I decided to study engineering. I wanted to help as many people as a medical doctor could.

    At MIT, I looked for applications in energy, education, and agriculture, but the self-driving car was the first to grab me. It has yet to let go! Ninety-four percent of serious car crashes are caused by human error and could potentially be prevented by self-driving cars. Autonomous vehicles could also ease traffic congestion, save energy, and improve mobility.

    I first learned about self-driving cars from Seth Teller during his guest lecture for the course Mobile Autonomous Systems Lab (MASLAB), in which MIT undergraduates compete to build the best full-functioning robot from scratch. Our ball-fetching bot, Putzputz, won first place. From there, I took more classes in machine learning, computer vision, and transportation, and joined Teller’s lab. I also competed in several mobility-related hackathons, including one sponsored by Hubway, now known as Bluebikes.

    Q: You’ve explored ways to help humans and autonomous vehicles interact more smoothly. What makes this problem so hard?

    A: Both systems are highly complex, and our classical modeling tools are woefully insufficient. Integrating autonomous vehicles into our existing mobility systems is a huge undertaking. For example, we don’t know whether autonomous vehicles will cut energy use by 40 percent, or double it. We need more powerful tools to cut through the uncertainty. My PhD thesis at Berkeley tried to do this. I developed scalable optimization methods in the areas of robot control, state estimation, and system design. These methods could help decision-makers anticipate future scenarios and design better systems to accommodate both humans and robots.

    Q: How is deep reinforcement learning, combining deep and reinforcement learning algorithms, changing robotics?

    A: I took John Schulman and Pieter Abbeel’s reinforcement learning class at Berkeley in 2015 shortly after DeepMind published their breakthrough paper in Nature. They had trained an agent via deep learning and reinforcement learning to play “Space Invaders” and a suite of Atari games at superhuman levels. That created quite some buzz. A year later, I started to incorporate reinforcement learning into problems involving mixed traffic systems, in which only some cars are automated. I realized that classical control techniques couldn’t handle the complex nonlinear control problems I was formulating.

    Deep RL is now mainstream but it’s by no means pervasive in robotics, which still relies heavily on classical model-based control and planning methods. Deep learning continues to be important for processing raw sensor data like camera images and radio waves, and reinforcement learning is gradually being incorporated. I see traffic systems as gigantic multi-robot systems. I’m excited for an upcoming collaboration with Utah’s Department of Transportation to apply reinforcement learning to coordinate cars with traffic signals, reducing congestion and thus carbon emissions.

    Q: You’ve talked about the MIT course, 6.003 (Signals and Systems), and its impact on you. What about it spoke to you?

    A: The mindset. That problems that look messy can be analyzed with common, and sometimes simple, tools. Signals are transformed by systems in various ways, but what do these abstract terms mean, anyway? A mechanical system can take a signal like gears turning at some speed and transform it into a lever turning at another speed. A digital system can take binary digits and turn them into other binary digits or a string of letters or an image. Financial systems can take news and transform it via millions of trading decisions into stock prices. People take in signals every day through advertisements, job offers, gossip, and so on, and translate them into actions that in turn influence society and other people. This humble class on signals and systems linked mechanical, digital, and societal systems and showed me how foundational tools can cut through the noise.

    Q: In your project with Amazon you’re training warehouse robots to pick up, sort, and deliver goods. What are the technical challenges?

    A: This project involves assigning robots to a given task and routing them there. [Professor] Cynthia Barnhart’s team is focused on task assignment, and mine, on path planning. Both problems are considered combinatorial optimization problems because the solution involves a combination of choices. As the number of tasks and robots increases, the number of possible solutions grows exponentially. It’s called the curse of dimensionality. Both problems are what we call NP-hard; there may not be an efficient algorithm to solve them. Our goal is to devise a shortcut.

    Routing a single robot for a single task isn’t difficult. It’s like using Google Maps to find the shortest path home. It can be solved efficiently with several algorithms, including Dijkstra’s. But warehouses resemble small cities with hundreds of robots. When traffic jams occur, customers can’t get their packages as quickly. Our goal is to develop algorithms that find the most efficient paths for all of the robots.
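    For reference, the single-robot, single-task case Wu describes can be sketched in a few lines of Python; the hard part her team studies is scaling this up when hundreds of robots contend for the same aisles. The toy warehouse grid below is hypothetical.

        import heapq

        def dijkstra_grid(grid, start, goal):
            """Shortest path length on a 4-connected grid; grid[r][c] == 1 marks an obstacle."""
            rows, cols = len(grid), len(grid[0])
            dist = {start: 0}
            pq = [(0, start)]
            while pq:
                d, (r, c) = heapq.heappop(pq)
                if (r, c) == goal:
                    return d
                if d > dist.get((r, c), float("inf")):
                    continue  # stale queue entry
                for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nr, nc = r + dr, c + dc
                    if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                        nd = d + 1
                        if nd < dist.get((nr, nc), float("inf")):
                            dist[(nr, nc)] = nd
                            heapq.heappush(pq, (nd, (nr, nc)))
            return None  # goal unreachable

        warehouse = [
            [0, 0, 0, 1, 0],
            [1, 1, 0, 1, 0],
            [0, 0, 0, 0, 0],
        ]
        print(dijkstra_grid(warehouse, start=(0, 0), goal=(0, 4)))  # -> 8 moves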

    Q: Are there other applications?

    A: Yes. The algorithms we test in Amazon warehouses might one day help to ease congestion in real cities. Other potential applications include controlling planes on runways, swarms of drones in the air, and even characters in video games. These algorithms could also be used for other robotic planning tasks like scheduling and routing.

    Q: AI is evolving rapidly. Where do you hope to see the big breakthroughs coming?

    A: I’d like to see deep learning and deep RL used to solve societal problems involving mobility, infrastructure, social media, health care, and education. Deep RL now has a toehold in robotics and industrial applications like chip design, but we still need to be careful in applying it to systems with humans in the loop. Ultimately, we want to design systems for people. Currently, we simply don’t have the right tools.

    Q: What worries you most about AI taking on more and more specialized tasks?

    A: AI has the potential for tremendous good, but it could also help to accelerate the widening gap between the haves and the have-nots. Our political and regulatory systems could help to integrate AI into society and minimize job losses and income inequality, but I worry that they’re not equipped yet to handle the firehose of AI.

    Q: What’s the last great book you read?

    A: “How to Avoid a Climate Disaster,” by Bill Gates. I absolutely loved the way that Gates was able to take an overwhelmingly complex topic and distill it down into words that everyone can understand. His optimism inspires me to keep pushing on applications of AI and robotics to help avoid a climate disaster.

  • Q&A: More-sustainable concrete with machine learning

    As a building material, concrete withstands the test of time. Its use dates back to early civilizations, and today it is the most popular composite choice in the world. However, it’s not without its faults. Production of its key ingredient, cement, contributes 8-9 percent of global anthropogenic CO2 emissions and 2-3 percent of energy consumption, which is only projected to increase in the coming years. With United States infrastructure aging, the federal government recently passed a milestone bill to revitalize and upgrade it, along with a push to reduce greenhouse gas emissions where possible, putting concrete in the crosshairs for modernization, too.

    Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in the MIT Department of Materials Science and Engineering, and Jie Chen, MIT-IBM Watson AI Lab research scientist and manager, think artificial intelligence can help meet this need by designing and formulating new, more sustainable concrete mixtures, with lower costs and carbon dioxide emissions, while improving material performance and reusing manufacturing byproducts in the material itself. Olivetti’s research improves environmental and economic sustainability of materials, and Chen develops and optimizes machine learning and computational techniques, which he can apply to materials reformulation. Olivetti and Chen, along with their collaborators, have recently teamed up for an MIT-IBM Watson AI Lab project to make concrete more sustainable for the benefit of society, the climate, and the economy.

    Q: What applications does concrete have, and what properties make it a preferred building material?

    Olivetti: Concrete is the dominant building material globally with an annual consumption of 30 billion metric tons. That is over 20 times the next most produced material, steel, and the scale of its use leads to considerable environmental impact, approximately 5-8 percent of global greenhouse gas (GHG) emissions. It can be made locally, has a broad range of structural applications, and is cost-effective. Concrete is a mixture of fine and coarse aggregate, water, cement binder (the glue), and other additives.

    Q: Why isn’t it sustainable, and what research problems are you trying to tackle with this project?

    Olivetti: The community is working on several ways to reduce the impact of this material, including the use of alternative fuels to heat the cement mixture, increasing energy and materials efficiency, and carbon sequestration at production facilities. But one important opportunity is to develop an alternative to the cement binder.

    While cement is 10 percent of the concrete mass, it accounts for 80 percent of the GHG footprint. This impact derives from the fuel burned to heat and run the chemical reaction required in manufacturing, but also from the chemical reaction itself, which releases CO2 during the calcination of limestone. Therefore, partially replacing the input ingredients to cement (traditionally ordinary Portland cement, or OPC) with alternative materials from waste and byproducts can reduce the GHG footprint. But use of these alternatives is not inherently more sustainable, because wastes might have to travel long distances, which adds to fuel emissions and cost, or might require pretreatment processes. The optimal way to make use of these alternate materials will be situation-dependent. But because of the vast scale, we also need solutions that account for the huge volumes of concrete needed. This project is trying to develop novel concrete mixtures that will decrease the GHG impact of the cement and concrete, moving away from trial-and-error processes toward those that are more predictive.

    Chen: If we want to fight climate change and make our environment better, are there alternative ingredients or a reformulation we could use so that less greenhouse gas is emitted? We hope that through this project using machine learning we’ll be able to find a good answer.

    Q: Why is this problem important to address now, at this point in history?

    Olivetti: There is urgent need to address greenhouse gas emissions as aggressively as possible, and the road to doing so isn’t necessarily straightforward for all areas of industry. For transportation and electricity generation, there are paths that have been identified to decarbonize those sectors. We need to move much more aggressively to achieve those in the time needed; further, the technological approaches to achieve that are more clear. However, for tough-to-decarbonize sectors, such as industrial materials production, the pathways to decarbonization are not as mapped out.

    Q: How are you planning to address this problem to produce better concrete?

    Olivetti: The goal is to predict mixtures that will both meet performance criteria, such as strength and durability, with those that also balance economic and environmental impact. A key to this is to use industrial wastes in blended cements and concretes. To do this, we need to understand the glass and mineral reactivity of constituent materials. This reactivity not only determines the limit of the possible use in cement systems but also controls concrete processing, and the development of strength and pore structure, which ultimately control concrete durability and life-cycle CO2 emissions.

    Chen: We investigate using waste materials to replace part of the cement component. This is something that we’ve hypothesized would be more sustainable and economic — actually waste materials are common, and they cost less. Because of the reduction in the use of cement, the final concrete product would be responsible for much less carbon dioxide production. Figuring out the right concrete mixture proportion that makes durable concretes while achieving other goals is a very challenging problem. Machine learning is giving us an opportunity to explore the advancement of predictive modeling, uncertainty quantification, and optimization to solve the issue. What we are doing is exploring options using deep learning as well as multi-objective optimization techniques to find an answer. These efforts are now more feasible to carry out, and they will produce results with reliability estimates that we need to understand what makes a good concrete.

    Q: What kinds of AI and computational techniques are you employing for this?

    Olivetti: We use AI techniques to collect data on individual concrete ingredients, mix proportions, and concrete performance from the literature through natural language processing. We also add data obtained from industry and/or high throughput atomistic modeling and experiments to optimize the design of concrete mixtures. Then we use this information to develop insight into the reactivity of possible waste and byproduct materials as alternatives to cement materials for low-CO2 concrete. By incorporating generic information on concrete ingredients, the resulting concrete performance predictors are expected to be more reliable and transformative than existing AI models.
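    The group’s extraction pipeline is far more sophisticated than this, but a toy Python sketch conveys the idea of pulling structured mix parameters out of free text; the sentences and patterns below are invented for illustration only.

        import re

        SENTENCES = [
            "The mix used a water-to-cement ratio of 0.45 and 20% fly ash replacement.",
            "Specimens with w/c ratio 0.38 and 30% slag reached 52 MPa at 28 days.",
        ]

        RATIO = re.compile(r"(?:water-to-cement ratio of|w/c ratio)\s*(\d+\.\d+)")
        SCM = re.compile(r"(\d+)%\s*(fly ash|slag)")  # supplementary cementitious materials

        for s in SENTENCES:
            ratio = RATIO.search(s)
            scm = SCM.search(s)
            print({"w/c": float(ratio.group(1)) if ratio else None,
                   "scm": (int(scm.group(1)), scm.group(2)) if scm else None})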

    Chen: The final objective is to figure out what constituents, and how much of each, to put into the recipe for producing the concrete that optimizes the various factors: strength, cost, environmental impact, performance, etc. For each of the objectives, we need certain models: We need a model to predict the performance of the concrete (like, how long does it last and how much weight does it sustain?), a model to estimate the cost, and a model to estimate how much carbon dioxide is generated. We will need to build these models by using data from literature, from industry, and from lab experiments.

    We are exploring Gaussian process models to predict the concrete strength, going forward into days and weeks. This model can give us an uncertainty estimate of the prediction as well. Such a model needs specification of parameters, for which we will use another model to calculate. At the same time, we also explore neural network models because we can inject domain knowledge from human experience into them. Some models are as simple as multi-layer perceptrons, while some are more complex, like graph neural networks. The goal here is that we want to have a model that is not only accurate but also robust — the input data is noisy, and the model must embrace the noise, so that its prediction is still accurate and reliable for the multi-objective optimization.
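    As a minimal illustration of that modeling step (with invented data, not the project’s measurements), a Gaussian process regressor in Python can return both a strength prediction and an uncertainty estimate for each query age:

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, WhiteKernel

        # Hypothetical curing-age (days) vs. compressive-strength (MPa) measurements.
        age = np.array([[1], [3], [7], [14], [28], [56]], dtype=float)
        strength = np.array([9.0, 18.0, 27.0, 33.0, 40.0, 44.0])

        kernel = 1.0 * RBF(length_scale=10.0) + WhiteKernel(noise_level=1.0)
        gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(age, strength)

        # Predict strength at unseen ages, with an uncertainty estimate for each prediction.
        query = np.array([[21], [90]], dtype=float)
        mean, std = gp.predict(query, return_std=True)
        for q, m, s in zip(query.ravel(), mean, std):
            print(f"age {q:>4.0f} d: {m:.1f} +/- {s:.1f} MPa")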

    Once we have built models that we are confident with, we will inject their predictions and uncertainty estimates into the optimization of multiple objectives, under constraints and under uncertainties.

    Q: How do you balance cost-benefit trade-offs?

    Chen: The multiple objectives we consider are not necessarily consistent, and sometimes they are at odds with each other. The goal is to identify scenarios where the values for our objectives cannot be further pushed simultaneously without compromising one or a few. For example, if you want to further reduce the cost, you probably have to sacrifice some performance or accept a greater environmental impact. Eventually, we will give the results to policymakers and they will look into the results and weigh the options. For example, they may be able to tolerate a slightly higher cost under a significant reduction in greenhouse gas. Alternatively, if the cost varies little but the concrete performance changes drastically, say, doubles or triples, then this is definitely a favorable outcome.
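    One common way to frame such trade-offs is a Pareto front: keep only the mixtures that cannot be improved on one objective without worsening another, then hand that shortlist to decision-makers. A toy Python sketch with invented mixtures and scores:

        def pareto_front(candidates):
            """Keep mixtures not dominated on (cost, CO2, -strength); lower is better for each."""
            front = []
            for name, objs in candidates.items():
                dominated = any(
                    all(o2 <= o1 for o1, o2 in zip(objs, other)) and other != objs
                    for other_name, other in candidates.items() if other_name != name
                )
                if not dominated:
                    front.append(name)
            return front

        # Hypothetical mixtures scored as (cost $/m^3, kg CO2 per m^3, negative strength in MPa).
        mixes = {
            "OPC baseline": (90, 300, -40),
            "30% fly ash":  (82, 220, -38),
            "50% slag":     (85, 180, -36),
            "high filler":  (88, 260, -30),
        }
        print(pareto_front(mixes))  # -> ['OPC baseline', '30% fly ash', '50% slag']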

    Q: What kinds of challenges do you face in this work?

    Chen: The data we get either from industry or from literature are very noisy; the concrete measurements can vary a lot, depending on where and when they are taken. There are also substantial missing data when we integrate them from different sources, so we need to spend a lot of effort to organize and make the data usable for building and training machine learning models. We also explore imputation techniques that substitute missing features, as well as models that tolerate missing features, in our predictive modeling and uncertainty estimate.

    Q: What do you hope to achieve through this work?

    Chen: In the end, we are suggesting either one or a few concrete recipes, or a continuum of recipes, to manufacturers and policymakers. We hope that this will provide invaluable information for both the construction industry and for the effort of protecting our beloved Earth.

    Olivetti: We’d like to develop a robust way to design cements that make use of waste materials to lower their CO2 footprint. Nobody is trying to make waste, so we can’t rely on one stream as a feedstock if we want this to be massively scalable. We have to be flexible and robust to shift with feedstock changes, and for that we need improved understanding. Our approach to develop local, dynamic, and flexible alternatives is to learn what makes these wastes reactive, so we know how to optimize their use and do so as broadly as possible. We do that through predictive model development through software we have developed in my group to automatically extract data from literature on over 5 million texts and patents on various topics. We link this to the creative capabilities of our IBM collaborators to design methods that predict the final impact of new cements. If we are successful, we can lower the emissions of this ubiquitous material and play our part in achieving carbon emissions mitigation goals.

    Other researchers involved with this project include Stefanie Jegelka, the X-Window Consortium Career Development Associate Professor in the MIT Department of Electrical Engineering and Computer Science; Richard Goodwin, IBM principal researcher; Soumya Ghosh, MIT-IBM Watson AI Lab research staff member; and Kristen Severson, former research staff member. Collaborators included Nghia Hoang, former research staff member with MIT-IBM Watson AI Lab and IBM Research; and Jeremy Gregory, research scientist in the MIT Department of Civil and Environmental Engineering and executive director of the MIT Concrete Sustainability Hub.

    This research is supported by the MIT-IBM Watson AI Lab.