More stories

  • Supporting sustainability, digital health, and the future of work

    The MIT and Accenture Convergence Initiative for Industry and Technology has selected three new research projects to support. The projects aim to accelerate progress in meeting complex societal needs through new business convergence insights in technology and innovation.

    Established in MIT’s School of Engineering and now in its third year, the MIT and Accenture Convergence Initiative is furthering its mission to bring together technological experts from across business and academia to share insights and learn from one another. Recently, Thomas W. Malone, the Patrick J. McGovern (1959) Professor of Management, joined the initiative as its first-ever faculty lead. The research projects relate to three of the initiative’s key focus areas: sustainability, digital health, and the future of work.

    “The solutions these research teams are developing have the potential to have tremendous impact,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “They embody the initiative’s focus on advancing data-driven research that addresses technology and industry convergence.”

    “The convergence of science and technology driven by advancements in generative AI, digital twins, quantum computing, and other technologies makes this an especially exciting time for Accenture and MIT to be undertaking this joint research,” says Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences. “Our three new research projects focusing on sustainability, digital health, and the future of work have the potential to help guide and shape future innovations that will benefit the way we work and live.”

    The initiative’s charter projects and their researchers are described below.

    Accelerating the journey to net zero with industrial clusters

    Jessika Trancik is a professor at the Institute for Data, Systems, and Society (IDSS). Trancik’s research examines the dynamic costs, performance, and environmental impacts of energy systems to inform climate policy and accelerate beneficial and equitable technology innovation. Trancik’s project aims to identify how industrial clusters can enable companies to derive greater value from decarbonization, potentially making companies more willing to invest in the clean energy transition.

    To meet the ambitious climate goals that have been set by countries around the world, rising greenhouse gas emissions trends must be rapidly reversed. Industrial clusters — geographically co-located or otherwise-aligned groups of companies representing one or more industries — account for a significant portion of greenhouse gas emissions globally. With major energy consumers “clustered” in proximity, industrial clusters provide a potential platform to scale low-carbon solutions by enabling the aggregation of demand and the coordinated investment in physical energy supply infrastructure.

    In addition to Trancik, the research team working on this project will include Aliza Khurram, a postdoc in IDSS; Micah Ziegler, an IDSS research scientist; Melissa Stark, global energy transition services lead at Accenture; Laura Sanderfer, strategy consulting manager at Accenture; and Maria De Miguel, strategy senior analyst at Accenture.

    Eliminating childhood obesity

    Anette “Peko” Hosoi is the Neil and Jane Pappalardo Professor of Mechanical Engineering. A common theme in her work is the fundamental study of shape, kinematic, and rheological optimization of biological systems with applications to the emergent field of soft robotics. Her project will use both data from existing studies and synthetic data to create a return-on-investment (ROI) calculator for childhood obesity interventions so that companies can identify earlier returns on their investment beyond reduced health-care costs.

    Childhood obesity is too prevalent to be solved by a single company, industry, drug, application, or program. In addition to the physical and emotional impact on children, society bears a cost through excess health care spending, lost workforce productivity, poor school performance, and increased family trauma. Meaningful solutions require multiple organizations, representing different parts of society, working together with a common understanding of the problem, the economic benefits, and the return on investment. ROI is particularly difficult to defend for any single organization because investment and return can be separated by many years and involve asymmetric investments, returns, and allocation of risk. Hosoi’s project will consider the incentives for a particular entity to invest in programs in order to reduce childhood obesity.

    Hosoi will be joined by graduate students Pragya Neupane and Rachael Kha, both of IDSS, as well as a team from Accenture that includes Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences; Kaveh Safavi, senior managing director in Accenture Health Industry; and Elizabeth Naik, global health and public service research lead.

    Generating innovative organizational configurations and algorithms for dealing with the problem of post-pandemic employment

    Thomas Malone is the Patrick J. McGovern (1959) Professor of Management at the MIT Sloan School of Management and the founding director of the MIT Center for Collective Intelligence. His research focuses on how new organizations can be designed to take advantage of the possibilities provided by information technology. Malone will be joined in this project by John Horton, the Richard S. Leghorn (1939) Career Development Professor at the MIT Sloan School of Management, whose research focuses on the intersection of labor economics, market design, and information systems. Malone and Horton’s project will look to reshape the future of work with the help of lessons learned in the wake of the pandemic.

    The Covid-19 pandemic has been a major disruptor of work and employment, and it is not at all obvious how governments, businesses, and other organizations should manage the transition to a desirable state of employment as the pandemic recedes. Using large language models such as GPT-4, this project will look to identify new ways that companies can use AI to better match applicants to necessary jobs, create new types of jobs, assess skill training needed, and identify interventions to help include women and other groups whose employment was disproportionately affected by the pandemic.
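
    The project's models and data are not public; as a purely illustrative stand-in for the LLM-based matching it envisions, a simple skill-coverage scorer can rank postings for an applicant. Every job title and skill below is hypothetical:

```python
def match_score(applicant_skills: set[str], job_skills: set[str]) -> float:
    """Fraction of a job's required skills the applicant covers.
    A toy stand-in for the LLM-based matching the project envisions."""
    if not job_skills:
        return 0.0
    return len(applicant_skills & job_skills) / len(job_skills)

def best_jobs(applicant: set[str], jobs: dict[str, set[str]]) -> list[str]:
    """Rank job titles by how well the applicant covers their requirements."""
    return sorted(jobs, key=lambda j: match_score(applicant, jobs[j]), reverse=True)

# Hypothetical postings and their required skills
jobs = {
    "data analyst": {"sql", "statistics", "python"},
    "support lead": {"communication", "crm"},
}
applicant = {"python", "sql", "communication"}
ranking = best_jobs(applicant, jobs)
```

    An LLM would replace `match_score`, reading free-text resumes and postings instead of pre-extracted skill sets.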

    In addition to Malone and Horton, the research team will include Rob Laubacher, associate director and research scientist at the MIT Center for Collective Intelligence, and Kathleen Kennedy, executive director at the MIT Center for Collective Intelligence and senior director at MIT Horizon. The team will also include Nitu Nivedita, managing director of artificial intelligence at Accenture, and Thomas Hancock, data science senior manager at Accenture.

  • Artificial intelligence for augmentation and productivity

    The MIT Stephen A. Schwarzman College of Computing has awarded seed grants to seven projects that are exploring how artificial intelligence and human-computer interaction can be leveraged to enhance modern work spaces to achieve better management and higher productivity.

    Funded by Andrew W. Houston ’05 and Dropbox Inc., the projects are intended to be interdisciplinary and bring together researchers from computing, social sciences, and management.

    The seed grants are intended to enable the project teams to conduct research that leads to bigger endeavors in this rapidly evolving area, and to build community around questions related to AI-augmented management.

    The seven selected projects and research leads include:

    “LLMex: Implementing Vannevar Bush’s Vision of the Memex Using Large Language Models,” led by Pattie Maes of the Media Lab and David Karger of the Department of Electrical Engineering and Computer Science (EECS) and the Computer Science and Artificial Intelligence Laboratory (CSAIL). Inspired by Vannevar Bush’s Memex, this project proposes to design, implement, and test the concept of memory prosthetics using large language models (LLMs). The AI-based system will intelligently help an individual keep track of vast amounts of information, accelerate productivity, and reduce errors by automatically recording their work actions and meetings, supporting retrieval based on metadata and vague descriptions, and suggesting relevant, personalized information proactively based on the user’s current focus and context.

    “Using AI Agents to Simulate Social Scenarios,” led by John Horton of the MIT Sloan School of Management and Jacob Andreas of EECS and CSAIL. This project imagines the ability to easily simulate policies, organizational arrangements, and communication tools with AI agents before implementation. Tapping into the capabilities of modern LLMs to serve as a computational model of humans makes this vision of social simulation more realistic, and potentially more predictive.

    “Human Expertise in the Age of AI: Can We Have Our Cake and Eat it Too?” led by Manish Raghavan of MIT Sloan and EECS, and Devavrat Shah of EECS and the Laboratory for Information and Decision Systems. Progress in machine learning, AI, and algorithmic decision aids has raised the prospect that algorithms may complement human decision-making in a wide variety of settings. Rather than replacing human professionals, this project sees a future where AI and algorithmic decision aids play a role that is complementary to human expertise.

    “Implementing Generative AI in U.S. Hospitals,” led by Julie Shah of the Department of Aeronautics and Astronautics and CSAIL, Retsef Levi of MIT Sloan and the Operations Research Center, Kate Kellogg of MIT Sloan, and Ben Armstrong of the Industrial Performance Center. In recent years, studies have linked a rise in burnout among doctors and nurses in the United States with increased administrative burdens associated with electronic health records and other technologies. This project aims to develop a holistic framework to study how generative AI technologies can both increase productivity for organizations and improve job quality for workers in health care settings.

    “Generative AI Augmented Software Tools to Democratize Programming,” led by Harold Abelson of EECS and CSAIL, Cynthia Breazeal of the Media Lab, and Eric Klopfer of Comparative Media Studies/Writing. Progress in generative AI over the past year is fomenting an upheaval in assumptions about future careers in software and deprecating the role of coding. This project will stimulate a similar transformation in computing education for those who have no prior technical training by creating a software tool that could eliminate much of the need for learners to deal with code when creating applications.

    “Acquiring Expertise and Societal Productivity in a World of Artificial Intelligence,” led by David Atkin and Martin Beraja of the Department of Economics, and Danielle Li of MIT Sloan. Generative AI is thought to augment the capabilities of workers performing cognitive tasks. This project seeks to better understand how the arrival of AI technologies may impact skill acquisition and productivity, and to explore complementary policy interventions that will allow society to maximize the gains from such technologies.

    “AI Augmented Onboarding and Support,” led by Tim Kraska of EECS and CSAIL, and Christoph Paus of the Department of Physics. While LLMs have made enormous leaps forward in recent years and are poised to fundamentally change the way students and professionals learn about new tools and systems, there is often a steep learning curve that people must climb to make full use of them. To help mitigate the issue, this project proposes the development of new LLM-powered onboarding and support systems that will positively impact the way support teams operate and improve the user experience.
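
    The LLMex project described above implies a record-and-retrieve loop over captured work actions. As a rough sketch of only the retrieval step, with a trivial word-overlap scorer standing in for the LLM and every record and field name hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class WorkRecord:
    """One automatically captured action or meeting (hypothetical schema)."""
    text: str
    metadata: dict = field(default_factory=dict)

def score(record: WorkRecord, query: str) -> int:
    """Toy relevance score: words shared between the query and the record.
    In the proposed system an LLM would rank records instead."""
    q = set(query.lower().split())
    words = set(record.text.lower().split()) | {
        str(v).lower() for v in record.metadata.values()
    }
    return len(q & words)

def retrieve(records: list[WorkRecord], query: str, k: int = 3) -> list[WorkRecord]:
    """Return up to k records matching a vague description, best first."""
    ranked = sorted(records, key=lambda r: score(r, query), reverse=True)
    return [r for r in ranked[:k] if score(r, query) > 0]

log = [
    WorkRecord("edited budget spreadsheet", {"app": "excel", "day": "monday"}),
    WorkRecord("meeting about hiring plan", {"app": "zoom", "day": "tuesday"}),
    WorkRecord("reviewed hiring budget notes", {"app": "docs", "day": "friday"}),
]
hits = retrieve(log, "that hiring meeting on tuesday")
```

    The metadata fields illustrate why the project records context alongside content: "that meeting on Tuesday" is answerable only if the capture step stored when and where each action happened.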

  • Novo Nordisk to support MIT postdocs working at the intersection of AI and life sciences

    MIT’s School of Engineering and global health care company Novo Nordisk have announced the launch of a multi-year program to support postdoctoral fellows conducting research at the intersection of artificial intelligence and data science with life sciences. The MIT-Novo Nordisk Artificial Intelligence Postdoctoral Fellows Program will welcome its first cohort of up to 10 postdocs this fall, and will provide up to $10 million to support annual cohorts of up to 10 postdocs for two-year terms.

    “The research being conducted at the intersection of AI and life sciences has the potential to transform health care as we know it,” says Anantha Chandrakasan, dean of the School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “I am thrilled that the MIT-Novo Nordisk Program will support early-career researchers who work in this space.”

    The launch of the MIT-Novo Nordisk Program coincides with the 100th anniversary celebration of Novo Nordisk. The company was founded in 1923 and treated its first patients with the then newly discovered insulin in March of that year.

    “The use of AI in the health care industry presents a massive opportunity to improve the lives of people living with chronic diseases,” says Thomas Senderovitz, senior vice president for data science at Novo Nordisk. “Novo Nordisk is committed to the development of new, innovative solutions, and MIT hosts some of the most outstanding researchers in the field. We are therefore excited to support postdocs working on the cutting edge of AI and life sciences.”

    The MIT-Novo Nordisk Program will support postdocs advancing the use of AI in life science and health. Postdocs will join an annual cohort that participates in frequent events and gatherings. The cohort will meet regularly to exchange ideas about their work and discuss ways to amplify their impact.

    “We are excited to welcome postdocs working on AI, data science, health, and life sciences — research areas of strategic importance across MIT,” adds Chandrakasan.

    A central focus of the program will be offering postdocs professional development and mentorship opportunities. Fellows will be invited to entrepreneurship-focused workshops that enable them to learn from company founders, venture capitalists, and other entrepreneurial leaders. Fellows will also have the opportunity to receive mentorship from experts in life sciences and data science.

    “MIT is always exploring opportunities to innovate and enhance the postdoctoral experience,” adds MIT Provost Cynthia Barnhart. “The MIT-Novo Nordisk Program has been thoughtfully designed to introduce fellows to a wealth of experiences, skill sets, and perspectives that support their professional growth while prioritizing a sense of community with their cohort.”

    Angela Belcher, head of the Department of Biological Engineering, the James Mason Crafts Professor of Biological Engineering and Materials Science, and member of the Koch Institute for Integrative Cancer Research, and Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science, will serve as co-faculty leads for the program.

    The new program complements a separate postdoctoral fellowship program at MIT supported by the Novo Nordisk Foundation that focuses on enabling interdisciplinary research.

  • Bringing the social and ethical responsibilities of computing to the forefront

    There has been a remarkable surge in the use of algorithms and artificial intelligence to address a wide range of problems and challenges. While their adoption, particularly with the rise of AI, is reshaping nearly every industry sector, discipline, and area of research, such innovations often expose unexpected consequences that involve new norms, new expectations, and new rules and laws.

    To facilitate deeper understanding, the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative in the MIT Schwarzman College of Computing, recently brought together social scientists and humanists with computer scientists, engineers, and other computing faculty for an exploration of the ways in which the broad applicability of algorithms and AI has presented both opportunities and challenges in many aspects of society.

    “The very nature of our reality is changing. AI has the ability to do things that until recently were solely the realm of human intelligence — things that can challenge our understanding of what it means to be human,” remarked Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, in his opening address at the inaugural SERC Symposium. “This poses philosophical, conceptual, and practical questions on a scale not experienced since the start of the Enlightenment. In the face of such profound change, we need new conceptual maps for navigating the change.”

    The symposium offered a glimpse into the vision and activities of SERC in both research and education. “We believe our responsibility with SERC is to educate and equip our students and enable our faculty to contribute to responsible technology development and deployment,” said Georgia Perakis, the William F. Pounds Professor of Management in the MIT Sloan School of Management, co-associate dean of SERC, and the lead organizer of the symposium. “We’re drawing from the many strengths and diversity of disciplines across MIT and beyond and bringing them together to gain multiple viewpoints.”

    Through a succession of panels and sessions, the symposium delved into a variety of topics related to the societal and ethical dimensions of computing. In addition, 37 undergraduate and graduate students from a range of majors, including urban studies and planning, political science, mathematics, biology, electrical engineering and computer science, and brain and cognitive sciences, participated in a poster session to exhibit their research in this space, covering such topics as quantum ethics, AI collusion in storage markets, computing waste, and empowering users on social platforms for better content credibility.

    Showcasing a diversity of work

    In three sessions devoted to themes of beneficent and fair computing, equitable and personalized health, and algorithms and humans, the SERC Symposium showcased work by 12 faculty members across these domains.

    One such project from a multidisciplinary team of archaeologists, architects, digital artists, and computational social scientists aimed to preserve endangered heritage sites in Afghanistan with digital twins. The project team produced highly detailed interrogable 3D models of the heritage sites, in addition to extended reality and virtual reality experiences, as learning resources for audiences that cannot access these sites.

    In a project for the United Network for Organ Sharing, researchers showed how they used applied analytics to optimize various facets of an organ allocation system in the United States that is currently undergoing a major overhaul in order to make it more efficient, equitable, and inclusive for different racial, age, and gender groups, among others.

    Another talk discussed an area that has not yet received adequate public attention: the broader implications for equity that biased sensor data holds for the next generation of models in computing and health care.

    A talk on bias in algorithms considered both human bias and algorithmic bias, and the potential for improving results by taking into account differences in the nature of the two kinds of bias.

    Other highlighted research included the interaction between online platforms and human psychology; a study on whether decision-makers make systematic prediction mistakes based on the available information; and an illustration of how advanced analytics and computation can be leveraged to inform supply chain management, operations, and regulatory work in the food and pharmaceutical industries.

    Improving the algorithms of tomorrow

    “Algorithms are, without question, impacting every aspect of our lives,” said Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science, in kicking off a panel she moderated on the implications of data and algorithms.

    “Whether it’s in the context of social media, online commerce, automated tasks, and now a much wider range of creative interactions with the advent of generative AI tools and large language models, there’s little doubt that much more is to come,” Ozdaglar said. “While the promise is evident to all of us, there’s a lot to be concerned about as well. This is very much time for imaginative thinking and careful deliberation to improve the algorithms of tomorrow.”

    Turning to the panel, Ozdaglar asked experts from computing, social science, and data science for insights on how to understand what is to come and shape it to enrich outcomes for the majority of humanity.

    Sarah Williams, associate professor of technology and urban planning at MIT, emphasized the critical importance of comprehending the process of how datasets are assembled, as data are the foundation for all models. She also stressed the need for research to address the potential implications of biases in algorithms that often find their way in through their creators and the data used in their development. “It’s up to us to think about our own ethical solutions to these problems,” she said. “Just as it’s important to progress with the technology, we need to start the field of looking at these questions of what biases are in the algorithms? What biases are in the data, or in that data’s journey?”

    Shifting focus to generative models and whether the development and use of these technologies should be regulated, the panelists — who also included MIT’s Srini Devadas, professor of electrical engineering and computer science, John Horton, professor of information technology, and Simon Johnson, professor of entrepreneurship — all concurred that regulating open-source algorithms, which are publicly accessible, would be difficult given that regulators are still catching up and struggling to even set guardrails for technology that is now 20 years old.

    Returning to the question of how to effectively regulate the use of these technologies, Johnson proposed a progressive corporate tax system as a potential solution. He recommends basing companies’ tax payments on their profits, especially for large corporations whose massive earnings go largely untaxed due to offshore banking. By doing so, Johnson said that this approach can serve as a regulatory mechanism that discourages companies from trying to “own the entire world” by imposing disincentives.

    The role of ethics in computing education

    As computing continues to advance with no signs of slowing down, it is critical to educate students to be intentional about the social impact of the technologies they will be developing and deploying into the world. But can one actually be taught such things? If so, how?

    Caspar Hare, professor of philosophy at MIT and co-associate dean of SERC, posed this looming question to faculty on a panel he moderated on the role of ethics in computing education. All experienced in teaching ethics and thinking about the social implications of computing, each panelist shared their perspective and approach.

    A strong advocate for the importance of learning from history, Eden Medina, associate professor of science, technology, and society at MIT, said that “often the way we frame computing is that everything is new. One of the things that I do in my teaching is look at how people have confronted these issues in the past and try to draw from them as a way to think about possible ways forward.” Medina regularly uses case studies in her classes. She pointed to a paper by Yale University science historian Joanna Radin on the Pima Indian Diabetes Dataset, which raised ethical issues about the history of that particular data collection that many don’t consider, as an example of how decisions around technology and data can grow out of very specific contexts.

    Milo Phillips-Brown, associate professor of philosophy at Oxford University, talked about the Ethical Computing Protocol that he co-created while he was a SERC postdoc at MIT. The protocol, a four-step approach to building technology responsibly, is designed to train computer science students to think in a better and more accurate way about the social implications of technology by breaking the process down into more manageable steps. “The basic approach that we take very much draws on the fields of value-sensitive design, responsible research and innovation, participatory design as guiding insights, and then is also fundamentally interdisciplinary,” he said.

    Fields such as biomedicine and law have an ethics ecosystem that distributes the function of ethical reasoning in these areas. Oversight and regulation are provided to guide front-line stakeholders and decision-makers when issues arise, as are training programs and access to interdisciplinary expertise that they can draw from. “In this space, we have none of that,” said John Basl, associate professor of philosophy at Northeastern University. “For current generations of computer scientists and other decision-makers, we’re actually making them do the ethical reasoning on their own.” Basl commented further that teaching core ethical reasoning skills across the curriculum, not just in philosophy classes, is essential, and that the goal shouldn’t be for every computer scientist to be a professional ethicist, but for them to know enough of the landscape to be able to ask the right questions and seek out the relevant expertise and resources that exist.

    After the final session, interdisciplinary groups of faculty, students, and researchers engaged in animated discussions related to the issues covered throughout the day during a reception that marked the conclusion of the symposium.

  • Researchers develop novel AI-based estimator for manufacturing medicine

    When medical companies manufacture the pills and tablets that treat any number of illnesses, aches, and pains, they need to isolate the active pharmaceutical ingredient from a suspension and dry it. The process requires a human operator to monitor an industrial dryer, agitate the material, and watch for the compound to take on the right qualities for compressing into medicine. The job depends heavily on the operator’s observations.   

    Methods for making that process less subjective and a lot more efficient are the subject of a recent Nature Communications paper authored by researchers at MIT and Takeda. The paper’s authors devise a way to use physics and machine learning to categorize the rough surfaces that characterize particles in a mixture. The technique, which uses a physics-enhanced autocorrelation-based estimator (PEACE), could change pharmaceutical manufacturing processes for pills and powders, increasing efficiency and accuracy and resulting in fewer failed batches of pharmaceutical products.  

    “Failed batches or failed steps in the pharmaceutical process are very serious,” says Allan Myerson, a professor of practice in the MIT Department of Chemical Engineering and one of the study’s authors. “Anything that improves the reliability of the pharmaceutical manufacturing, reduces time, and improves compliance is a big deal.”

    The team’s work is part of an ongoing collaboration between Takeda and MIT, launched in 2020. The MIT-Takeda Program aims to leverage the experience of both MIT and Takeda to solve problems at the intersection of medicine, artificial intelligence, and health care.

    In pharmaceutical manufacturing, determining whether a compound is adequately mixed and dried ordinarily requires stopping an industrial-sized dryer and taking samples off the manufacturing line for testing. Researchers at Takeda thought artificial intelligence could improve the task and reduce stoppages that slow down production. Originally the research team planned to use videos to train a computer model to replace a human operator. But determining which videos to use to train the model still proved too subjective. Instead, the MIT-Takeda team decided to illuminate particles with a laser during filtration and drying, and measure particle size distribution using physics and machine learning. 

    “We just shine a laser beam on top of this drying surface and observe,” says Qihang Zhang, a doctoral student in MIT’s Department of Electrical Engineering and Computer Science and the study’s first author. 

    A physics-derived equation describes the interaction between the laser and the mixture, while machine learning characterizes the particle sizes. The technique doesn’t require stopping and restarting the dryer, which makes the entire job safer and more efficient than standard operating procedure, according to George Barbastathis, professor of mechanical engineering at MIT and corresponding author of the study.

    The machine learning algorithm also does not require large training datasets, because the physics allows for speedy training of the neural network.

    “We utilize the physics to compensate for the lack of training data, so that we can train the neural network in an efficient way,” says Zhang. “Only a tiny amount of experimental data is enough to get a good result.”
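
    The paper's actual PEACE estimator is not reproduced here. As an illustrative analogue only, the sketch below estimates a correlation length from the autocorrelation of a synthetic one-dimensional "speckle" trace; relating the statistics of scattered light to a characteristic feature size is the general physical idea behind autocorrelation-based particle sizing:

```python
import numpy as np

def correlation_length(signal: np.ndarray) -> int:
    """First lag at which the normalized autocorrelation falls below 1/e:
    a simple proxy for the characteristic feature (particle) size."""
    s = signal - signal.mean()
    acf = np.correlate(s, s, mode="full")[len(s) - 1:]
    acf /= acf[0]
    below = np.nonzero(acf < 1 / np.e)[0]
    return int(below[0]) if below.size else len(acf)

# Synthetic "speckle" trace: white noise smoothed with a Gaussian kernel
# of width sigma, so the 1/e correlation length should come out near 2*sigma.
rng = np.random.default_rng(0)
sigma = 5.0
noise = rng.standard_normal(5_000)
lags = np.arange(-30, 31)
kernel = np.exp(-lags**2 / (2 * sigma**2))
speckle = np.convolve(noise, kernel, mode="same")

est = correlation_length(speckle)  # expected to land near 2 * sigma = 10
```

    This is where the physics substitutes for data: the known relationship between the smoothing scale and the autocorrelation width pins down the answer, so only a small sample is needed, echoing Zhang's point about training efficiently with little experimental data.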

    Today, the only inline processes used for particle measurements in the pharmaceutical industry are for slurry products, where crystals float in a liquid. There is no method for measuring particles within a powder during mixing. Powders can be made from slurries, but when a liquid is filtered and dried its composition changes, requiring new measurements. In addition to making the process quicker and more efficient, using the PEACE mechanism makes the job safer because it requires less handling of potentially highly potent materials, the authors say. 

    The ramifications for pharmaceutical manufacturing could be significant, allowing drug production to be more efficient, sustainable, and cost-effective, by reducing the number of experiments companies need to conduct when making products. Monitoring the characteristics of a drying mixture is an issue the industry has long struggled with, according to Charles Papageorgiou, the director of Takeda’s Process Chemistry Development group and one of the study’s authors. 

    “It is a problem that a lot of people are trying to solve, and there isn’t a good sensor out there,” says Papageorgiou. “This is a pretty big step change, I think, with respect to being able to monitor, in real time, particle size distribution.”

    Papageorgiou said that the mechanism could have applications in other industrial pharmaceutical operations. At some point, data from the laser technique may be used to train video-based models, allowing manufacturers to use a camera for analysis rather than laser measurements. The company is now working to assess the tool on different compounds in its lab.

    The results come directly from collaboration between Takeda and three MIT departments: Mechanical Engineering, Chemical Engineering, and Electrical Engineering and Computer Science. Over the last three years, researchers at MIT and Takeda have worked together on 19 projects focused on applying machine learning and artificial intelligence to problems in the health-care and medical industry as part of the MIT-Takeda Program. 

    Often, it can take years for academic research to translate to industrial processes. But researchers are hopeful that direct collaboration could shorten that timeline. Takeda is a walking distance away from MIT’s campus, which allowed researchers to set up tests in the company’s lab, and real-time feedback from Takeda helped MIT researchers structure their research based on the company’s equipment and operations. 

    Combining the expertise and mission of both entities helps researchers ensure their experimental results will have real-world implications. The team has already filed for two patents and has plans to file for a third.

    Illuminating the money trail

    You may not know this, but the U.S. imposes a 12.5 percent tariff on imported flashlights. However, for a product category the federal government describes as “portable electric lamps designed to function by their own source of energy, other than flashlights,” the tariff is just 3.5 percent.

    At a glance, this seems inexplicable. Why is one kind of self-powered portable light taxed more heavily than another? According to MIT political science professor In Song Kim, a policy discrepancy like this often stems from the difference in firms’ political power, as well as the extent to which firms are empowered by global production networks. This is a subject Kim has spent years examining in detail, producing original scholarly results while opening up a wealth of big data about politics to the public.

    “We all understand companies as being important economic agents,” Kim says. “But companies are political agents, too. They are very important political actors.”

    In particular, Kim’s work has illuminated the effects of lobbying upon U.S. trade policy. International trade is often presented as an unalloyed good, opening up markets and fueling growth. Beyond that, trade issues are usually described at the industry level; we hear about what the agriculture lobby or auto industry wants. But in reality, different firms want different things, even within the same industry.

    As Kim’s work shows, most firms lobby for policies pertaining to specific components of their products, and trade policy consists heavily of carve-outs for companies, not industry-wide standards. Firms making non-flashlight portable lights, it would seem, are good at lobbying, but the benefits clearly do not carry over to all portable light makers, since the products are not perfect substitutes for one another. Meanwhile, as Kim’s research also shows, lobbying helps firms grow faster in size, even as lobbying-influenced policies may slow down the economy as a whole.

    “All our existing theories suggest that trade policy is a public good, in the sense that the benefits of open trade, the gains from trade, will be enjoyed by the public and will benefit the country as a whole,” Kim says. “But what I’ve learned is that trade policies are very, very granular. It’s become obvious to me that trade is no longer a public good. It’s actually a private good for individual companies.”

    Kim’s work includes over a dozen published journal articles over the last several years, several other forthcoming research papers, and a book he is currently writing. At the same time, Kim has created a public database, LobbyView, which tracks money in U.S. politics extending back to 1999. LobbyView, as an important collection of political information, has research, educational, and public-interest applications, enabling others, in academia or outside it, to further delve into the topic.

    “I want to contribute to the scholarly community, and I also want to create a public [resource] for our MIT community [and beyond], so we can all study politics through it,” Kim says.

    Keeping the public good in sight

    Kim grew up in South Korea, in a setting where politics was central to daily life. Kim’s grandfather, Kim Jae-soon, was the Speaker of the National Assembly in South Korea from 1988 through 1990 and an important figure in the country’s government.

    “I’ve always been fascinated by politics,” says Kim, who remembers prominent political figures dropping by the family home when he was young. One of the principal lessons Kim learned about politics from his grandfather, however, was not about proximity to power, but the importance of public service. The enduring lesson of his family’s engagement with politics, Kim says, is that “I truly believe in contributing to the public good.”

    Kim found his own way of contributing to the public good not as a politician but as a scholar of politics. He received his BA in political science from Yonsei University in Seoul but decided he wanted to pursue graduate studies in the U.S. He earned an MA in law and diplomacy from the Fletcher School at Tufts University, then an MA in political science at George Washington University. By this time, Kim had become focused on the quantitative analysis of trade policy; for his PhD work, he attended Princeton University, was awarded his doctorate in 2014, and joined the MIT faculty that year.

    Among the key pieces of research Kim has published, one paper, “Political Cleavages within Industry: Firm-level Lobbying for Trade Liberalization,” published in the American Political Science Review and growing out of his dissertation research, helped show how remarkably specialized many trade policies are. As of 2017, the U.S. had almost 17,000 types of products it made tariff decisions about. Many of these are the component parts of a product; about two-thirds of international trade consists of manufactured components that get shipped around during the production process, rather than raw goods or finished products. That paper won the 2018 Michael Wallerstein Award for the best published article in political economy in the previous year.

    Another 2017 paper Kim co-authored, “The Charmed Life of Superstar Exporters,” from the Journal of Politics, provides more empirical evidence of the differences among firms within an industry. The “superstar” firms that are the largest exporters tend to lobby the most about trade politics; a firm’s characteristics reveal more about its preferences for open trade than the possibility that its industry as a whole will gain a comparative advantage internationally.

    Kim often uses large-scale data and computational methods to study international trade and trade politics. Still another paper he has co-authored, “Measuring Trade Profile with Granular Product-level Trade Data,” published in the American Journal of Political Science in 2020, traces trade relationships in highly specific terms. Looking at over 2 billion observations of international trade data, Kim developed an algorithm to group countries based on which products they import and export. The methodology helps researchers to learn about the highly different developmental paths that countries follow, and about the deepening international competition between countries such as the U.S. and China.
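Kim's algorithm operates on billions of product-level records; as a toy illustration of the underlying idea — representing each country as a vector of product-level trade values and grouping countries with similar profiles — a sketch with hypothetical HS-style product codes and a greedy cosine-similarity grouping (not the paper's actual method) might look like:

```python
from math import sqrt

# Toy product-level trade profiles: country -> {product_code: export_value}.
# Codes and values are invented, standing in for thousands of real HS lines.
profiles = {
    "A": {"8513.10": 90, "8539.21": 10},   # mostly flashlights
    "B": {"8513.10": 80, "8539.21": 20},   # a similar mix to A
    "C": {"1001.99": 95, "1005.90": 5},    # mostly grains
}

def cosine(p, q):
    """Cosine similarity between two sparse trade profiles."""
    keys = set(p) | set(q)
    dot = sum(p.get(k, 0) * q.get(k, 0) for k in keys)
    norm_p = sqrt(sum(v * v for v in p.values()))
    norm_q = sqrt(sum(v * v for v in q.values()))
    return dot / (norm_p * norm_q)

def group_countries(profiles, threshold=0.9):
    """Greedy grouping: assign each country to the first cluster whose
    representative profile it resembles above the threshold."""
    clusters = []  # list of (representative_country, members)
    for country, prof in profiles.items():
        for rep, members in clusters:
            if cosine(profiles[rep], prof) >= threshold:
                members.append(country)
                break
        else:
            clusters.append((country, [country]))
    return [members for _, members in clusters]

print(group_countries(profiles))  # [['A', 'B'], ['C']]
```

Countries A and B trade nearly the same basket of products and land in one group; C, with a disjoint basket, stands apart — the same intuition, at toy scale, behind grouping countries by trade profile.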

    At other times, Kim has analyzed who is influencing trade policy. His paper “Mapping Political Communities,” from the journal Political Analysis in 2021, looks at the U.S. Congress and uses mandatory reports filed by lobbyists to build a picture of which interest groups are most closely connected to which politicians.
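The underlying data structure here is a bipartite graph linking interest groups to the politicians named in their disclosure reports. A minimal sketch, with invented group and politician names rather than real filings, of building that adjacency and reading off one politician's closest groups:

```python
from collections import defaultdict

# Hypothetical disclosure records: (interest_group, politician) pairs
# extracted from mandatory lobbying reports.
reports = [
    ("Portable Light Assn", "Rep. Smith"),
    ("Portable Light Assn", "Rep. Jones"),
    ("Grain Growers", "Rep. Jones"),
    ("Grain Growers", "Rep. Lee"),
]

def politician_profiles(reports):
    """Bipartite adjacency: politician -> set of interest groups lobbying them."""
    adj = defaultdict(set)
    for group, politician in reports:
        adj[politician].add(group)
    return dict(adj)

def closest_groups(adj, politician):
    """Interest groups connected to one politician, in sorted order."""
    return sorted(adj.get(politician, set()))

adj = politician_profiles(reports)
print(closest_groups(adj, "Rep. Jones"))  # ['Grain Growers', 'Portable Light Assn']
```

Projecting this bipartite graph onto either side — politicians who share lobbyists, or groups that lobby the same politicians — is the standard first step toward detecting the communities the paper maps.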

    Kim has published all his papers while balancing both his scholarly research and the public launch of LobbyView, which occurred in 2018. He was awarded tenure by MIT in the spring of 2022. Currently he is an associate professor in the Department of Political Science and a faculty affiliate of the Institute for Data, Systems, and Society.

    By the book

    Kim has continued to explore firm-level lobbying dynamics, although his recent research runs in a few directions. In a 2021 working paper, Kim and co-author Federico Huneeus of the Central Bank of Chile built a model estimating that eliminating lobbying in the U.S. could increase productivity by as much as 6 percent.

    “Political rents [favorable policies] given to particular companies might introduce inefficiencies or a misallocation of resources in the economy,” Kim says. “You could allocate those resources to more productive although politically inactive firms, but now they’re given to less productive and yet politically active big companies, increasing market concentration and monopolies.”

    Kim is on sabbatical during the 2022-23 academic year, working on a book about the importance of firms’ political activities in trade policymaking. The book will have an expansive timeframe, dating back to ancient times, which underscores the salience of trade policy across eras. At the same time, the book will analyze the distinctive features of modern trade politics with deepening global production networks.

    “I’m trying to allow people to learn about the history of trade politics, to show how the politics have changed over time,” Kim says. “In doing that, I’m also highlighting the importance of firm-to-firm trade and the emergence of new trade coalitions among firms in different countries and industries that are linked through the global production chain.”

    While continuing his own scholarly research, Kim still leads LobbyView, which he views both as a big data resource for any scholars interested in money in politics and an excellent teaching resource for his MIT classes, as students can tap into it for projects and papers. LobbyView contains so much data, in fact, that part of the challenge is finding ways to mine it effectively.

    “It really offers me an opportunity to work with MIT students,” Kim says of LobbyView. “What I think I can contribute is to bring those technologies to our understanding of politics. Having this unique data set can really allow students here to use technology to learn about politics, and I believe that fits the MIT identity.”

    Boosting passenger experience and increasing connectivity at the Hong Kong International Airport

    Recently, a cohort of 36 students from MIT and universities across Hong Kong came together for the MIT Entrepreneurship and Maker Skills Integrator (MEMSI), an intense two-week startup boot camp hosted at the MIT Hong Kong Innovation Node.

    “We’re very excited to be in Hong Kong,” said Professor Charles Sodini, LeBel Professor of Electrical Engineering and faculty director of the Node. “The dream always was to bring MIT and Hong Kong students together.”

    Students collaborated on six teams to meet real-world industry challenges through action learning, defining a problem, designing a solution, and crafting a business plan. The experience culminated in the MEMSI Showcase, where each team presented its process and unique solution to a panel of judges. “The MEMSI program is a great demonstration of important international educational goals for MIT,” says Professor Richard Lester, associate provost for international activities and chair of the Node Steering Committee at MIT. “It creates opportunities for our students to solve problems in a particular and distinctive cultural context, and to learn how innovations can cross international boundaries.” 

    Meeting an urgent challenge in the travel and tourism industry

    The Airport Authority Hong Kong (AAHK) served as the program’s industry partner for the third consecutive year, challenging students to conceive innovative ideas that make passenger travel more personalized from end to end while increasing connectivity. As the travel industry recovers profitability and welcomes back crowds amid ongoing delays and labor shortages, the need for a more passenger-centric travel ecosystem is urgent.

    Hong Kong International Airport is the world’s third-busiest international passenger airport and its busiest for cargo. Students took an insider’s tour of the airport to gain on-the-ground orientation. They observed firsthand the complex logistics, possibilities, and constraints of operating with a team of 78,000 employees who serve 71.5 million passengers with unique needs and itineraries.

    Throughout the program, the cohort was coached and supported by MEMSI alumni, travel industry mentors, and MIT faculty such as Richard de Neufville, professor of engineering systems.

    The mood inside the open-plan MIT Hong Kong Innovation Node was nonstop energetic excitement for the entire program. Each of the six teams was composed of students from MIT and from Hong Kong universities. They learned to work together under time pressure, develop solutions, receive feedback from industry mentors, and iterate around the clock.

    “MEMSI was an enriching and amazing opportunity to learn about entrepreneurship while collaborating with a diverse team to solve a complex problem,” says Maria Li, a junior majoring in computer science, economics, and data science at MIT. “It was incredible to see the ideas we initially came up with as a team turn into a single, thought-out solution by the end.”

    Unsurprisingly, given MIT’s focus on piloting the latest technology and Hong Kong’s tech-savvy culture as a global hub, many team projects focused on virtual reality, apps, and wearable technology designed to make passengers’ journeys more individualized, efficient, or enjoyable.

    After observing geospatial patterns charting passengers’ movement through an airport, one team realized that many people on long trips aim to meet fitness goals by consciously getting their daily steps power walking the expansive terminals. The team’s prototype, FitAir, is a smart, biometric token integrated virtual coach, which plans walking routes within the airport to promote passenger health and wellness.

    Another team noted a common frustration among frequent travelers who manage multiple mileage rewards program profiles, passwords, and status reports. They proposed AirPoint, a digital wallet that consolidates different rewards programs and presents passengers with all their airport redemption opportunities in one place.

    “Today, there is no loser,” said Vivian Cheung, chief operating officer of AAHK, who served as one of the judges. “Everyone is a winner. I am a winner, too. I have learned a lot from the showcase. Some of the ideas, I believe, can really become a business.”

    Cheung noted that in just 12 days, all teams observed and solved her organization’s pain points and successfully designed solutions to address them.

    More than a competition

    Although many of the models pitched are inventive enough to potentially shape the future of travel, the main focus of MEMSI isn’t to act as yet another startup challenge and incubator.

    “What we’re really focusing on is giving students the ability to learn entrepreneurial thinking,” explains Marina Chan, senior director and head of education at the Node. “It’s the dynamic experience in a highly connected environment that makes being in Hong Kong truly unique. When students can adapt and apply theory to an international context, it builds deeper cultural competency.”

    From an aerial view, the boot camp produced many entrepreneurs in the making, lasting friendships, and a new respect for other cultural backgrounds and operating environments.

    “I learned the overarching process of how to make a startup pitch, all the way from idea generation, market research, and making business models, to the pitch itself and the presentation,” says Arun Wongprommoon, a senior double majoring in computer science and engineering and linguistics.  “It was all a black box to me before I came into the program.”

    He said he gained tremendous respect for the startup world and the pure hard work and collaboration required to get ahead.

    Spearheaded by the Node, MEMSI is a collaboration among the MIT Innovation Initiative, the Martin Trust Center for Entrepreneurship, the MIT International Science and Technology Initiatives, and Project Manus. Learn more about applying to MEMSI.

    Gaining real-world industry experience through Break Through Tech AI at MIT

    Taking what they learned conceptually about artificial intelligence and machine learning (ML) this year, students from across the Greater Boston area had the opportunity to apply their new skills to real-world industry projects as part of an experiential learning opportunity offered through Break Through Tech AI at MIT.

    Hosted by the MIT Schwarzman College of Computing, Break Through Tech AI is a pilot program that aims to bridge the talent gap for women and underrepresented genders in computing fields by providing skills-based training, industry-relevant portfolios, and mentoring to undergraduate students in regional metropolitan areas in order to position them more competitively for careers in data science, machine learning, and artificial intelligence.

    “Programs like Break Through Tech AI give us opportunities to connect with other students and other institutions, and allow us to bring MIT’s values of diversity, equity, and inclusion to the learning and application in the spaces that we hold,” says Alana Anderson, assistant dean of diversity, equity, and inclusion for the MIT Schwarzman College of Computing.

    The inaugural cohort of 33 undergraduates from 18 Greater Boston-area schools, including Salem State University, Smith College, and Brandeis University, began the free, 18-month program last summer with an eight-week, online skills-based course to learn the basics of AI and machine learning. Students then split into small groups in the fall to collaborate on six machine learning challenge projects presented to them by MathWorks, MIT-IBM Watson AI Lab, and Replicate. The students dedicated five hours or more each week to meet with their teams, teaching assistants, and project advisors, including convening once a month at MIT, while juggling their regular academic course load with other daily activities and responsibilities.

    The challenges gave the undergraduates the chance to help contribute to actual projects that industry organizations are working on and to put their machine learning skills to the test. Members from each organization also served as project advisors, providing encouragement and guidance to the teams throughout.

    “Students are gaining industry experience by working closely with their project advisors,” says Aude Oliva, director of strategic industry engagement at the MIT Schwarzman College of Computing and the MIT director of the MIT-IBM Watson AI Lab. “These projects will be an add-on to their machine learning portfolio that they can share as a work example when they’re ready to apply for a job in AI.”

    Over the course of 15 weeks, teams delved into large-scale, real-world datasets to train, test, and evaluate machine learning models in a variety of contexts.

    In December, the students celebrated the fruits of their labor at a showcase event held at MIT in which the six teams gave final presentations on their AI projects. The projects not only allowed the students to build up their AI and machine learning experience, but also helped to “improve their knowledge base and skills in presenting their work to both technical and nontechnical audiences,” Oliva says.

    For a project on traffic data analysis, students were trained in MATLAB, a programming and numeric computing platform developed by MathWorks, to create a model that enables decision-making in autonomous driving by predicting future vehicle trajectories. “It’s important to realize that AI is not that intelligent. It’s only as smart as you make it and that’s exactly what we tried to do,” said Brandeis University student Srishti Nautiyal as she introduced her team’s project to the audience. With companies already making autonomous vehicles from planes to trucks a reality, Nautiyal, a physics and mathematics major, shared that her team was also highly motivated to consider the ethical issues of the technology in their model for the safety of passengers, drivers, and pedestrians.
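The students' MATLAB model itself is not described in detail; as a point of reference, the simplest baseline that trajectory-prediction models are typically measured against is constant-velocity extrapolation, sketched here in Python with made-up coordinates:

```python
def predict_trajectory(history, steps, dt=0.1):
    """Constant-velocity baseline: extrapolate the last observed motion.
    `history` is a list of (x, y) positions sampled every `dt` seconds."""
    (x0, y0), (x1, y1) = history[-2], history[-1]
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt
    return [(x1 + vx * dt * k, y1 + vy * dt * k) for k in range(1, steps + 1)]

# A vehicle moving 1 meter per sample along x:
past = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
print(predict_trajectory(past, steps=3))  # [(3.0, 0.0), (4.0, 0.0), (5.0, 0.0)]
```

A learned model earns its keep by beating this baseline in exactly the situations that matter for safety: turns, lane changes, and braking, where velocity is anything but constant.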

    Using census data to train a model can be tricky because such data are often messy and full of holes. In a project on algorithmic fairness for the MIT-IBM Watson AI Lab, the hardest task for the team was cleaning up mountains of unorganized data in a way that still let them draw insights from it. The project — which aimed to demonstrate fairness interventions applied to a real dataset, evaluating and comparing the effectiveness of different interventions and fair metric-learning techniques — could eventually serve as an educational resource for data scientists interested in learning about fairness in AI and using it in their work, as well as promote the practice of evaluating the ethical implications of machine learning models in industry.
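One standard check such a fairness demonstration would include is the demographic parity gap: the difference in positive-prediction rates between groups. A minimal sketch with hypothetical model outputs and group labels (not the team's actual code or data):

```python
def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between two groups.
    `predictions` are 0/1 model outputs; `groups` labels each row "a" or "b"."""
    def rate(g):
        vals = [p for p, grp in zip(predictions, groups) if grp == g]
        return sum(vals) / len(vals)
    return abs(rate("a") - rate("b"))

# Hypothetical outputs over a cleaned census-style dataset:
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_gap(preds, groups))  # 0.75 vs. 0.25 -> gap of 0.5
```

A fairness intervention succeeds, on this metric, when it shrinks the gap toward zero without unduly degrading overall accuracy — which is exactly the trade-off such comparisons are designed to expose.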

    Other challenge projects included an ML-assisted whiteboard for nontechnical people to interact with ready-made machine learning models, and a sign language recognition model to help people with disabilities communicate with others. A team that worked on a visual language app set out to include over 50 languages in their model to increase access for the millions of people throughout the world who are visually impaired. According to the team, similar apps on the market currently offer only up to 23 languages.

    Throughout the semester, students persisted and demonstrated grit in order to cross the finish line on their projects. With the final presentations marking the conclusion of the fall semester, students will return to MIT in the spring to continue their Break Through Tech AI journey to tackle another round of AI projects. This time, the students will work with Google on new machine learning challenges that will enable them to hone their AI skills even further with an eye toward launching a successful career in AI.