More stories

  • The tenured engineers of 2023

    In 2023, MIT granted tenure to nine faculty members across the School of Engineering. This year’s tenured engineers hold appointments in the departments of Biological Engineering, Civil and Environmental Engineering, Electrical Engineering and Computer Science (which reports jointly to the School of Engineering and MIT Schwarzman College of Computing), Materials Science and Engineering, and Mechanical Engineering, as well as the Institute for Medical Engineering and Science (IMES).

    “I am truly inspired by this remarkable group of talented faculty members,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “The work they are doing, both in the lab and in the classroom, has made a tremendous impact at MIT and in the wider world. Their important research has applications in a diverse range of fields and industries. I am thrilled to congratulate them on the milestone of receiving tenure.”

    This year’s newly tenured engineering faculty include:

    Michael Birnbaum, Class of 1956 Career Development Professor, associate professor of biological engineering, and faculty member at the Koch Institute for Integrative Cancer Research at MIT, works on understanding and manipulating immune recognition in cancer and infections. By using a variety of techniques to study the antigen recognition of T cells, he and his team aim to develop the next generation of immunotherapies.  
    Tamara Broderick, associate professor of electrical engineering and computer science and member of the MIT Laboratory for Information and Decision Systems (LIDS) and the MIT Institute for Data, Systems, and Society (IDSS), works to provide fast and reliable quantification of uncertainty and robustness in modern data analysis procedures. Broderick and her research group develop data analysis tools with applications in fields including genetics, economics, and assistive technology. 
    Tal Cohen, associate professor of civil and environmental engineering and mechanical engineering, uses nonlinear solid mechanics to understand how materials behave under extreme conditions. By studying material instabilities, extreme dynamic loading conditions, growth, and chemical coupling, Cohen and her team combine theoretical models and experiments to shape our understanding of the observed phenomena and apply those insights in the design and characterization of material systems. 
    Betar Gallant, Class of 1922 Career Development Professor and associate professor of mechanical engineering, develops advanced materials and chemistries for next-generation lithium-ion and lithium primary batteries and electrochemical carbon dioxide mitigation technologies. Her group’s work could lead to higher-energy and more sustainable batteries for electric vehicles, longer-lasting implantable medical devices, and new methods of carbon capture and conversion. 
    Rafael Jaramillo, Thomas Lord Career Development Professor and associate professor of materials science and engineering, studies the synthesis, properties, and applications of electronic materials, particularly chalcogenide compound semiconductors. His work has applications in microelectronics, integrated photonics, telecommunications, and photovoltaics. 
    Benedetto Marelli, associate professor of civil and environmental engineering, conducts research on the synthesis, assembly, and nanomanufacturing of structural biopolymers. He and his research team develop biomaterials for applications in agriculture, food security, and food safety. 
    Ellen Roche, Latham Family Career Development Professor, an associate professor of mechanical engineering, and a core faculty member of IMES, designs and develops implantable, biomimetic therapeutic devices and soft robotics that mechanically assist and repair tissue, deliver therapies, and enable enhanced preclinical testing. Her devices have a wide range of applications in human health, including cardiovascular and respiratory disease. 
    Serguei Saavedra, associate professor of civil and environmental engineering, uses systems thinking, synthesis, and mathematical modeling to study the persistence of ecological systems under changing environments. His theoretical research is used to develop hypotheses and corroborate predictions of how ecological systems respond to climate change. 
    Justin Solomon, associate professor of electrical engineering and computer science and member of the MIT Computer Science and Artificial Intelligence Laboratory and MIT Center for Computational Science and Engineering, works at the intersection of geometry, large-scale optimization, computer graphics, and machine learning. His research has diverse applications in machine learning, computer graphics, and geometric data processing.

  • The curse of variety in transportation systems

    Cathy Wu has always delighted in systems that run smoothly. In high school, she designed a project to optimize the best route for getting to class on time. Her research interests and career track are evidence of a propensity for organizing and optimizing, coupled with a strong sense of responsibility to contribute to society instilled by her parents at a young age.

    As an undergraduate at MIT, Wu explored domains like agriculture, energy, and education, eventually homing in on transportation. “Transportation touches each of our lives,” she says. “Every day, we experience the inefficiencies and safety issues as well as the environmental harms associated with our transportation systems. I believe we can and should do better.”

    But doing so is complicated. Consider the long-standing issue of traffic systems control. Wu explains that it is not one problem, but more accurately a family of control problems impacted by variables like time of day, weather, and vehicle type — not to mention the types of sensing and communication technologies used to measure roadway information. Every differentiating factor introduces an exponentially larger set of control problems. There are thousands of control-problem variations and hundreds, if not thousands, of studies and papers dedicated to each problem. Wu refers to the sheer number of variations as the curse of variety — and it is hindering innovation.
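
    To see how quickly the variations compound, consider a back-of-envelope count. The factors and option counts below are hypothetical, not drawn from Wu’s work; the point is only that each new factor multiplies the number of distinct control problems:

    ```python
    # Toy illustration of the "curse of variety": every differentiating factor
    # multiplies the number of distinct traffic-control problems.
    # All factor names and option counts here are made up for illustration.
    factors = {
        "time of day": 4,         # e.g., AM peak, midday, PM peak, overnight
        "weather": 3,             # e.g., clear, rain, snow
        "vehicle mix": 5,         # e.g., shares of cars, trucks, buses
        "sensing technology": 4,  # e.g., loop detectors, cameras, connected vehicles
    }

    variants = 1
    for options in factors.values():
        variants *= options

    print(f"{variants} distinct control problems from just {len(factors)} factors")
    # -> 240 distinct control problems from just 4 factors
    ```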

    “To prove that a new control strategy can be safely deployed on our streets can take years. As time lags, we lose opportunities to improve safety and equity while mitigating environmental impacts. Accelerating this process has huge potential,” says Wu.  

    Which is why she and her group in the MIT Laboratory for Information and Decision Systems are devising machine learning-based methods to solve not just a single control problem or a single optimization problem, but families of control and optimization problems at scale. “In our case, we’re examining emerging transportation problems that people have spent decades trying to solve with classical approaches. It seems to me that we need a different approach.”

    Optimizing intersections

    Currently, Wu’s largest research endeavor is called Project Greenwave. There are many sectors that directly contribute to climate change, but transportation is responsible for the largest share of greenhouse gas emissions — 29 percent, of which 81 percent is due to land transportation. And while much of the conversation around mitigating environmental impacts related to mobility is focused on electric vehicles (EVs), electrification has its drawbacks. EV fleet turnover is time-consuming (“on the order of decades,” says Wu), and limited global access to the technology presents a significant barrier to widespread adoption.

    Wu’s research, on the other hand, addresses traffic control problems by leveraging deep reinforcement learning. Specifically, she is looking at traffic intersections — and for good reason. In the United States alone, there are more than 300,000 signalized intersections where vehicles must stop or slow down before re-accelerating. And every re-acceleration burns fossil fuels and contributes to greenhouse gas emissions.

    Highlighting the magnitude of the issue, Wu says, “We have done preliminary analysis indicating that up to 15 percent of land transportation CO2 is wasted through energy spent idling and re-accelerating at intersections.”
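
    Combining that estimate with the emissions shares quoted above gives a rough sense of the stakes. As a back-of-envelope calculation:

    \[
    0.29 \times 0.81 \times 0.15 \approx 0.035,
    \]

    or up to roughly 3.5 percent of total greenhouse gas emissions potentially attributable to idling and re-accelerating at intersections.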

    To date, she and her group have modeled 30,000 different intersections across 10 major metropolitan areas in the United States. That is 30,000 different configurations, roadway topologies (e.g., grade of road or elevation), different weather conditions, and variations in travel demand and fuel mix. Each intersection and its corresponding scenarios represents a unique multi-agent control problem.

    Wu and her team are devising techniques that can solve not just one, but a whole family of problems comprising tens of thousands of scenarios. Put simply, the idea is to coordinate the timing of vehicles so they arrive at intersections when traffic lights are green, thereby eliminating the start, stop, re-accelerate conundrum. Along the way, they are building an ecosystem of tools, datasets, and methods to enable roadway interventions and impact assessments of strategies to significantly reduce carbon-intense urban driving.
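
    As a loose sketch of the coordination idea (a classic fixed-time “green wave,” not Project Greenwave’s learning-based method), each signal’s green phase can be offset by the travel time of a platoon moving at a target speed; the corridor geometry, speed, and cycle length below are assumed for illustration:

    ```python
    # Classic fixed-time "green wave" along a single corridor: start each
    # signal's green phase when a platoon traveling at the target speed
    # arrives. Hypothetical corridor values; illustrative only.
    CYCLE_S = 90.0       # common signal cycle length, seconds (assumed)
    SPEED_MPS = 12.5     # target progression speed, about 45 km/h (assumed)
    distances_m = [0, 400, 950, 1500]  # signal positions along the corridor

    for i, d in enumerate(distances_m):
        travel_s = d / SPEED_MPS       # platoon's arrival time at signal i
        offset_s = travel_s % CYCLE_S  # green start relative to signal 0
        print(f"signal {i}: green starts {offset_s:.1f} s into each cycle")
    ```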

    Their collaborator on the project is the Utah Department of Transportation, which Wu says has played an essential role, in part by sharing data and practical knowledge that she and her group otherwise would not have been able to access publicly.

    “I appreciate industry and public sector collaborations,” says Wu. “When it comes to important societal problems, one really needs grounding with practitioners. One needs to be able to hear the perspectives in the field. My interactions with practitioners expand my horizons and help ground my research. You never know when you’ll hear the perspective that is the key to the solution, or perhaps the key to understanding the problem.”

    Finding the best routes

    In a similar vein, she and her research group are tackling large coordination problems. For example, vehicle routing. “Every day, delivery trucks route more than a hundred thousand packages for the city of Boston alone,” says Wu. Accomplishing the task requires, among other things, figuring out which trucks to use, which packages to deliver, and the order in which to deliver them as efficiently as possible. If and when the trucks are electrified, they will need to be charged, adding another wrinkle to the process and further complicating route optimization.

    The vehicle routing problem, and therefore the scope of Wu’s work, extends beyond truck routing for package delivery. Ride-hailing cars may need to pick up objects as well as drop them off; and what if delivery is done by bicycle or drone? In partnership with Amazon, for example, Wu and her team addressed routing and path planning for hundreds of robots (up to 800) in their warehouses.

    Every variation requires custom heuristics that are expensive and time-consuming to develop. Again, this is really a family of problems — each one complicated, time-consuming, and currently unsolved by classical techniques — and they are all variations of a central routing problem. The curse of variety meets operations and logistics.

    By combining classical approaches with modern deep-learning methods, Wu is looking for a way to automatically identify heuristics that can effectively solve all of these vehicle routing problems. So far, her approach has proved successful.

    “We’ve contributed hybrid learning approaches that take existing solution methods for small problems and incorporate them into our learning framework to scale and accelerate that existing solver for large problems. And we’re able to do this in a way that can automatically identify heuristics for specialized variations of the vehicle routing problem.” The next step, says Wu, is applying a similar approach to multi-agent robotics problems in automated warehouses.
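
    A minimal caricature of that hybrid idea, with a naive geographic split standing in for the learned decomposition and a classical nearest-neighbor heuristic as the small-problem solver (all data synthetic):

    ```python
    # Decompose a large routing instance into small subproblems, solve each
    # with a classical heuristic, and concatenate the routes. In a hybrid
    # learning framework the decomposition would be learned; here it is a
    # naive quadrant split, purely for illustration.
    import math
    import random

    random.seed(0)
    depot = (0.0, 0.0)
    customers = [(random.uniform(-10, 10), random.uniform(-10, 10)) for _ in range(40)]

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def nearest_neighbor_route(points, start):
        """Classical small-problem heuristic: visit the closest unvisited point."""
        route, current, remaining = [], start, list(points)
        while remaining:
            nxt = min(remaining, key=lambda p: dist(current, p))
            remaining.remove(nxt)
            route.append(nxt)
            current = nxt
        return route

    # Naive decomposition: one small subproblem per quadrant.
    clusters = {}
    for c in customers:
        clusters.setdefault((c[0] >= 0, c[1] >= 0), []).append(c)

    full_route = []
    for pts in clusters.values():
        full_route.extend(nearest_neighbor_route(pts, depot))

    total = dist(depot, full_route[0]) + sum(
        dist(a, b) for a, b in zip(full_route, full_route[1:])
    )
    print(f"{len(clusters)} subproblems, combined tour length {total:.1f}")
    ```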

    Wu and her group are making big strides, in part due to their dedication to use-inspired basic research. Rather than applying known methods or science to a problem, they develop new methods, new science, to address problems. The methods she and her team employ are necessitated by societal problems with practical implications. The inspiration for the approach? None other than Louis Pasteur, whose research style later lent its name to the now-famous concept of “Pasteur’s Quadrant.” Anthrax was decimating the sheep population, and Pasteur wanted to better understand why and what could be done about it. The tools of the time could not solve the problem, so he invented a new field, microbiology, not out of curiosity but out of necessity.

  • Statistics, operations research, and better algorithms

    In this day and age, many companies and institutions are not just data-driven, but data-intensive. Insurers, health providers, government agencies, and social media platforms are all heavily dependent on data-rich models and algorithms to identify the characteristics of the people who use them, and to nudge their behavior in various ways.

    That doesn’t mean organizations are always using optimal models, however. Determining efficient algorithms is a research area of its own — and one where Rahul Mazumder happens to be a leading expert.

    Mazumder, an associate professor in the MIT Sloan School of Management and an affiliate of the Operations Research Center, works both to expand the techniques of model-building and to refine models that apply to particular problems. His work pertains to a wealth of areas, including statistics and operations research, with applications in finance, health care, advertising, online recommendations, and more.

    “There is engineering involved, there is science involved, there is implementation involved, there is theory involved, it’s at the junction of various disciplines,” says Mazumder, who is also affiliated with the Center for Statistics and Data Science and the MIT-IBM Watson AI Lab.

    There is also a considerable amount of practical-minded judgment, logic, and common-sense decision-making at play, in order to bring the right techniques to bear on any individual task.

    “Statistics is about having data coming from a physical system, or computers, or humans, and you want to make sense of the data,” Mazumder says. “And you make sense of it by building models because that gives some pattern to a dataset. But of course, there is a lot of subjectivity in that. So, there is subjectivity in statistics, but also mathematical rigor.”

    Over roughly the last decade, Mazumder, often working with co-authors, has published about 40 peer-reviewed papers, won multiple academic awards, collaborated with major companies about their work, and helped advise graduate students. For his research and teaching, Mazumder was granted tenure by MIT last year.

    From deep roots to new tools

    Mazumder grew up in Kolkata, India, where his father was a professor at the Indian Statistical Institute and his mother was a schoolteacher. Mazumder received his undergraduate and master’s degrees from the Indian Statistical Institute as well, although without really focusing on the same areas as his father, whose work was in fluid mechanics.

    For his doctoral work, Mazumder attended Stanford University, where he earned his PhD in 2012. After a year as a postdoc at MIT’s Operations Research Center, he joined the faculty at Columbia University, then moved to MIT in 2015.

    While Mazumder’s work has many facets, his research portfolio does have notable central achievements. Mazumder has helped combine ideas from two branches of optimization to facilitate addressing computational problems in statistics. One of these branches, discrete optimization, uses discrete variables — integers — to find the best candidate among a finite set of options. This can relate to operational efficiency: What is the shortest route someone might take while making a designated set of stops? Convex optimization, on the other hand, encompasses an array of algorithms that can obtain the best solution for what Mazumder calls “nicely behaved” mathematical functions. They are typically applied to optimize continuous decisions in financial portfolio allocation and health care outcomes, among other things.
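
    A canonical example of the contrast, and the setting for the best-subset work cited below, is sparse linear regression. The discrete formulation restricts the number of nonzero coefficients directly, while a convex surrogate such as the lasso replaces that count with an $\ell_1$ penalty:

    \[
    \min_{\beta} \;\|y - X\beta\|_2^2 \;\;\text{subject to}\;\; \|\beta\|_0 \le k
    \qquad \text{versus} \qquad
    \min_{\beta} \;\|y - X\beta\|_2^2 + \lambda \|\beta\|_1 .
    \]

    Here $\|\beta\|_0$ counts nonzero entries, a discrete, combinatorial constraint, while the $\ell_1$ penalty is convex and efficiently solvable; Mazumder’s line of work builds bridges between the two.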

    In some recent papers, such as “Fast best subset selection: Coordinate descent and local combinatorial optimization algorithms,” co-authored with Hussein Hazimeh and published in Operations Research in 2020, and in “Sparse regression at scale: branch-and-bound rooted in first-order optimization,” co-authored with Hazimeh and A. Saab and published in Mathematical Programming in 2022, Mazumder has found ways to combine ideas from the two branches.

    “The tools and techniques we are using are new for the class of statistical problems because we are combining different developments in convex optimization and exploring that within discrete optimization,” Mazumder says.

    As new as these tools are, however, Mazumder likes working on techniques that “have old roots,” as he puts it. The two types of optimization methods were considered less separate in the 1950s or 1960s, he says, then grew apart.

    “I like to go back and see how things developed,” Mazumder says. “If I look back in history at [older] papers, it’s actually very fascinating. One thing was developed, another was developed, another was developed kind of independently, and after a while you see connections across them. If I go back, I see some parallels. And that actually helps in my thought process.”

    Predictions and parsimony

    Mazumder’s work is often aimed at simplifying the model or algorithm being applied to a problem. In some instances, bigger models would require enormous amounts of processing power, so simpler methods can provide equally good results while using fewer resources. In other cases, including the finance and tech firms Mazumder has sometimes collaborated with, simpler models may work better because they have fewer moving parts.

    “There is a notion of parsimony involved,” Mazumder says. Genomic studies aim to find particularly influential genes; similarly, tech giants may benefit from simpler models of consumer behavior, not more complex ones, when they are recommending a movie to you.

    Very often, Mazumder says, modeling “is a very large-scale prediction problem. But we don’t think all the features or attributes are going to be important. A small collection is going to be important. Why? Because if you think about movies, there are not really 20,000 different movies; there are genres of movies. If you look at individual users, there are hundreds of millions of users, but really they are grouped together into cliques. Can you capture the parsimony in a model?”
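
    The movie example corresponds to a low-rank assumption: if a ratings matrix is driven by a few latent genres and user cliques, a small number of factors explains most of it. A minimal synthetic sketch of that structure (not Mazumder’s algorithms) using a truncated SVD:

    ```python
    # Parsimony in recommendations: a ratings matrix generated by a few
    # latent "genres" is approximately low-rank, so a rank-r truncated SVD
    # recovers most of its structure. Synthetic data; illustrative only.
    import numpy as np

    rng = np.random.default_rng(0)
    n_users, n_movies, r = 200, 100, 3   # 3 latent factors (assumed)
    U = rng.normal(size=(n_users, r))
    V = rng.normal(size=(n_movies, r))
    ratings = U @ V.T + 0.1 * rng.normal(size=(n_users, n_movies))

    # Keep only the top-r singular directions.
    u, s, vt = np.linalg.svd(ratings, full_matrices=False)
    approx = (u[:, :r] * s[:r]) @ vt[:r]

    rel_err = np.linalg.norm(ratings - approx) / np.linalg.norm(ratings)
    print(f"rank-{r} approximation relative error: {rel_err:.3f}")
    ```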

    One part of his career that does not lend itself to parsimony, Mazumder feels, is crediting others. In conversation he emphasizes how grateful he is to his mentors in academia, and how much of his work is developed in concert with collaborators and, in particular, his students at MIT. 

    “I really, really like working with my students,” Mazumder says. “I perceive my students as my colleagues. Some of these problems, I thought they could not be solved, but then we just made it work. Of course, no method is perfect. But the fact we can use ideas from different areas in optimization with very deep roots, to address problems of core statistics and machine learning interest, is very exciting.”

    Teaching and doing research at MIT, Mazumder says, allows him to push forward on difficult problems — while also being pushed along by the interest and work of others around him.

    “MIT is a very vibrant community,” Mazumder says. “The thing I find really fascinating is, people here are very driven. They want to make a change in whatever area they are working in. And I also feel motivated to do this.”

  • 3 Questions: Honing robot perception and mapping

    Walking to a friend’s house or browsing the aisles of a grocery store might feel like simple tasks, but they in fact require sophisticated capabilities. That’s because humans are able to effortlessly understand their surroundings and detect complex information about patterns, objects, and their own location in the environment.

    What if robots could perceive their environment in a similar way? That question is on the minds of MIT Laboratory for Information and Decision Systems (LIDS) researchers Luca Carlone and Jonathan How. In 2020, a team led by Carlone released the first iteration of Kimera, an open-source library that enables a single robot to construct a three-dimensional map of its environment in real time, while labeling different objects in view. Last year, Carlone’s and How’s research groups (SPARK Lab and Aerospace Controls Lab) introduced Kimera-Multi, an updated system in which multiple robots communicate among themselves in order to create a unified map. A 2022 paper associated with the project recently received this year’s IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award, given to the best paper published in the journal in 2022.

    Carlone, who is the Leonardo Career Development Associate Professor of Aeronautics and Astronautics, and How, the Richard Cockburn Maclaurin Professor in Aeronautics and Astronautics, spoke to LIDS about Kimera-Multi and the future of how robots might perceive and interact with their environment.

    Q: Currently your labs are focused on increasing the number of robots that can work together in order to generate 3D maps of the environment. What are some potential advantages to scaling this system?

    How: The key benefit hinges on consistency, in the sense that a robot can create an independent map, and that map is self-consistent but not globally consistent. We’re aiming for the team to have a consistent map of the world; that’s the key difference in trying to form a consensus between robots as opposed to mapping independently.

    Carlone: In many scenarios it’s also good to have a bit of redundancy. For example, if we deploy a single robot in a search-and-rescue mission, and something happens to that robot, it would fail to find the survivors. If multiple robots are doing the exploring, there’s a much better chance of success. Scaling up the team of robots also means that any given task may be completed in a shorter amount of time.

    Q: What are some of the lessons you’ve learned from recent experiments, and challenges you’ve had to overcome while designing these systems?

    Carlone: Recently we did a big mapping experiment on the MIT campus, in which eight robots traversed up to 8 kilometers in total. The robots have no prior knowledge of the campus, and no GPS. Their main tasks are to estimate their own trajectory and build a map around it. You want the robots to understand the environment as humans do; humans not only understand the shape of obstacles, to get around them without hitting them, but also understand that an object is a chair, a desk, and so on. There’s the semantics part.

    The interesting thing is that when the robots meet each other, they exchange information to improve their map of the environment. For instance, if robots connect, they can leverage information to correct their own trajectory. The challenge is that if you want to reach a consensus between robots, you don’t have the bandwidth to exchange too much data. One of the key contributions of our 2022 paper is to deploy a distributed protocol, in which robots exchange limited information but can still agree on how the map looks. They don’t send camera images back and forth but only exchange specific 3D coordinates and clues extracted from the sensor data. As they continue to exchange such data, they can form a consensus.
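
    A toy picture of that bandwidth-limited agreement, a deliberate caricature rather than the actual Kimera-Multi protocol: two robots hold noisy estimates of shared landmark coordinates, exchange only those compact estimates (never raw images), and iteratively average until they agree:

    ```python
    # Two robots exchange only compact 3D landmark estimates and average
    # toward a common map. Caricature of distributed consensus; not the
    # Kimera-Multi protocol.
    import numpy as np

    rng = np.random.default_rng(1)
    true_landmarks = rng.uniform(0, 50, size=(5, 3))  # 5 shared 3D points

    # Each robot's estimate is corrupted by its own drift/noise.
    est_a = true_landmarks + rng.normal(0, 0.5, size=(5, 3))
    est_b = true_landmarks + rng.normal(0, 0.5, size=(5, 3))

    for _ in range(10):
        midpoint = (est_a + est_b) / 2        # the only data exchanged
        est_a += 0.5 * (midpoint - est_a)
        est_b += 0.5 * (midpoint - est_b)

    print(f"post-consensus disagreement: {np.linalg.norm(est_a - est_b):.2e}")
    ```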

    Right now we are building color-coded 3D meshes or maps, in which the color contains some semantic information, like “green” corresponds to grass, and “magenta” to a building. But as humans, we have a much more sophisticated understanding of reality, and we have a lot of prior knowledge about relationships between objects. For instance, if I was looking for a bed, I would go to the bedroom instead of exploring the entire house. If you start to understand the complex relationships between things, you can be much smarter about what the robot can do in the environment. We’re trying to move from capturing just one layer of semantics, to a more hierarchical representation in which the robots understand rooms, buildings, and other concepts.

    Q: What kinds of applications might Kimera and similar technologies lead to in the future?

    How: Autonomous vehicle companies are doing a lot of mapping of the world and learning from the environments they’re in. The holy grail would be if these vehicles could communicate with each other and share information, then they could improve models and maps that much quicker. The current solutions out there are individualized. If a truck pulls up next to you, you can’t see in a certain direction. Could another vehicle provide a field of view that your vehicle otherwise doesn’t have? This is a futuristic idea because it requires vehicles to communicate in new ways, and there are privacy issues to overcome. But if we could resolve those issues, you could imagine a significantly improved safety situation, where you have access to data from multiple perspectives, not only your field of view.

    Carlone: These technologies will have a lot of applications. Earlier I mentioned search and rescue. Imagine that you want to explore a forest and look for survivors, or map buildings after an earthquake in a way that can help first responders access people who are trapped. Another setting where these technologies could be applied is in factories. Currently, robots that are deployed in factories are very rigid. They follow patterns on the floor, and are not really able to understand their surroundings. But if you’re thinking about much more flexible factories in the future, robots will have to cooperate with humans and exist in a much less structured environment.

  • Bringing the social and ethical responsibilities of computing to the forefront

    There has been a remarkable surge in the use of algorithms and artificial intelligence to address a wide range of problems and challenges. While their adoption, particularly with the rise of AI, is reshaping nearly every industry sector, discipline, and area of research, such innovations often expose unexpected consequences that involve new norms, new expectations, and new rules and laws.

    To facilitate deeper understanding, the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative in the MIT Schwarzman College of Computing, recently brought together social scientists and humanists with computer scientists, engineers, and other computing faculty for an exploration of the ways in which the broad applicability of algorithms and AI has presented both opportunities and challenges in many aspects of society.

    “The very nature of our reality is changing. AI has the ability to do things that until recently were solely the realm of human intelligence — things that can challenge our understanding of what it means to be human,” remarked Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, in his opening address at the inaugural SERC Symposium. “This poses philosophical, conceptual, and practical questions on a scale not experienced since the start of the Enlightenment. In the face of such profound change, we need new conceptual maps for navigating the change.”

    The symposium offered a glimpse into the vision and activities of SERC in both research and education. “We believe our responsibility with SERC is to educate and equip our students and enable our faculty to contribute to responsible technology development and deployment,” said Georgia Perakis, the William F. Pounds Professor of Management in the MIT Sloan School of Management, co-associate dean of SERC, and the lead organizer of the symposium. “We’re drawing from the many strengths and diversity of disciplines across MIT and beyond and bringing them together to gain multiple viewpoints.”

    Through a succession of panels and sessions, the symposium delved into a variety of topics related to the societal and ethical dimensions of computing. In addition, 37 undergraduate and graduate students from a range of majors, including urban studies and planning, political science, mathematics, biology, electrical engineering and computer science, and brain and cognitive sciences, participated in a poster session to exhibit their research in this space, covering such topics as quantum ethics, AI collusion in storage markets, computing waste, and empowering users on social platforms for better content credibility.

    Showcasing a diversity of work

    In three sessions devoted to themes of beneficent and fair computing, equitable and personalized health, and algorithms and humans, the SERC Symposium showcased work by 12 faculty members across these domains.

    One such project from a multidisciplinary team of archaeologists, architects, digital artists, and computational social scientists aimed to preserve endangered heritage sites in Afghanistan with digital twins. The project team produced highly detailed interrogable 3D models of the heritage sites, in addition to extended reality and virtual reality experiences, as learning resources for audiences that cannot access these sites.

    In a project for the United Network for Organ Sharing, researchers showed how they used applied analytics to optimize various facets of an organ allocation system in the United States that is currently undergoing a major overhaul in order to make it more efficient, equitable, and inclusive for different racial, age, and gender groups, among others.

    Another talk discussed an area that has not yet received adequate public attention: the broader implications for equity that biased sensor data holds for the next generation of models in computing and health care.

    A talk on bias in algorithms considered both human bias and algorithmic bias, and the potential for improving results by taking into account differences in the nature of the two kinds of bias.

    Other highlighted research included the interaction between online platforms and human psychology; a study on whether decision-makers make systematic prediction mistakes based on the available information; and an illustration of how advanced analytics and computation can be leveraged to inform supply chain management, operations, and regulatory work in the food and pharmaceutical industries.

    Improving the algorithms of tomorrow

    “Algorithms are, without question, impacting every aspect of our lives,” said Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science, in kicking off a panel she moderated on the implications of data and algorithms.

    “Whether it’s in the context of social media, online commerce, automated tasks, and now a much wider range of creative interactions with the advent of generative AI tools and large language models, there’s little doubt that much more is to come,” Ozdaglar said. “While the promise is evident to all of us, there’s a lot to be concerned about as well. This is very much the time for imaginative thinking and careful deliberation to improve the algorithms of tomorrow.”

    Turning to the panel, Ozdaglar asked experts from computing, social science, and data science for insights on how to understand what is to come and shape it to enrich outcomes for the majority of humanity.

    Sarah Williams, associate professor of technology and urban planning at MIT, emphasized the critical importance of comprehending the process of how datasets are assembled, as data are the foundation for all models. She also stressed the need for research to address the potential implications of biases in algorithms, which often find their way in through their creators and the data used in their development. “It’s up to us to think about our own ethical solutions to these problems,” she said. “Just as it’s important to progress with the technology, we need to start the field of looking at these questions: What biases are in the algorithms? What biases are in the data, or in that data’s journey?”

    Shifting focus to generative models and whether the development and use of these technologies should be regulated, the panelists — who also included MIT’s Srini Devadas, professor of electrical engineering and computer science, John Horton, professor of information technology, and Simon Johnson, professor of entrepreneurship — all concurred that regulating open-source algorithms, which are publicly accessible, would be difficult given that regulators are still catching up and struggling to even set guardrails for technology that is now 20 years old.

    Returning to the question of how to effectively regulate the use of these technologies, Johnson proposed a progressive corporate tax system as a potential solution. He recommends basing companies’ tax payments on their profits, especially for large corporations whose massive earnings go largely untaxed due to offshore banking. By doing so, Johnson said that this approach can serve as a regulatory mechanism that discourages companies from trying to “own the entire world” by imposing disincentives.

    The role of ethics in computing education

    As computing continues to advance with no signs of slowing down, it is critical to educate students to be intentional about the social impact of the technologies they will be developing and deploying into the world. But can one actually be taught such things? If so, how?

    Caspar Hare, professor of philosophy at MIT and co-associate dean of SERC, posed this looming question to faculty on a panel he moderated on the role of ethics in computing education. All experienced in teaching ethics and thinking about the social implications of computing, each panelist shared their perspective and approach.

    A strong advocate for the importance of learning from history, Eden Medina, associate professor of science, technology, and society at MIT, said that “often the way we frame computing is that everything is new. One of the things that I do in my teaching is look at how people have confronted these issues in the past and try to draw from them as a way to think about possible ways forward.” Medina regularly uses case studies in her classes. She referred to a paper by Yale University science historian Joanna Radin on the Pima Indian Diabetes Dataset, which raised ethical issues about the history of that particular data collection that many don’t consider, as an example of how decisions around technology and data can grow out of very specific contexts.

    Milo Phillips-Brown, associate professor of philosophy at Oxford University, talked about the Ethical Computing Protocol that he co-created while he was a SERC postdoc at MIT. The protocol, a four-step approach to building technology responsibly, is designed to train computer science students to think in a better and more accurate way about the social implications of technology by breaking the process down into more manageable steps. “The basic approach that we take very much draws on the fields of value-sensitive design, responsible research and innovation, participatory design as guiding insights, and then is also fundamentally interdisciplinary,” he said.

    Fields such as biomedicine and law have an ethics ecosystem that distributes the function of ethical reasoning in these areas. Oversight and regulation are provided to guide front-line stakeholders and decision-makers when issues arise, as are training programs and access to interdisciplinary expertise that they can draw from. “In this space, we have none of that,” said John Basl, associate professor of philosophy at Northeastern University. “For current generations of computer scientists and other decision-makers, we’re actually making them do the ethical reasoning on their own.” Basl commented further that teaching core ethical reasoning skills across the curriculum, not just in philosophy classes, is essential, and that the goal shouldn’t be for every computer scientist to be a professional ethicist, but for them to know enough of the landscape to be able to ask the right questions and seek out the relevant expertise and resources that exist.

    After the final session, interdisciplinary groups of faculty, students, and researchers engaged in animated discussions related to the issues covered throughout the day during a reception that marked the conclusion of the symposium.

  • Celebrating the impact of IDSS

    The “interdisciplinary approach” is something that has been lauded for decades for its ability to break down silos and create new integrated approaches to research.

    For Munther Dahleh, founding director of the MIT Institute for Data, Systems, and Society (IDSS), showing the community that data science and statistics can transcend individual disciplines and form a new holistic approach to addressing complex societal challenges has been crucial to the institute’s success.

    “From the very beginning, it was critical that we recognized the areas of data science, statistics, AI, and, in a way, computing, as transdisciplinary,” says Dahleh, who is the William A. Coolidge Professor in Electrical Engineering and Computer Science. “We made that point over and over — these are areas that embed in your field. It is not ours; this organization is here for everyone.”

    On April 14-15, researchers from across and beyond MIT joined together to celebrate the accomplishments and impact IDSS has had on research and education since its inception in 2015. Taking the place of IDSS’s annual statistics and data science conference SDSCon, the celebration also doubled as a way to recognize Dahleh for his work creating and executing the vision of IDSS as he prepares to step down from his director position this summer.

    In addition to talks and panels on statistics and computation, smart systems, automation and artificial intelligence, conference participants discussed issues ranging from climate change and health care to misinformation. Nobel Prize winner and IDSS affiliate Professor Esther Duflo spoke on large-scale immunization efforts, former MLK Visiting Professor Craig Watkins joined a panel on equity and justice in AI, and IDSS Associate Director Alberto Abadie discussed synthetic controls for policy evaluation. Other policy questions were explored through lightning talks, including those by students from the Technology and Policy Program (TPP) within IDSS.

    A place to call home

    The list of IDSS accomplishments over the last eight years is long and growing. From creating a home for 21st-century statistics at MIT after other unsuccessful attempts, to creating a new PhD program that prepares “trilingual” students who are experts in data science and social science in the context of a domain, to playing a key role in determining an effective process for Covid testing in the early days of the pandemic, IDSS has left its mark on MIT. More recently, IDSS launched an initiative using big data to help effect structural and normative change toward racial equity, and it will continue to explore societal challenges through the lenses of statistics, social science, and science and engineering.

    “I’m very proud of what we’ve done and of all the people who have contributed to this. The leadership team has been phenomenal in their commitment and their creativity,” Dahleh says. “I always say it doesn’t take one person, it takes the village to do what we have done, and I am very proud of that.”

    Prior to the institute’s formation, Dahleh and others at MIT were brought together to answer one key question: How would MIT prepare for the future of systems and data?

    “Data science is a complex area because in some ways it’s everywhere and it belongs to everyone, similar to statistics and AI,” Dahleh says. “The most important part of creating an organization to support it was making it clear that it was an organization for everyone.” The response the team came back with was to build an Institute: a department that could cross all other departments and schools.

    While Dahleh and others on the committee were creating this blueprint for the future, the events that would lead early IDSS hires like Caroline Uhler to join the team were also beginning to take shape. Uhler, now an MIT professor of computer science and co-director of the Eric and Wendy Schmidt Center at the Broad Institute, was a panelist at the celebration discussing statistics and human health.

    In 2015, Uhler was a faculty member at the Institute of Science and Technology Austria looking to move back to the U.S. “I was looking for positions in all different types of departments related to statistics, including electrical engineering and computer science, which were areas not related to my degree,” Uhler says. “What really got me to MIT was Munther’s vision for building a modern type of statistics, and the unique opportunity to be part of building what statistics should be moving forward.”

    The breadth of the Statistics and Data Science Center has given it a unique and robust character that makes for an attractive collaborative environment at MIT. “A lot of IDSS’s impact has been in giving people like me a home,” Uhler adds. “By building an institute for statistics that is across all schools instead of housed within a single department, it has created a home for everyone who is interested in the field.”

    Filling the gap

    For Ali Jadbabaie, former IDSS associate director and another early IDSS hire, being in the right place at the right time landed him in the center of it all. A control theory expert and network scientist by training, Jadbabaie first came to MIT during a sabbatical from his position as a professor at the University of Pennsylvania.

    “My time at MIT coincided with the early discussions around forming IDSS and given my experience they asked me to stay and help with its creation,” Jadbabaie says. He is now head of the Department of Civil and Environmental Engineering at MIT, and he spoke at the celebration about a new MIT major in climate system science and engineering.

    A critical early accomplishment of IDSS was the creation of a doctoral program in social and engineering systems (SES), which has the goal of educating and fostering the success of a new type of PhD student, says Jadbabaie.

    “We realized we had this opportunity to educate a new type of PhD student who was conversant in the math of information sciences and statistics in addition to an understanding of a domain — infrastructures, climate, political polarization — in which problems arise,” he says. “This program would provide training in statistics and data science, the math of information sciences, and a branch of social science that is relevant to their domain.”

    “SES has been filling a gap,” adds Jadbabaie. “We wanted to bring quantitative reasoning to areas in social sciences, particularly as they interact with complex engineering systems.”

    “My first year at MIT really broadened my horizon in terms of what was available and exciting,” says Manxi Wu, a member of the first cohort of students in the SES program after starting out in the Master of Science in Transportation (MST) program. “My advisor introduced me to a number of interesting topics at the intersection of game theory, economics, and engineering systems, and in my second year I realized my interest was really about the societal scale systems, with transportation as my go-to application area when I think about how to make an impact in the real world.”

    Wu, now an assistant professor in the School of Operations Research and Information Engineering at Cornell, was a panelist at the celebration’s session on smart infrastructure systems. She says that the beauty of the SES program lies in its ability to create a common ground between groups of students and researchers who all have different application interests but share an eagerness to sharpen their technical skills.

    “While we may be working on very different application areas, the core methodologies, such as mathematical tools for data science and probability optimization, create a common language,” Wu says. “We are all capable of speaking the technical language, and our diversified interests give us even more to talk about.”

    In addition to the PhD program, IDSS has helped bring quality MIT programming to people around the globe with its MicroMasters Program in Statistics and Data Science (SDS), which recently celebrated the certification of over 1,000 learners. The MicroMasters is just one offering in the newly-minted IDSSx, a collection of online learning opportunities for learners at different skill levels and interests.

    “The impact of branding what MIT-IDSS does across the globe has been great,” Dahleh says. “In addition, we’ve created smaller online programs for continued education in data science and machine learning, which I think is also critical in educating the community at large.”

    Hopes for the future

    Through all of its accomplishments, the core mission of IDSS has never changed.

    “The belief was always to create an institute focused on how data science can be used to solve pressing societal problems,” Dahleh says. “The organizational structure of IDSS as an MIT Institute has enabled it to promote data and systems as a transdisciplinary area that embeds in every domain to support its mission. This reverse ownership structure will continue to strengthen the presence of IDSS in MIT and will make it an essential unit within the Schwarzman College of Computing.”

    As Dahleh prepares to step down from his role, and Professor Martin Wainwright gets ready to fill his (very big) shoes as director, Dahleh’s colleagues say the real key to the success of IDSS all started with his passion and vision.

    “Creating a new academic unit within MIT is actually next to impossible,” Jadbabaie says. “It requires structural changes, as well as someone who has a strong understanding of multiple areas, who knows how to get people to work together collectively, and who has a mission.”

    “The most important thing is that he was inclusive,” he adds. “He didn’t try to create a gate around it and say these people are in and these people are not. I don’t think this would have ever happened without Munther at the helm.”

  • J-WAFS announces 2023 seed grant recipients

    Today, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) announced its ninth round of seed grants to support innovative research projects at MIT. The grants are designed to fund research efforts that tackle challenges related to water and food for human use, with the ultimate goal of creating meaningful impact as the world population continues to grow and the planet undergoes significant climate and environmental changes.

    Ten new projects led by 15 researchers from seven different departments will be supported this year. The projects address a range of challenges by employing advanced materials, technology innovations, and new approaches to resource management. The new projects aim to remove harmful chemicals from water sources, develop monitoring and other systems to help manage various aquaculture industries, optimize water purification materials, and more.

    “The seed grant program is J-WAFS’ flagship grant initiative,” says J-WAFS executive director Renee J. Robins. “The funding is intended to spur groundbreaking MIT research addressing complex issues that are challenging our water and food systems. The 10 projects selected this year show great promise, and we look forward to the progress and accomplishments these talented researchers will make,” she adds.

    The 2023 J-WAFS seed grant researchers and their projects are:

    Sara Beery, an assistant professor in the Department of Electrical Engineering and Computer Science (EECS), is building the first completely automated system to estimate the size of salmon populations in the Pacific Northwest (PNW).

    Salmon are a keystone species in the PNW, having fed human populations for at least the last 7,500 years. However, overfishing, habitat loss, and climate change threaten salmon populations across the region with extinction. Accurate salmon counts during the seasonal migration to their natal rivers to spawn are essential for fisheries regulation and management but are limited by human capacity. Fish population monitoring is a widespread challenge in the United States and worldwide. Beery and her team are working to build a system that will provide a detailed picture of the state of salmon populations at unprecedented spatial and temporal resolution by combining sonar sensors with computer vision and machine learning (CVML) techniques. The sonar will capture individual fish as they swim upstream, and CVML will be used to train accurate algorithms to interpret the sonar video, detecting, tracking, and counting fish automatically while adapting to changing river conditions and fish densities.

    Another aquaculture project is being led by Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering in the Department of Mechanical Engineering, and Robert Vincent, the assistant director at MIT’s Sea Grant Program. They are working with Otto Cordero, an associate professor in the Department of Civil and Environmental Engineering, to control harmful bacteria blooms in aquaculture algae feed production.

    Aquaculture in the United States represents a $1.5 billion industry annually and helps support 1.7 million jobs, yet many American hatcheries are not able to keep up with demand. One barrier to aquaculture production is the high degree of variability in survival rates, most likely caused by a poorly controlled microbiome that leads to bacterial infections and sub-optimal feed efficiency. Triantafyllou, Vincent, and Cordero plan to monitor the microbiome composition of a shellfish hatchery in order to identify possible causative agents of mortality, as well as beneficial microbes. They hope to pair microbe data with detailed phenotypic information about the animal population to generate rapid diagnostic tests and explore the potential for microbiome therapies to protect larvae and prevent future outbreaks. The researchers plan to transfer their findings and technology to the local and regional aquaculture community to ensure healthy aquaculture production that will support the expansion of the U.S. aquaculture industry.

    David Des Marais is the Cecil and Ida Green Career Development Professor in the Department of Civil and Environmental Engineering. His 2023 J-WAFS project seeks to understand plant growth responses to elevated carbon dioxide (CO2) in the atmosphere, in the hopes of identifying breeding strategies that maximize crop yield under future CO2 scenarios.Today’s crop plants experience higher atmospheric CO2 than 20 or 30 years ago. Crops such as wheat, oat, barley, and rice typically increase their growth rate and biomass when grown at experimentally elevated atmospheric CO2. This is known as the so-called “CO2 fertilization effect.” However, not all plant species respond to rising atmospheric CO2 with increased growth, and for the ones that do, increased growth doesn’t necessarily correspond to increased crop yield. Using specially built plant growth chambers that can control the concentration of CO2, Des Marais will explore how CO2 availability impacts the development of tillers (branches) in the grass species Brachypodium. He will study how gene expression controls tiller development, and whether this is affected by the growing environment. The tillering response refers to how many branches a plant produces, which sets a limit on how much grain it can yield. Therefore, optimizing the tillering response to elevated CO2 could greatly increase yield. Des Marais will also look at the complete genome sequence of Brachypodium, wheat, oat, and barley to help identify genes relevant for branch growth.Darcy McRose, an assistant professor in the Department of Civil and Environmental Engineering, is researching whether a combination of plant metabolites and soil bacteria can be used to make mineral-associated phosphorus more bioavailable.The nutrient phosphorus is essential for agricultural plant growth, but when added as a fertilizer, phosphorus sticks to the surface of soil minerals, decreasing bioavailability, limiting plant growth, and accumulating residual phosphorus. Heavily fertilized agricultural soils often harbor large reservoirs of this type of mineral-associated “legacy” phosphorus. Redox transformations are one chemical process that can liberate mineral-associated phosphorus. However, this needs to be carefully controlled, as overly mobile phosphorus can lead to runoff and pollution of natural waters. Ideally, phosphorus would be made bioavailable when plants need it and immobile when they don’t. Many plants make small metabolites called coumarins that might be able to solubilize mineral-adsorbed phosphorus and be activated and inactivated under different conditions. McRose will use laboratory experiments to determine whether a combination of plant metabolites and soil bacteria can be used as a highly efficient and tunable system for phosphorus solubilization. She also aims to develop an imaging platform to investigate exchanges of phosphorus between plants and soil microbes.Many of the 2023 seed grants will support innovative technologies to monitor, quantify, and remediate various kinds of pollutants found in water. Two of the new projects address the problem of per- and polyfluoroalkyl substances (PFAS), human-made chemicals that have recently emerged as a global health threat. Known as “forever chemicals,” PFAS are used in many manufacturing processes. These chemicals are known to cause significant health issues including cancer, and they have become pervasive in soil, dust, air, groundwater, and drinking water. 
Unfortunately, the physical and chemical properties of PFAS render them difficult to detect and remove.Aristide Gumyusenge, the Merton C. Assistant Professor of Materials Science and Engineering, is using metal-organic frameworks for low-cost sensing and capture of PFAS. Most metal-organic frameworks (MOFs) are synthesized as particles, which complicates their high accuracy sensing performance due to defects such as intergranular boundaries. Thin, film-based electronic devices could enable the use of MOFs for many applications, especially chemical sensing. Gumyusenge’s project aims to design test kits based on two-dimensional conductive MOF films for detecting PFAS in drinking water. In early demonstrations, Gumyusenge and his team showed that these MOF films can sense PFAS at low concentrations. They will continue to iterate using a computation-guided approach to tune sensitivity and selectivity of the kits with the goal of deploying them in real-world scenarios.Carlos Portela, the Brit (1961) and Alex (1949) d’Arbeloff Career Development Professor in the Department of Mechanical Engineering, and Ariel Furst, the Cook Career Development Professor in the Department of Chemical Engineering, are building novel architected materials to act as filters for the removal of PFAS from water. Portela and Furst will design and fabricate nanoscale materials that use activated carbon and porous polymers to create a physical adsorption system. They will engineer the materials to have tunable porosities and morphologies that can maximize interactions between contaminated water and functionalized surfaces, while providing a mechanically robust system.Rohit Karnik is a Tata Professor and interim co-department head of the Department of Mechanical Engineering. He is working on another technology, his based on microbead sensors, to rapidly measure and monitor trace contaminants in water.Water pollution from both biological and chemical contaminants contributes to an estimated 1.36 million deaths annually. Chemical contaminants include pesticides and herbicides, heavy metals like lead, and compounds used in manufacturing. These emerging contaminants can be found throughout the environment, including in water supplies. The Environmental Protection Agency (EPA) in the United States sets recommended water quality standards, but states are responsible for developing their own monitoring criteria and systems, which must be approved by the EPA every three years. However, the availability of data on regulated chemicals and on candidate pollutants is limited by current testing methods that are either insensitive or expensive and laboratory-based, requiring trained scientists and technicians. Karnik’s project proposes a simple, self-contained, portable system for monitoring trace and emerging pollutants in water, making it suitable for field studies. The concept is based on multiplexed microbead-based sensors that use thermal or gravitational actuation to generate a signal. His proposed sandwich assay, a testing format that is appealing for environmental sensing, will enable both single-use and continuous monitoring. The hope is that the bead-based assays will increase the ease and reach of detecting and quantifying trace contaminants in water for both personal and industrial scale applications.Alexander Radosevich, a professor in the Department of Chemistry, and Timothy Swager, the John D. 
    Alexander Radosevich, a professor in the Department of Chemistry, and Timothy Swager, the John D. MacArthur Professor of Chemistry, are teaming up to create rapid, cost-effective, and reliable techniques for on-site arsenic detection in water.

    Arsenic contamination of groundwater is a problem that affects as many as 500 million people worldwide. Arsenic poisoning can lead to a range of severe health problems, from cancer to cardiovascular and neurological impacts. Both the EPA and the World Health Organization have established 10 parts per billion (equivalent to 10 micrograms per liter) as a practical threshold for arsenic in drinking water, but measuring arsenic at such low levels is challenging, especially in resource-limited settings where sensitive laboratory equipment is not readily accessible. Radosevich and Swager plan to develop reaction-based chemical sensors that bind and extract electrons from aqueous arsenic, exploiting its inherent reactivity to selectively detect and quantify it. This work will establish the chemical basis for a new method of detecting trace arsenic in drinking water.

    Rajeev Ram, a professor in the Department of Electrical Engineering and Computer Science, will use his J-WAFS grant to advance a robust technology for monitoring nitrogen-containing pollutants, which threaten over 15,000 bodies of water in the United States alone.

    Nitrogen in the form of nitrate, nitrite, ammonia, and urea can run off from agricultural fertilizer and lead to harmful algal blooms that jeopardize human health. Unfortunately, monitoring these contaminants in the environment is challenging: sensors are difficult to maintain and expensive to deploy. Ram and his students will work to establish limits of detection for nitrate, nitrite, ammonia, and urea in environmental, industrial, and agricultural samples using swept-source Raman spectroscopy, a method that detects the presence of a chemical by illuminating a sample with a tunable, single-mode laser. Because the excitation wavelength is swept rather than the scattered light dispersed, the method requires neither costly high-power lasers nor a spectrometer. Ram will then develop and demonstrate a portable system capable of achieving chemical specificity in complex, natural environments. Data generated by such a system should help regulate polluters and guide remediation.
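    The core idea of swept-source Raman can be captured in a short numeric sketch: hold the detection wavelength fixed with a narrowband filter and sweep the excitation laser, so that each excitation wavelength probes a different Raman shift. The instrument parameters and the nitrate band position used below are illustrative assumptions, not Ram's actual system values:

```python
# A simplified numeric sketch of the swept-source Raman idea. The detector
# sits behind a fixed narrowband filter; sweeping the excitation laser
# changes which Raman shift lands on the detector, tracing out a spectrum
# without a spectrometer. All numbers are illustrative assumptions.
import numpy as np

DETECT_NM = 850.0  # fixed detection (filter) wavelength, nm

def probed_shift_cm1(excitation_nm):
    """Raman shift (cm^-1) seen at the fixed detector for a given excitation."""
    return 1e7 / excitation_nm - 1e7 / DETECT_NM

def lorentzian(x, center, width):
    return 1.0 / (1.0 + ((x - center) / width) ** 2)

# Sweep the excitation laser and compute the Raman shift each step probes.
excitation = np.linspace(780.0, 800.0, 200)  # nm
shifts = probed_shift_cm1(excitation)        # cm^-1

# Simulated detector signal for a sample with a nitrate-like Raman band
# near 1049 cm^-1 (linewidth chosen arbitrarily).
signal = lorentzian(shifts, center=1049.0, width=8.0)

peak_nm = excitation[np.argmax(signal)]
print(f"Shift range probed: {shifts.min():.0f}-{shifts.max():.0f} cm^-1")
print(f"Peak signal at excitation ~{peak_nm:.1f} nm -> nitrate-like band")
```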
    Kripa Varanasi, a professor in the Department of Mechanical Engineering, and Angela Belcher, the James Mason Crafts Professor and head of the Department of Biological Engineering, will join forces to develop an affordable water disinfection technology that selectively identifies, adsorbs, and kills "superbugs" in domestic and industrial wastewater.

    Recent research predicts that antibiotic-resistant bacteria (superbugs) will cause 10 million deaths annually and $100 trillion in health care expenses by 2050. The prevalence of superbugs in our water systems has increased due to corroded pipes, contamination, and climate change. Current drinking water disinfection technologies are designed to kill all types of bacteria before human consumption. However, certain domestic and industrial applications need to preserve the beneficial bacteria that support the ecological processes contributing to soil and plant health. Varanasi and Belcher will combine materials, biological, process, and systems engineering principles to design a sponge-based water disinfection technology that can identify and destroy harmful bacteria while leaving beneficial bacteria unharmed. By modifying the sponge surface with specialized nanomaterials, their approach will kill superbugs faster and more efficiently. The sponge filters can be deployed under very low pressure, making them an affordable technology, especially for resource-constrained communities.

    In addition to the 10 seed grant projects, J-WAFS will also fund a research initiative led by Greg Sixt, research manager for climate and food systems at J-WAFS and director of the J-WAFS-led Food and Climate Systems Transformation (FACT) Alliance. His project focuses on the Lake Victoria Basin (LVB) of East Africa. The second-largest freshwater lake in the world, Lake Victoria straddles three countries (Uganda, Tanzania, and Kenya) and has a catchment area that encompasses two more (Rwanda and Burundi). Sixt will collaborate with Michael Hauser of the University of Natural Resources and Life Sciences, Vienna, and Paul Kariuki of the Lake Victoria Basin Commission.

    The group will study how to adapt food systems to climate change in the Lake Victoria Basin, which faces a range of climate threats that could significantly affect livelihoods and food systems across the expansive region. Extreme weather events such as droughts and floods, for example, are damaging agricultural production and freshwater resources. Across the LVB, current approaches to land and water management are unsustainable and threaten future food and water security. The Lake Victoria Basin Commission (LVBC), a specialized institution of the East African Community, wants to play a more vital role in coordinating transboundary land and water management to support transitions toward more resilient, sustainable, and equitable food systems. The primary goal of the research will be to support the LVBC's transboundary land and water management efforts, specifically as they relate to sustainability and climate change adaptation in food systems. The research team will work with key stakeholders in Kenya, Uganda, and Tanzania to identify the capacities needed to facilitate land and water management transitions. The two-year project will produce actionable recommendations to the LVBC.


    Martin Wainwright named director of the Institute for Data, Systems, and Society

    Martin Wainwright, the Cecil H. Green Professor in MIT’s departments of Electrical Engineering and Computer Science (EECS) and Mathematics, has been named the new director of the Institute for Data, Systems, and Society (IDSS), effective July 1.

    “Martin is a widely recognized leader in statistics and machine learning — both in research and in education. In taking on this leadership role in the college, Martin will work to build up the human and institutional behavior component of IDSS, while strengthening initiatives in both policy and statistics, and collaborations within the institute, across MIT, and beyond,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I look forward to working with him and supporting his efforts in this next chapter for IDSS.”

    “Martin holds a strong belief in the value of theoretical, experimental, and computational approaches to research and in facilitating connections between them. He also places much importance in having practical, as well as academic, impact,” says Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing, department head of EECS, and the MathWorks Professor of Electrical Engineering and Computer Science. “As the new director of IDSS, he will undoubtedly bring these tenets to the role in advancing the mission of IDSS and helping to shape its future.”

    A principal investigator in the Laboratory for Information and Decision Systems and the Statistics and Data Science Center, Wainwright joined the MIT faculty in July 2022 from the University of California at Berkeley, where he held the Howard Friesen Chair with a joint appointment between the departments of Electrical Engineering and Computer Science and Statistics.

    Wainwright received his bachelor’s degree in mathematics from the University of Waterloo, Canada, and his doctoral degree in electrical engineering and computer science from MIT. He has received numerous awards and honors, including an Alfred P. Sloan Foundation Fellowship and best paper awards from the IEEE Signal Processing Society, IEEE Communications Society, and IEEE Information Theory and Communication Societies. He has also been honored with the Medallion Lectureship and Award from the Institute of Mathematical Statistics and the COPSS Presidents’ Award from the Committee of Presidents of Statistical Societies. He was a section lecturer at the International Congress of Mathematicians in 2014 and received the Blackwell Award from the Institute of Mathematical Statistics in 2017.

    He is the author of “High-dimensional Statistics: A Non-Asymptotic Viewpoint” (Cambridge University Press, 2019) and a coauthor of several other books, including volumes on graphical models and on sparse statistical modeling.

    Wainwright succeeds Munther Dahleh, the William A. Coolidge Professor in EECS, who has helmed IDSS since its founding in 2015.

    “I am grateful to Munther and thank him for his leadership of IDSS. As the founding director, he has led the creation of a remarkable new part of MIT,” says Huttenlocher.