More stories

  • Learning how to learn

    Suppose you need to be on today’s only ferry to Martha’s Vineyard, which leaves at 2 p.m. It takes about 30 minutes (on average) to drive from where you are to the terminal. What time should you leave?

    This is one of many common real-life examples used by Richard “Dick” Larson, a post-tenure professor in the MIT Institute for Data, Systems, and Society (IDSS), to explore exemplary problem-solving in his new book “Model Thinking for Everyday Life: How to Make Smarter Decisions.”

    Larson’s book synthesizes a lifelong career as an MIT professor and researcher, highlighting crucial skills underpinning all empirical, rational, and critical thinking. “Critical thinkers are energetic detectives … always seeking the facts,” he says. “Additional facts may surface that can result in modified conclusions … A critical thinker is aware of the pitfalls of human intuition.”

    For Larson, “model” thinking means not only thinking aided by conceptual and/or mathematical models, but a broader mode of critical thought that is informed by STEM concepts and worthy of emulation.

    In the ferry example, a key concept at play is uncertainty. Accounting for uncertainty is a core challenge faced by systems engineers, operations researchers, and modelers of complex networks — all hats Larson has worn in over half a century at MIT. 

    Uncertainty complicates all prediction and decision-making, and while statistics offers tactics for managing uncertainty, “Model Thinking” is not a math textbook. There are equations for the math-curious, but it doesn’t take a degree from MIT to understand that

    an average of 30 minutes would cover a range of times, some shorter, some longer;
    outliers can exist in the data, like the time construction traffic added an extra 30 minutes;
    “about 30 minutes” is a prediction based on past experience, not current information (road closures, accidents, etc.); and
    the consequence of missing the ferry is not a delay of hours, but a full day — which might completely disrupt the trip or its purpose.

    And so, without doing much explicit math, you account for the variables, weigh the likelihood of different outcomes against the consequences of failure, and choose a departure time. Larson’s conclusion is one championed by dads everywhere: Leave on the earlier side, just in case.
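    For readers who want to see the arithmetic, here is a minimal Monte Carlo sketch of that reasoning. The lognormal travel-time distribution and its parameters are illustrative assumptions, not figures from the book.

    ```python
    # Illustrative simulation of the ferry decision; the distribution is a made-up stand-in
    # for "about 30 minutes on average, with occasional long delays."
    import random

    def miss_probability(minutes_before_departure, n=100_000, seed=0):
        """Estimate the chance of missing the 2 p.m. ferry for a given time buffer."""
        rng = random.Random(seed)
        misses = 0
        for _ in range(n):
            # mu and sigma chosen so the mean is roughly 30 minutes, with a heavy right tail
            travel_time = rng.lognormvariate(3.35, 0.35)
            if travel_time > minutes_before_departure:
                misses += 1
        return misses / n

    for buffer_minutes in (30, 40, 50, 60):
        p = miss_probability(buffer_minutes)
        print(f"Leave {buffer_minutes} min early -> ~{p:.1%} chance of missing the ferry")
    ```

    Because the cost of missing the boat is a full day rather than a few minutes, even a modest miss probability argues for the larger buffer.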

    “The world’s most important, invisible profession”

    Throughout Larson’s career at MIT, he has focused on the science of solving problems and making better decisions. “Faced with a new problem, people often lack the ability to frame and formulate it using basic principles,” argues Larson. “Our emphasis is on problem framing and formulation, with mathematics and physics playing supporting roles.”

    This is operations research, which Larson calls “the world’s most important invisible profession.” Formalized as a field during World War II, operations researchers use data and models to try to derive the “physics” of complex systems. The goal is typically optimizing things like scheduling, routing, simulation, prediction, planning, logistics, and queueing, for which Larson is especially well-known. A frequent media expert on the subject, he earned the moniker “Dr. Q” — and his research has led to new approaches for easing congestion in urban traffic, fast-food lines, and banks.

    Larson’s experience with complex systems provides a wealth of examples to draw on, but he is keen to demonstrate that his purview includes everyday decisions, and that “Model Thinking” is a book for everyone. 

    “Everybody uses models, whether they realize it or not,” he says. “If you have a bunch of errands to do, and you try to plan out the order to do them so you don’t have to drive as much, that’s more or less the ‘traveling salesman’ problem, a classic from operations research. Or when someone is shopping for groceries and thinking about how much of each product they need — they’re basically using an inventory management model of their pantry.”
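    As a toy illustration of the errand-ordering idea, the sketch below applies a greedy nearest-neighbor heuristic to a handful of made-up stops; the coordinates and the choice of heuristic are assumptions for illustration, not anything from the book.

    ```python
    # Hypothetical errand plan: visit the closest unvisited stop next (a simple heuristic
    # for a small "traveling salesman"-style problem; not guaranteed to be optimal).
    from math import dist

    errands = {
        "home": (0, 0),
        "pharmacy": (2, 1),
        "grocery": (5, 2),
        "hardware store": (1, 5),
        "post office": (4, 5),
    }

    def nearest_neighbor_route(stops, start="home"):
        route, remaining = [start], set(stops) - {start}
        while remaining:
            here = stops[route[-1]]
            nearest = min(remaining, key=lambda s: dist(here, stops[s]))
            route.append(nearest)
            remaining.remove(nearest)
        return route

    print(" -> ".join(nearest_neighbor_route(errands)))
    ```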

    Larson’s takeaway is that since we all use conceptual models for thinking, planning, and decision-making, then understanding how our minds use models, and learning to use them more intentionally, can lead to clearer thinking, better planning, and smarter decision-making — especially when they are grounded in principles drawn from math and physics.

    Passion for the process

    Teaching STEM principles has long been a mission for Larson, who co-founded MIT BLOSSOMS (Blended Learning Open Source Science or Math Studies) with his late wife, Mary Elizabeth Murray. BLOSSOMS provides free, interactive STEM lessons and videos for primary school students around the world. Some of the exercises in “Model Thinking” refer to these videos as well.

    “A child’s educational opportunities shouldn’t be limited by where they were born or the wealth of their parents,” says Larson of the enterprise. 

    It was also Murray who encouraged Larson to write “Model Thinking.” “She saw how excited I was about it,” he says. “I had the choice of writing a textbook on queuing, say, or something else. It didn’t excite me at all.”

    Larson’s passion is for the process, not the answer. Throughout the book, he marks off opportunities for active learning with an icon showing the two tools necessary to complete each task: a sharpened pencil and a blank sheet of paper. 

    “Many of us in the age of instant Google searches have lost the ability — or perhaps the patience — to undertake multistep problems,” he argues.

    Model thinkers, on the other hand, understand and remember solutions better for having thought through the steps, and can better apply what they’ve learned to future problems. Larson’s “homework” is to do critical thinking, not just read about it. By working through thought experiments and scenarios, readers can achieve a deeper understanding of concepts like selection bias, random incidence, and orders of magnitude, all of which can present counterintuitive examples to the uninitiated.

    For Larson, who jokes that he is “an evangelist for models,” there is no better way to learn than by doing — except perhaps to teach. “Teaching a difficult topic is our best way to learn it ourselves, is an unselfish act, and bonds the teacher and learner,” he writes.

    In his long career as an educator and education advocate, Larson says he has always remained a learner himself. His love for learning illuminates every page of “Model Thinking,” which he hopes will provide others with the enjoyment and satisfaction that comes from learning new things and solving complex problems.

    “You will learn how to learn,” Larson says. “And you will enjoy it!”

  • A more effective experimental design for engineering a cell into a new state

    A strategy for cellular reprogramming involves using targeted genetic interventions to engineer a cell into a new state. The technique holds great promise in immunotherapy, for instance, where researchers could reprogram a patient’s T-cells so they are more potent cancer killers. Someday, the approach could also help identify life-saving cancer treatments or regenerative therapies that repair disease-ravaged organs.

    But the human body has about 20,000 genes, and a genetic perturbation could be on a combination of genes or on any of the over 1,000 transcription factors that regulate the genes. Because the search space is vast and genetic experiments are costly, scientists often struggle to find the ideal perturbation for their particular application.   

    Researchers from MIT and Harvard University developed a new, computational approach that can efficiently identify optimal genetic perturbations based on a much smaller number of experiments than traditional methods.

    Their algorithmic technique leverages the cause-and-effect relationship between factors in a complex system, such as genome regulation, to prioritize the best intervention in each round of sequential experiments.

    The researchers conducted a rigorous theoretical analysis to determine that their technique did, indeed, identify optimal interventions. With that theoretical framework in place, they applied the algorithms to real biological data designed to mimic a cellular reprogramming experiment. Their algorithms were the most efficient and effective.

    “Too often, large-scale experiments are designed empirically. A careful causal framework for sequential experimentation may allow identifying optimal interventions with fewer trials, thereby reducing experimental costs,” says co-senior author Caroline Uhler, a professor in the Department of Electrical Engineering and Computer Science (EECS) who is also co-director of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS) and Institute for Data, Systems and Society (IDSS).

    Joining Uhler on the paper, which appears today in Nature Machine Intelligence, are lead author Jiaqi Zhang, a graduate student and Eric and Wendy Schmidt Center Fellow; co-senior author Themistoklis P. Sapsis, professor of mechanical and ocean engineering at MIT and a member of IDSS; and others at Harvard and MIT.

    Active learning

    When scientists try to design an effective intervention for a complex system, such as cellular reprogramming, they often perform experiments sequentially. Such settings are ideally suited to a machine-learning approach called active learning. Data samples are collected and used to learn a model of the system that incorporates the knowledge gathered so far. From this model, an acquisition function is designed — an equation that evaluates all potential interventions and picks the best one to test in the next trial.

    This process is repeated until an optimal intervention is identified (or resources to fund subsequent experiments run out).
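    Schematically, the loop just described can be written as a short skeleton; the functions here are placeholders standing in for model fitting, the acquisition score, and the wet-lab experiment, not the authors’ implementation.

    ```python
    # Generic sequential (active-learning) experimental design loop, shown schematically.
    # fit_model, acquisition_score, and run_experiment are placeholders, not the paper's code.

    def active_learning_loop(candidate_interventions, budget,
                             fit_model, acquisition_score, run_experiment):
        data = []                                  # (intervention, outcome) pairs observed so far
        for _ in range(budget):                    # stop when resources for experiments run out
            model = fit_model(data)                # learn a model from everything seen so far
            # choose the intervention the acquisition function rates as most promising to test next
            chosen = max(candidate_interventions,
                         key=lambda x: acquisition_score(model, x))
            outcome = run_experiment(chosen)       # run the (costly) experiment
            data.append((chosen, outcome))         # fold the new observation back in
        return fit_model(data), data
    ```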

    “While there are several generic acquisition functions to sequentially design experiments, these are not effective for problems of such complexity, leading to very slow convergence,” Sapsis explains.

    Acquisition functions typically consider correlation between factors, such as which genes are co-expressed. But focusing only on correlation ignores the regulatory relationships or causal structure of the system. For instance, a genetic intervention can only affect the expression of downstream genes, but a correlation-based approach would not be able to distinguish between genes that are upstream or downstream.
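    As a toy illustration of that point, consider a three-gene regulatory chain: an intervention can only reach genes downstream of the one perturbed. The example below is hypothetical and not the paper’s model.

    ```python
    # Toy causal graph: A regulates B, which regulates C. Intervening on a gene can
    # only affect genes downstream of it, so correlation alone would overstate the options.
    from collections import deque

    edges = {"A": ["B"], "B": ["C"], "C": []}

    def downstream(gene, graph):
        """Genes reachable from `gene`, i.e., the only ones an intervention on it can affect."""
        seen, queue = set(), deque(graph[gene])
        while queue:
            g = queue.popleft()
            if g not in seen:
                seen.add(g)
                queue.extend(graph[g])
        return seen

    print(downstream("B", edges))  # {'C'}: an intervention on B cannot move A
    ```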

    “You can learn some of this causal knowledge from the data and use that to design an intervention more efficiently,” Zhang explains.

    The MIT and Harvard researchers leveraged this underlying causal structure for their technique. First, they carefully constructed an algorithm so it can only learn models of the system that account for causal relationships.

    Then the researchers designed the acquisition function so it automatically evaluates interventions using information on these causal relationships. They crafted this function so it prioritizes the most informative interventions, meaning those most likely to lead to the optimal intervention in subsequent experiments.

    “By considering causal models instead of correlation-based models, we can already rule out certain interventions. Then, whenever you get new data, you can learn a more accurate causal model and thereby further shrink the space of interventions,” Uhler explains.

    This smaller search space, coupled with the acquisition function’s special focus on the most informative interventions, is what makes their approach so efficient.

    The researchers further improved their acquisition function using a technique known as output weighting, inspired by the study of extreme events in complex systems. This method carefully emphasizes interventions that are likely to be closer to the optimal intervention.

    “Essentially, we view an optimal intervention as an ‘extreme event’ within the space of all possible, suboptimal interventions and use some of the ideas we have developed for these problems,” Sapsis says.    
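    One simple way to read that idea, sketched below under an assumed Gaussian-style weighting (an interpretation for illustration, not the paper’s implementation), is that an intervention’s acquisition score is scaled up when its predicted outcome lies close to the desired extreme.

    ```python
    # Hypothetical output-weighted acquisition: emphasize candidates whose predicted outcome
    # is near the target. An illustration of the idea, not the authors' actual method.
    import math

    def output_weighted_score(base_score, predicted_outcome, target, scale=1.0):
        """Scale an acquisition score by how close the predicted outcome is to the target."""
        weight = math.exp(-((predicted_outcome - target) ** 2) / (2 * scale ** 2))
        return base_score * weight
    ```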

    Enhanced efficiency

    They tested their algorithms using real biological data in a simulated cellular reprogramming experiment. For this test, they sought a genetic perturbation that would result in a desired shift in average gene expression. Their acquisition functions consistently identified better interventions than baseline methods at every step of the multi-stage experiment.

    “If you cut the experiment off at any stage, ours would still be more efficient than the baselines. This means you could run fewer experiments and get the same or better results,” Zhang says.

    The researchers are currently working with experimentalists to apply their technique toward cellular reprogramming in the lab.

    Their approach could also be applied to problems outside genomics, such as identifying optimal prices for consumer products or enabling optimal feedback control in fluid mechanics applications.

    In the future, they plan to enhance their technique for optimizations beyond those that seek to match a desired mean. In addition, their method assumes that scientists already understand the causal relationships in their system, but future work could explore how to use AI to learn that information, as well.

    This work was funded, in part, by the Office of Naval Research, the MIT-IBM Watson AI Lab, the MIT J-Clinic for Machine Learning and Health, the Eric and Wendy Schmidt Center at the Broad Institute, a Simons Investigator Award, the Air Force Office of Scientific Research, and a National Science Foundation Graduate Fellowship.

  • MIT welcomes nine MLK Visiting Professors and Scholars for 2023-24

    Established in 1990, the MLK Visiting Professors and Scholars Program at MIT welcomes outstanding scholars to the Institute for visiting appointments. MIT aspires to attract candidates who are, in the words of Martin Luther King Jr., “trailblazers in human, academic, scientific and religious freedom.” The program honors King’s life and legacy by expanding and extending the reach of our community. 

    The MLK Scholars Program has welcomed more than 140 professors, practitioners, and professionals at the forefront of their respective fields to MIT. They contribute to the growth and enrichment of the community through their interactions with students, staff, and faculty. They pay tribute to Martin Luther King Jr.’s life and legacy of service and social justice, and they embody MIT’s values: excellence and curiosity, openness and respect, and belonging and community.  

    Each new cohort of scholars actively participates in community engagement and supports MIT’s mission of “advancing knowledge and educating students in science, technology, and other areas of scholarship that will best serve the nation and the world in the 21st century.” 

    The 2023-2024 MLK Scholars:

    Tawanna Dillahunt is an associate professor at the University of Michigan’s School of Information with a joint appointment in their electrical engineering and computer science department. She is joining MIT at the end of a one-year visiting appointment as a Harvard Radcliffe Fellow. Her faculty hosts at the Institute are Catherine D’Ignazio in the Department of Urban Studies and Planning and Fotini Christia in the Institute for Data, Systems, and Society (IDSS). Dillahunt’s research focuses on equitable and inclusive computing. During her appointment, she will host a podcast to explore ethical and socially responsible ways to engage with communities, with a special emphasis on technology. 

    Kwabena Donkor is an assistant professor of marketing at Stanford Graduate School of Business; he is hosted by Dean Eckles, an associate professor of marketing at MIT Sloan School of Management. Donkor’s work bridges economics, psychology, and marketing. His scholarship combines insights from behavioral economics with data and field experiments to study social norms, identity, and how these constructs interact with policy in the marketplace.

    Denise Frazier joins MIT from Tulane University, where she is an assistant director in the New Orleans Center for the Gulf South. She is a researcher and performer and brings a unique interdisciplinary approach to her work at the intersection of cultural studies, environmental justice, and music. Frazier is hosted by Christine Ortiz, the Morris Cohen Professor in the Department of Materials Science and Engineering. 

    Wasalu Jaco, an accomplished performer and artist, is renewing his appointment at MIT for a second year; he is hosted jointly by Nick Montfort, a professor of digital media in the Comparative Media Studies Program/Writing, and Mary Fuller, a professor in the Literature Section and the current chair of the MIT faculty. In his second year, Jaco will work on Cyber/Cypher Rapper, a research project to develop a computational system that participates in responsive and improvisational rap.

    Morgane Konig first joined the Center for Theoretical Physics at MIT in December 2021 as a postdoc. Now a member of the 2023–24 MLK Visiting Scholars Program cohort, she will deepen her ties with scholars and research groups working in cosmology, primarily on early-universe inflation and late-universe signatures that could enable the scientific community to learn more about the mysterious nature of dark matter and dark energy. Her faculty hosts are David Kaiser, the Germeshausen Professor of the History of Science and professor of physics, and Alan Guth, the Victor F. Weisskopf Professor of Physics, both from the Department of Physics.

    The former minister of culture for Colombia and a transformational leader dedicated to environmental protection, Angelica Mayolo-Obregon joins MIT from Buenaventura, Colombia. During her time at MIT, she will serve as an advisor and guest speaker, and help MIT facilitate gatherings of environmental leaders committed to addressing climate action and conserving biodiversity across the Americas, with a special emphasis on Afro-descendant communities. Mayolo-Obregon is hosted by John Fernandez, a professor of building technology in the Department of Architecture and director of MIT’s Environmental Solutions Initiative, and by J. Phillip Thompson, an associate professor in the Department of Urban Studies and Planning (and a former MLK Scholar).

    Jean-Luc Pierite is a member of the Tunica-Biloxi Tribe of Louisiana and the president of the board of directors of the North American Indian Center of Boston. While at MIT, Pierite will build connections between MIT and the local Indigenous communities. His research focuses on enhancing climate resilience planning by infusing Indigenous knowledge and ecological practices into scientific and other disciplines. His faculty host is Janelle Knox-Hayes, the Lister Brothers Professor of Economic Geography and Planning in the Department of Urban Studies and Planning.

    Christine Taylor-Butler ’81 is a children’s book author who has written over 90 books; she is hosted by Graham Jones, an associate professor of anthropology. An advocate for literacy and STEAM education in underserved urban and rural schools, Taylor-Butler will partner with community organizations in the Boston area. She is also completing the fourth installment of her middle-grade series, “The Lost Tribe.” These books follow a team of five kids as they use science and technology to crack codes and solve mysteries.

    Angelino Viceisza, a professor of economics at Spelman College, joins MIT Sloan as an MLK Visiting Professor and the Phyllis Wallace Visiting Professor; he is hosted by Robert Gibbons, Sloan Distinguished Professor of Management, and Ray Reagans, Alfred P. Sloan Professor of Management, professor of organization studies, and associate dean for diversity, equity, and inclusion at MIT Sloan. Viceisza has strong, ongoing connections with MIT. His research focuses on remittances, retirement, and household finance in low-income countries and is relevant to public finance and financial economics, as well as the development and organizational economics communities at MIT. 

    Javit Drake, Moriba Jah, and Louis Massiah, members of last year’s cohort of MLK Scholars, will remain at MIT through the end of 2023.

    There are multiple opportunities throughout the year to meet our MLK Visiting Scholars and learn more about their research projects and their social impact. 

    For more information about the MLK Visiting Professors and Scholars Program and upcoming events, visit the website.

  • Improving US air quality, equitably

    Decarbonization of national economies will be key to achieving global net-zero emissions by 2050, a major stepping stone to the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius (and ideally 1.5 C), and thereby averting the worst consequences of climate change. Toward that end, the United States has pledged to reduce its greenhouse gas emissions by 50-52 percent from 2005 levels by 2030, backed by its implementation of the 2022 Inflation Reduction Act. This strategy is consistent with a 50-percent reduction in carbon dioxide (CO2) by the end of the decade.

    If U.S. federal carbon policy is successful, the nation’s overall air quality will also improve. Cutting CO2 emissions reduces atmospheric concentrations of air pollutants that lead to the formation of fine particulate matter (PM2.5), which causes more than 200,000 premature deaths in the United States each year. But an average nationwide improvement in air quality will not be felt equally; air pollution exposure disproportionately harms people of color and lower-income populations.

    How effective are current federal decarbonization policies in reducing U.S. racial and economic disparities in PM2.5 exposure, and what changes will be needed to improve their performance? To answer that question, researchers at MIT and Stanford University recently evaluated a range of policies which, like current U.S. federal carbon policies, reduce economy-wide CO2 emissions by 40-60 percent from 2005 levels by 2030. Their findings appear in an open-access article in the journal Nature Communications.

    First, they show that a carbon-pricing policy, while effective in reducing PM2.5 exposure for all racial/ethnic groups, does not significantly mitigate relative disparities in exposure. On average, the white population undergoes far less exposure than Black, Hispanic, and Asian populations. This policy does little to reduce exposure disparities because the CO2 emissions reductions that it achieves primarily occur in the coal-fired electricity sector. Other sectors, such as industry and heavy-duty diesel transportation, contribute far more PM2.5-related emissions.

    The researchers then examine thousands of different reduction options through an optimization approach to identify whether any possible combination of carbon dioxide reductions in the range of 40-60 percent can mitigate disparities. They find that no policy scenario aligned with current U.S. carbon dioxide emissions targets is likely to significantly reduce current PM2.5 exposure disparities.

    “Policies that address only about 50 percent of CO2 emissions leave many polluting sources in place, and those that prioritize reductions for minorities tend to benefit the entire population,” says Noelle Selin, supervising author of the study and a professor at MIT’s Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences. “This means that a large range of policies that reduce CO2 can improve air quality overall, but can’t address long-standing inequities in air pollution exposure.”

    So if climate policy alone cannot adequately achieve equitable air quality results, what viable options remain? The researchers suggest that more ambitious carbon policies could narrow racial and economic PM2.5 exposure disparities in the long term, but not within the next decade. To make a near-term difference, they recommend interventions designed to reduce PM2.5 emissions resulting from non-CO2 sources, ideally at the economic sector or community level.

    “Achieving improved PM2.5 exposure for populations that are disproportionately exposed across the United States will require thinking that goes beyond current CO2 policy strategies, most likely involving large-scale structural changes,” says Selin. “This could involve changes in local and regional transportation and housing planning, together with accelerated efforts towards decarbonization.”

  • 3 Questions: A new PhD program from the Center for Computational Science and Engineering

    This fall, the Center for Computational Science and Engineering (CCSE), an academic unit in the MIT Schwarzman College of Computing, is introducing a new standalone PhD degree program that will enable students to pursue research in cross-cutting methodological aspects of computational science and engineering. The launch follows approval of the center’s degree program proposal at the May 2023 Institute faculty meeting.

    Doctoral-level graduate study in computational science and engineering (CSE) at MIT has, for the past decade, been offered through an interdisciplinary program in which CSE students are admitted to one of eight participating academic departments in the School of Engineering or School of Science. While this model adds a strong disciplinary component to students’ education, the rapid growth of the CSE field and the establishment of the MIT Schwarzman College of Computing have prompted an exciting expansion of MIT’s graduate-level offerings in computation.

    The new degree, offered by the college, will run alongside MIT’s existing interdisciplinary offerings in CSE, complementing these doctoral training programs and preparing students to contribute to the leading edge of the field. Here, CCSE co-directors Youssef Marzouk and Nicolas Hadjiconstantinou discuss the standalone program and how they expect it to elevate the visibility and impact of CSE research and education at MIT.

    Q: What is computational science and engineering?

    Marzouk: Computational science and engineering focuses on the development and analysis of state-of-the-art methods for computation and their innovative application to problems of science and engineering interest. It has intellectual foundations in applied mathematics, statistics, and computer science, and touches the full range of science and engineering disciplines. Yet, it synthesizes these foundations into a discipline of its own — one that links the digital and physical worlds. It’s an exciting and evolving multidisciplinary field.

    Hadjiconstantinou: Examples of CSE research happening at MIT include modeling and simulation techniques, the underlying computational mathematics, and data-driven modeling of physical systems. Computational statistics and scientific machine learning have become prominent threads within CSE, joining high-performance computing, mathematically-oriented programming languages, and their broader links to algorithms and software. Application domains include energy, environment and climate, materials, health, transportation, autonomy, and aerospace, among others. Some of our researchers focus on general and widely applicable methodology, while others choose to focus on methods and algorithms motivated by a specific domain of application.

    Q: What was the motivation behind creating a standalone PhD program?

    Marzouk: The new degree focuses on a particular class of students whose background and interests are primarily in CSE methodology, in a manner that cuts across the disciplinary research structure represented by our current “with-departments” degree program. There is a strong research demand for such methodologically-focused students among CCSE faculty and MIT faculty in general. Our objective is to create a targeted, coherent degree program in this field that, alongside our other thriving CSE offerings, will create the leading environment for top CSE students worldwide.

    Hadjiconstantinou: One of CCSE’s most important functions is to recruit exceptional students who are trained in and want to work in computational science and engineering. Experience with our CSE master’s program suggests that students with a strong background and interests in the discipline prefer to apply to a pure CSE program for their graduate studies. The standalone degree aims to bring these students to MIT and make them available to faculty across the Institute.

    Q: How will this impact computing education and research at MIT? 

    Hadjiconstantinou: We believe that offering a standalone PhD program in CSE alongside the existing “with-departments” programs will significantly strengthen MIT’s graduate programs in computing. In particular, it will strengthen the methodological core of CSE research and education at MIT, while continuing to support the disciplinary-flavored CSE work taking place in our participating departments, which include Aeronautics and Astronautics; Chemical Engineering; Civil and Environmental Engineering; Materials Science and Engineering; Mechanical Engineering; Nuclear Science and Engineering; Earth, Atmospheric and Planetary Sciences; and Mathematics. Together, these programs will create a stronger CSE student cohort and facilitate deeper exchanges between the college and other units at MIT.

    Marzouk: In a broader sense, the new program is designed to help realize one of the key opportunities presented by the college, which is to create a richer variety of graduate degrees in computation and to involve as many faculty and units in these educational endeavors as possible. The standalone CSE PhD will join other distinguished doctoral programs of the college — such as the Department of Electrical Engineering and Computer Science PhD; the Operations Research Center PhD; and the Interdisciplinary Doctoral Program in Statistics and the Social and Engineering Systems PhD within the Institute for Data, Systems, and Society — and grow in a way that is informed by them. The confluence of these academic programs, and natural synergies among them, will make MIT quite unique.

  • Advancing social studies at MIT Sloan

    Around 2010, Facebook was a relatively small company with about 2,000 employees. So, when a PhD student named Dean Eckles showed up for an internship at the firm, he landed in a position with some real duties.

    Eckles essentially became the primary data scientist for the product manager who was overseeing the platform’s news feeds. That manager would pepper Eckles with questions. How exactly do people influence each other online? If Facebook tweaked its content-ranking algorithms, what would happen? What occurs when you show people more photos?

    As a doctoral candidate already studying social influence, Eckles was well-equipped to think about such questions, and being at Facebook gave him a lot of data to study them. 

    “If you show people more photos, they post more photos themselves,” Eckles says. “In turn, that affects the experience of all their friends. Plus they’re getting more likes and more comments. It affects everybody’s experience. But can you account for all of these compounding effects across the network?”

    Eckles, now an associate professor in the MIT Sloan School of Management and an affiliate faculty member of the Institute for Data, Systems, and Society, has made a career out of thinking carefully about that last question. Studying social networks allows Eckles to tackle significant questions involving, for example, the economic and political effects of social networks, the spread of misinformation, vaccine uptake during the Covid-19 crisis, and other aspects of the formation and shape of social networks. For instance, one study he co-authored this summer shows that people who move between U.S. states, change high schools, or attend college out of state wind up with more robust social networks, which are strongly associated with greater economic success.

    Eckles maintains another research channel focused on what scholars call “causal inference,” the methods and techniques that allow researchers to identify cause-and-effect connections in the world.

    “Learning about cause-and-effect relationships is core to so much science,” Eckles says. “In behavioral, social, economic, or biomedical science, it’s going to be hard. When you start thinking about humans, causality gets difficult. People do things strategically, and they’re electing into situations based on their own goals, so that complicates a lot of cause-and-effect relationships.”

    Eckles has now published dozens of papers in each of his different areas of work; for his research and teaching, Eckles received tenure from MIT last year.

    Five degrees and a job

    Eckles grew up in California, mostly near the Lake Tahoe area. He attended Stanford University as an undergraduate, arriving on campus in fall 2002 — and didn’t really leave for about a decade. Eckles has five degrees from Stanford. As an undergrad, he received a BA in philosophy and a BS in symbolic systems, an interdisciplinary major combining computer science, philosophy, psychology, and more. Eckles was set to attend Oxford University for graduate work in philosophy but changed his mind and stayed at Stanford for an MS in symbolic systems too. 

    “[Oxford] might have been a great experience, but I decided to focus more on the tech side of things,” he says.

    After receiving his first master’s degree, Eckles did take a year off from school and worked for Nokia, although the firm’s offices were adjacent to the Stanford campus and Eckles would sometimes stop and talk to faculty during the workday. Soon he was enrolled at Stanford again, this time earning an MA in statistics and then, in 2012, a PhD in communication. His doctoral dissertation wound up being about peer influence in networks. PhD in hand, Eckles promptly headed back to Facebook, this time for three years as a full-time researcher.

     “They were really supportive of the work I was doing,” Eckles says.

    Still, Eckles remained interested in moving into academia, and joined the MIT faculty in 2017 with a position in MIT Sloan’s Marketing Group. The group consists of a set of scholars with far-ranging interests, from cognitive science to advertising to social network dynamics.

    “Our group reflects something deeper about the Sloan school and about MIT as well, an openness to doing things differently and not having to fit into narrowly defined tracks,” Eckles says.

    For that matter, MIT has many faculty in different domains who work on causal inference, and whose work Eckles quickly cites — including economists Victor Chernozhukov and Alberto Abadie, and Joshua Angrist, whose book “Mostly Harmless Econometrics” Eckles name-checks as an influence.

    “I’ve been fortunate in my career that causal inference turned out to be a hot area,” Eckles says. “But I think it’s hot for good reasons. People started to realize that, yes, causal inference is really important. There are economists, computer scientists, statisticians, and epidemiologists who are going to the same conferences and citing each other’s papers. There’s a lot happening.”

    How do networks form?

    These days, Eckles is interested in expanding the questions he works on. In the past, he has often studied existing social networks and looked at their effects. For instance: One study Eckles co-authored, examining the 2012 U.S. elections, found that get-out-the-vote messages work very well, especially when relayed via friends.

    That kind of study takes the existence of the network as a given, though. Another kind of research question is, as Eckles puts it, “How do social networks form and evolve? And what are the consequences of these network structures?” His recent study about social networks expanding as people move around and change schools is one example of research that digs into the core life experiences underlying social networks.

    “I’m excited about doing more on how these networks arise and what factors, including everything from personality to public transit, affect their formation,” Eckles says.

    Understanding more about how social networks form gets at key questions about social life and civic structure. Suppose research shows how some people develop and maintain beneficial connections in life; it’s possible that those insights could be applied to programs helping people in more disadvantaged situations realize some of the same opportunities.

    “We want to act on things,” Eckles says. “Sometimes people say, ‘We care about prediction.’ I would say, ‘We care about prediction under intervention.’ We want to predict what’s going to happen if we try different things.”

    Ultimately, Eckles reflects, “Trying to reason about the origins and maintenance of social networks, and the effects of networks, is interesting substantively and methodologically. Networks are super-high-dimensional objects, even just a single person’s network and all its connections. You have to summarize it, so for instance we talk about weak ties or strong ties, but do we have the correct description? There are fascinating questions that require development, and I’m eager to keep working on them.”

  • Supporting sustainability, digital health, and the future of work

    The MIT and Accenture Convergence Initiative for Industry and Technology has selected three new research projects that will receive support from the initiative. The research projects aim to accelerate progress in meeting complex societal needs through new business convergence insights in technology and innovation.

    Established in MIT’s School of Engineering and now in its third year, the MIT and Accenture Convergence Initiative is furthering its mission to bring together technological experts from across business and academia to share insights and learn from one another. Recently, Thomas W. Malone, the Patrick J. McGovern (1959) Professor of Management, joined the initiative as its first-ever faculty lead. The research projects relate to three of the initiative’s key focus areas: sustainability, digital health, and the future of work.

    “The solutions these research teams are developing have the potential to have tremendous impact,” says Anantha Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “They embody the initiative’s focus on advancing data-driven research that addresses technology and industry convergence.”

    “The convergence of science and technology driven by advancements in generative AI, digital twins, quantum computing, and other technologies makes this an especially exciting time for Accenture and MIT to be undertaking this joint research,” says Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences. “Our three new research projects focusing on sustainability, digital health, and the future of work have the potential to help guide and shape future innovations that will benefit the way we work and live.”

    The MIT and Accenture Convergence Initiative charter project researchers are described below.

    Accelerating the journey to net zero with industrial clusters

    Jessika Trancik is a professor at the Institute for Data, Systems, and Society (IDSS). Trancik’s research examines the dynamic costs, performance, and environmental impacts of energy systems to inform climate policy and accelerate beneficial and equitable technology innovation. Trancik’s project aims to identify how industrial clusters can enable companies to derive greater value from decarbonization, potentially making companies more willing to invest in the clean energy transition.

    To meet the ambitious climate goals that have been set by countries around the world, rising greenhouse gas emissions trends must be rapidly reversed. Industrial clusters — geographically co-located or otherwise-aligned groups of companies representing one or more industries — account for a significant portion of greenhouse gas emissions globally. With major energy consumers “clustered” in proximity, industrial clusters provide a potential platform to scale low-carbon solutions by enabling the aggregation of demand and the coordinated investment in physical energy supply infrastructure.

    In addition to Trancik, the research team working on this project will include Aliza Khurram, a postdoc in IDSS; Micah Ziegler, an IDSS research scientist; Melissa Stark, global energy transition services lead at Accenture; Laura Sanderfer, strategy consulting manager at Accenture; and Maria De Miguel, strategy senior analyst at Accenture.

    Eliminating childhood obesity

    Anette “Peko” Hosoi is the Neil and Jane Pappalardo Professor of Mechanical Engineering. A common theme in her work is the fundamental study of shape, kinematic, and rheological optimization of biological systems with applications to the emergent field of soft robotics. Her project will use both data from existing studies and synthetic data to create a return-on-investment (ROI) calculator for childhood obesity interventions so that companies can identify earlier returns on their investment beyond reduced health-care costs.

    Childhood obesity is too prevalent to be solved by a single company, industry, drug, application, or program. In addition to the physical and emotional impact on children, society bears a cost through excess health care spending, lost workforce productivity, poor school performance, and increased family trauma. Meaningful solutions require multiple organizations, representing different parts of society, working together with a common understanding of the problem, the economic benefits, and the return on investment. ROI is particularly difficult to defend for any single organization because investment and return can be separated by many years and involve asymmetric investments, returns, and allocation of risk. Hosoi’s project will consider the incentives for a particular entity to invest in programs in order to reduce childhood obesity.
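    Because the investments and returns Hosoi describes can be separated by many years, any such calculator has to discount future benefits. A minimal net-present-value sketch with invented numbers (not the project’s actual model) looks like this:

    ```python
    # Minimal discounted-ROI sketch with invented cash flows; not the project's calculator.

    def npv(cashflows, rate):
        """Net present value of yearly cash flows (year 0 first) at a given discount rate."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

    # Year 0: program cost; later years: assumed benefits (health-care savings, productivity, etc.)
    cashflows = [-1_000_000, 50_000, 150_000, 300_000, 400_000, 450_000]
    print(f"NPV at a 5% discount rate: ${npv(cashflows, rate=0.05):,.0f}")
    ```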

    Hosoi will be joined by graduate students Pragya Neupane and Rachael Kha, both of IDSS, as well as a team from Accenture that includes Kenneth Munie, senior managing director at Accenture Strategy, Life Sciences; Kaveh Safavi, senior managing director in Accenture Health Industry; and Elizabeth Naik, global health and public service research lead.

    Generating innovative organizational configurations and algorithms for dealing with the problem of post-pandemic employment

    Thomas Malone is the Patrick J. McGovern (1959) Professor of Management at the MIT Sloan School of Management and the founding director of the MIT Center for Collective Intelligence. His research focuses on how new organizations can be designed to take advantage of the possibilities provided by information technology. Malone will be joined in this project by John Horton, the Richard S. Leghorn (1939) Career Development Professor at the MIT Sloan School of Management, whose research focuses on the intersection of labor economics, market design, and information systems. Malone and Horton’s project will look to reshape the future of work with the help of lessons learned in the wake of the pandemic.

    The Covid-19 pandemic has been a major disrupter of work and employment, and it is not at all obvious how governments, businesses, and other organizations should manage the transition to a desirable state of employment as the pandemic recedes. Using natural language processing models such as GPT-4, this project will look to identify new ways that companies can use AI to better match applicants to necessary jobs, create new types of jobs, assess the skills training needed, and identify interventions to help include women and other groups whose employment was disproportionately affected by the pandemic.

    In addition to Malone and Horton, the research team will include Rob Laubacher, associate director and research scientist at the MIT Center for Collective Intelligence, and Kathleen Kennedy, executive director at the MIT Center for Collective Intelligence and senior director at MIT Horizon. The team will also include Nitu Nivedita, managing director of artificial intelligence at Accenture, and Thomas Hancock, data science senior manager at Accenture.

  • To improve solar and other clean energy tech, look beyond hardware

    To continue reducing the costs of solar energy and other clean energy technologies, scientists and engineers will likely need to focus, at least in part, on improving technology features that are not based on hardware, according to MIT researchers. They describe this finding and the mechanisms behind it today in Nature Energy.

    While the cost of installing a solar energy system has dropped by more than 99 percent since 1980, this new analysis shows that “soft technology” features, such as the codified permitting practices, supply chain management techniques, and system design processes that go into deploying a solar energy plant, contributed only 10 to 15 percent of total cost declines. Improvements to hardware features were responsible for the lion’s share.

    But because soft technology is increasingly dominating the total costs of installing solar energy systems, this trend threatens to slow future cost savings and hamper the global transition to clean energy, says the study’s senior author, Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society (IDSS).

    Trancik’s co-authors include lead author Magdalena M. Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at the Hong Kong University of Science and Technology; Goksin Kavlak, a former IDSS graduate student and postdoc who is now an associate at the Brattle Group; and James McNerney, a former IDSS postdoc and now senior research fellow at the Harvard Kennedy School.

    The team created a quantitative model to analyze the cost evolution of solar energy systems, which captures the contributions of both hardware technology features and soft technology features.

    The framework shows that soft technology hasn’t improved much over time — and that soft technology features contributed even less to overall cost declines than previously estimated.

    Their findings indicate that to reverse this trend and accelerate cost declines, engineers could look at making solar energy systems less reliant on soft technology to begin with, or they could tackle the problem directly by improving inefficient deployment processes.  

    “Really understanding where the efficiencies and inefficiencies are, and how to address those inefficiencies, is critical in supporting the clean energy transition. We are making huge investments of public dollars into this, and soft technology is going to be absolutely essential to making those funds count,” says Trancik.

    “However,” Klemun adds, “we haven’t been thinking about soft technology design as systematically as we have for hardware. That needs to change.”

    The hard truth about soft costs

    Researchers have observed that the so-called “soft costs” of building a solar power plant — the costs of designing and installing the plant — are becoming a much larger share of total costs. In fact, the share of soft costs now typically ranges from 35 to 64 percent.

    “We wanted to take a closer look at where these soft costs were coming from and why they weren’t coming down over time as quickly as the hardware costs,” Trancik says.

    In the past, scientists have modeled the change in solar energy costs by dividing total costs into additive components — hardware components and nonhardware components — and then tracking how these components changed over time.

    “But if you really want to understand where those rates of change are coming from, you need to go one level deeper to look at the technology features. Then things split out differently,” Trancik says.

    The researchers developed a quantitative approach that models the change in solar energy costs over time by assigning contributions to the individual technology features, including both hardware features and soft technology features.

    For instance, their framework would capture how much of the decline in system installation costs — a soft cost — is due to standardized practices of certified installers — a soft technology feature. It would also capture how that same soft cost is affected by increased photovoltaic module efficiency — a hardware technology feature.

    With this approach, the researchers saw that improvements in hardware had the greatest impacts on driving down soft costs in solar energy systems. For example, the efficiency of photovoltaic modules doubled between 1980 and 2017, reducing overall system costs by 17 percent. But about 40 percent of that overall decline could be attributed to reductions in soft costs tied to improved module efficiency.
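    A stylized version of that attribution step, using a toy multiplicative cost model and invented numbers (not the paper’s actual framework), is sketched below: holding installation practices fixed isolates how much of a soft-cost decline is attributable to the hardware efficiency gain.

    ```python
    # Toy attribution of a soft-cost decline to a hardware feature (module efficiency).
    # Illustrative numbers and model only; not the study's framework.

    IRRADIANCE = 1000  # W/m^2 under standard test conditions

    def install_cost_per_watt(labor_cost_per_m2, module_efficiency):
        """Installation (soft) cost per watt: area-driven labor divided by watts per m^2."""
        return labor_cost_per_m2 / (module_efficiency * IRRADIANCE)

    old = install_cost_per_watt(labor_cost_per_m2=80.0, module_efficiency=0.09)   # earlier system
    new = install_cost_per_watt(labor_cost_per_m2=60.0, module_efficiency=0.18)   # later system
    hardware_only = install_cost_per_watt(labor_cost_per_m2=80.0, module_efficiency=0.18)

    share = (old - hardware_only) / (old - new)
    print(f"Hardware feature's share of the soft-cost decline: {share:.0%}")
    ```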

    The framework shows that, while hardware technology features tend to improve many cost components, soft technology features affect only a few.

    “You can see this structural difference even before you collect data on how the technologies have changed over time. That’s why mapping out a technology’s network of cost dependencies is a useful first step to identify levers of change, for solar PV and for other technologies as well,” Klemun notes.  

    Static soft technology

    The researchers used their model to study several countries, since soft costs can vary widely around the world. For instance, solar energy soft costs in Germany are about 50 percent less than those in the U.S.

    Because hardware improvements are often shared globally, the analysis showed, costs declined dramatically across locations over the past few decades. Soft technology innovations, by contrast, typically aren’t shared across borders. Moreover, the team found that countries with better soft technology performance 20 years ago still have better performance today, while those with worse performance saw little improvement.

    This country-by-country difference could be driven by regulation and permitting processes, cultural factors, or by market dynamics such as how firms interact with each other, Trancik says.

    “But not all soft technology variables are ones that you would want to change in a cost-reducing direction, like lower wages. So, there are other considerations, beyond just bringing the cost of the technology down, that we need to think about when interpreting these results,” she says.

    Their analysis points to two strategies for reducing soft costs. For one, scientists could focus on developing hardware improvements that make soft costs more dependent on hardware technology variables and less on soft technology variables, such as by creating simpler, more standardized equipment that could reduce on-site installation time.

    Or researchers could directly target soft technology features without changing hardware, perhaps by creating more efficient workflows for system installation or automated permitting platforms.

    “In practice, engineers will often pursue both approaches, but separating the two in a formal model makes it easier to target innovation efforts by leveraging specific relationships between technology characteristics and costs,” Klemun says.

    “Often, when we think about information processing, we are leaving out processes that still happen in a very low-tech way through people communicating with one another. But it is just as important to think about that as a technology as it is to design fancy software,” Trancik notes.

    In the future, she and her collaborators want to apply their quantitative model to study the soft costs related to other technologies, such as electric vehicle charging and nuclear fission. They are also interested in better understanding the limits of soft technology improvement, and how one could design better soft technology from the outset.

    This research is funded by the U.S. Department of Energy Solar Energy Technologies Office.