More stories

  • MIT Schwarzman College of Computing unveils Break Through Tech AI

    Aimed at driving diversity and inclusion in artificial intelligence, the MIT Stephen A. Schwarzman College of Computing is launching Break Through Tech AI, a new program to bridge the talent gap for women and underrepresented genders in AI positions in industry.

    Break Through Tech AI will provide skills-based training, industry-relevant portfolios, and mentoring to qualified undergraduate students in the Greater Boston area in order to position them more competitively for careers in data science, machine learning, and artificial intelligence. The free, 18-month program will also provide each student with a stipend for participation to lower the barrier for those typically unable to engage in an unpaid, extra-curricular educational opportunity.

    “Helping position students from diverse backgrounds to succeed in fields such as data science, machine learning, and artificial intelligence is critical for our society’s future,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “We look forward to working with students from across the Greater Boston area to provide them with skills and mentorship to help them find careers in this competitive and growing industry.”

    The college is collaborating with Break Through Tech — a national initiative launched by Cornell Tech in 2016 to increase the number of women and underrepresented groups graduating with degrees in computing — to host and administer the program locally. In addition to Boston, the inaugural artificial intelligence and machine learning program will be offered in two other metropolitan areas — one based in New York hosted by Cornell Tech and another in Los Angeles hosted by the University of California at Los Angeles Samueli School of Engineering.

    “Break Through Tech’s success at diversifying who is pursuing computer science degrees and careers has transformed lives and the industry,” says Judith Spitz, executive director of Break Through Tech. “With our new collaborators, we can apply our impactful model to drive inclusion and diversity in artificial intelligence.”

    The new program will kick off this summer at MIT with an eight-week, skills-based online course and in-person lab experience that teaches industry-relevant tools to build real-world AI solutions. Students will learn how to analyze datasets and use several common machine learning libraries to build, train, and implement their own ML models in a business context.
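
    As a rough illustration of the kind of exercise the course describes (loading a dataset, then building, training, and evaluating a model with a common machine learning library), the hedged sketch below uses scikit-learn on synthetic stand-in data; it is a hypothetical example, not material from the Break Through Tech AI curriculum.

        # Minimal sketch of a build/train/evaluate workflow with a common ML
        # library. scikit-learn and the synthetic "business" dataset are
        # illustrative assumptions, not program material.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.metrics import accuracy_score
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for a business dataset (e.g., which customers churn).
        X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
        X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

        model = RandomForestClassifier(n_estimators=100, random_state=0)
        model.fit(X_train, y_train)  # build and train
        print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))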

    Following the summer course, students will be matched with machine-learning challenge projects for which they will convene monthly at MIT and work in teams to build solutions and collaborate with an industry advisor or mentor throughout the academic year, resulting in a portfolio of resume-quality work. The participants will also be paired with young professionals in the field to help build their network, prepare their portfolio, practice for interviews, and cultivate workplace skills.

    “Leveraging the college’s strong partnership with industry, Break Through AI will offer unique opportunities to students that will enhance their portfolio in machine learning and AI,” says Asu Ozdaglar, deputy dean of academics of the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science. Ozdaglar, who will be the MIT faculty director of Break Through Tech AI, adds: “The college is committed to making computing inclusive and accessible for all. We’re thrilled to host this program at MIT for the Greater Boston area and to do what we can to help increase diversity in computing fields.”

    Break Through Tech AI is part of the MIT Schwarzman College of Computing’s focus to advance diversity, equity, and inclusion in computing. The college aims to improve and create programs and activities that broaden participation in computing classes and degree programs, increase the diversity of top faculty candidates in computing fields, and ensure that faculty search and graduate admissions processes have diverse slates of candidates and interviews.

    “By engaging in activities like Break Through Tech AI that work to improve the climate for underrepresented groups, we’re taking an important step toward creating more welcoming environments where all members can innovate and thrive,” says Alana Anderson, assistant dean for diversity, equity and inclusion for the Schwarzman College of Computing.

  • Computing our climate future

    On Monday, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the first in a five-part series highlighting the most promising concepts to emerge from the competition, and the interdisciplinary research teams behind them.

    With improvements to computer processing power and an increased understanding of the physical equations governing the Earth’s climate, scientists are continually working to refine climate models and improve their predictive power. But the tools they’re refining were originally conceived decades ago with only scientists in mind. When it comes to developing tangible climate action plans, these models remain inscrutable to the policymakers, public safety officials, civil engineers, and community organizers who need their predictive insight most.

    “What you end up having is a gap between what’s typically used in practice, and the real cutting-edge science,” says Noelle Selin, a professor in the Institute for Data, Systems and Society and the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and co-lead with Professor Raffaele Ferrari on the MIT Climate Grand Challenges flagship project “Bringing Computation to the Climate Crisis.” “How can we use new computational techniques, new understandings, new ways of thinking about modeling, to really bridge that gap between state-of-the-art scientific advances and modeling, and people who are actually needing to use these models?”

    Using this as a driving question, the team won’t just be trying to refine current climate models; they’re building a new one from the ground up.

    This kind of game-changing advancement is exactly what the MIT Climate Grand Challenges initiative is looking for, which is why the proposal has been named one of the five flagship projects in the ambitious Institute-wide program aimed at tackling the climate crisis. The proposal, which was selected from 100 submissions and was among 27 finalists, will receive additional funding and support to further the team’s goal of reimagining the climate modeling system. It also brings together contributors from across the Institute, including the MIT Schwarzman College of Computing, the School of Engineering, and the Sloan School of Management.

    When it comes to pursuing high-impact climate solutions that communities around the world can use, “it’s great to do it at MIT,” says Ferrari, EAPS Cecil and Ida Green Professor of Oceanography. “You’re not going to find many places in the world where you have the cutting-edge climate science, the cutting-edge computer science, and the cutting-edge policy science experts that we need to work together.”

    The climate model of the future

    The proposal builds on work that Ferrari began three years ago as part of a joint project with Caltech, the Naval Postgraduate School, and NASA’s Jet Propulsion Lab. Called the Climate Modeling Alliance (CliMA), the consortium of scientists, engineers, and applied mathematicians is constructing a climate model capable of more accurately projecting future changes in critical variables, such as clouds in the atmosphere and turbulence in the ocean, with uncertainties at least half the size of those in existing models.

    To do this, however, requires a new approach. For one thing, current models are too coarse in resolution — at the 100-to-200-kilometer scale — to resolve small-scale processes like cloud cover, rainfall, and sea ice extent. But also, explains Ferrari, part of this limitation in resolution is due to the fundamental architecture of the models themselves. The languages most global climate models are coded in were first created back in the 1960s and ’70s, largely by scientists for scientists. Since then, advances in computing driven by the corporate world and computer gaming have given rise to dynamic new computer languages, powerful graphics processing units, and machine learning.

    For climate models to take full advantage of these advancements, there’s only one option: starting over with a modern, more flexible language. Written in Julia, a part of Julialab’s Scientific Machine Learning technology, and spearheaded by Alan Edelman, a professor of applied mathematics in MIT’s Department of Mathematics, CliMA will be able to harness far more data than the current models can handle.

    “It’s been real fun finally working with people in computer science here at MIT,” Ferrari says. “Before it was impossible, because traditional climate models are in a language their students can’t even read.”

    The result is what’s being called the “Earth digital twin,” a climate model that can simulate global conditions on a large scale. This on its own is an impressive feat, but the team wants to take this a step further with their proposal.

    “We want to take this large-scale model and create what we call an ‘emulator’ that is only predicting a set of variables of interest, but it’s been trained on the large-scale model,” Ferrari explains. Emulators are not new technology, but what is new is that these emulators, being referred to as the “Earth digital cousins,” will take advantage of machine learning.

    “Now we know how to train a model if we have enough data to train them on,” says Ferrari. Machine learning for projects like this has only become possible in recent years as more observational data become available, along with improved computer processing power. The goal is to create smaller, more localized models by training them using the Earth digital twin. Doing so will save time and money, which is key if the digital cousins are going to be usable for stakeholders, like local governments and private-sector developers.
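
    In rough terms, an emulator is a fast statistical or machine-learning model fit to input-output pairs produced by an expensive simulation, so that new scenarios can be explored without rerunning the full model. The sketch below is a hypothetical illustration of that idea in Python with scikit-learn and a toy stand-in simulator; it is not CliMA code, and the numbers carry no physical meaning.

        # Hypothetical emulator sketch: fit a cheap ML model to input/output
        # pairs from an "expensive" simulator, then query the emulator instead.
        # The toy simulator and parameter ranges are invented for illustration.
        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor

        def expensive_simulator(params):
            """Toy stand-in mapping model parameters to a variable of interest."""
            co2, aerosol = params
            return 3.0 * np.log(co2 / 280.0) - 0.5 * aerosol

        rng = np.random.default_rng(0)
        train_params = rng.uniform([280.0, 0.0], [1120.0, 2.0], size=(50, 2))
        train_targets = np.array([expensive_simulator(p) for p in train_params])

        emulator = GaussianProcessRegressor().fit(train_params, train_targets)

        # Cheap prediction (with uncertainty) for a new scenario, no simulator run needed.
        mean, std = emulator.predict([[560.0, 1.0]], return_std=True)
        print(f"emulated output: {mean[0]:.2f} +/- {std[0]:.2f}")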

    Adaptable predictions for average stakeholders

    When it comes to setting climate-informed policy, stakeholders need to understand the probability of an outcome within their own regions — in the same way that you would prepare for a hike differently if there’s a 10 percent chance of rain versus a 90 percent chance. The smaller Earth digital cousin models will be able to do things the larger model can’t do, like simulate local regions in real time and provide a wider range of probabilistic scenarios.

    “Right now, if you wanted to use output from a global climate model, you usually would have to use output that’s designed for general use,” says Selin, who is also the director of the MIT Technology and Policy Program. With the project, the team can take end-user needs into account from the very beginning while also incorporating their feedback and suggestions into the models, helping to “democratize the idea of running these climate models,” as she puts it. Doing so means building an interactive interface that eventually will give users the ability to change input values and run the new simulations in real time. The team hopes that, eventually, the Earth digital cousins could run on something as ubiquitous as a smartphone, although developments like that are currently beyond the scope of the project.

    The next thing the team will work on is building connections with stakeholders. Through participation of other MIT groups, such as the Joint Program on the Science and Policy of Global Change and the Climate and Sustainability Consortium, they hope to work closely with policymakers, public safety officials, and urban planners to give them predictive tools tailored to their needs that can provide actionable outputs important for planning. Faced with rising sea levels, for example, coastal cities could better visualize the threat and make informed decisions about infrastructure development and disaster preparedness; communities in drought-prone regions could develop long-term civil planning with an emphasis on water conservation and wildfire resistance.

    “We want to make the modeling and analysis process faster so people can get more direct and useful feedback for near-term decisions,” she says.

    The final piece of the challenge is to incentivize students now so that they can join the project and make a difference. Ferrari has already had luck garnering student interest after co-teaching a class with Edelman and seeing the enthusiasm students have about computer science and climate solutions.

    “We’re intending in this project to build a climate model of the future,” says Selin. “So it seems really appropriate that we would also train the builders of that climate model.”

  • Improving predictions of sea level rise for the next century

    When we think of climate change, one of the most dramatic images that comes to mind is the loss of glacial ice. As the Earth warms, these enormous rivers of ice become a casualty of the rising temperatures. But, as ice sheets retreat, they also become an important contributor to one of the more dangerous outcomes of climate change: sea-level rise. At MIT, an interdisciplinary team of scientists is determined to improve sea level rise predictions for the next century, in part by taking a closer look at the physics of ice sheets.

    Last month, two research proposals on the topic, led by Brent Minchew, the Cecil and Ida Green Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), were announced as finalists in the MIT Climate Grand Challenges initiative. Launched in July 2020, Climate Grand Challenges fielded almost 100 project proposals from collaborators across the Institute who heeded the bold charge: to develop research and innovations that will deliver game-changing advances in the world’s efforts to address the climate challenge.

    As finalists, Minchew and his collaborators from the departments of Urban Studies and Planning, Economics, Civil and Environmental Engineering, the Haystack Observatory, and external partners, received $100,000 to develop their research plans. A subset of the 27 proposals tapped as finalists will be announced next month, making up a portfolio of multiyear “flagship” projects receiving additional funding and support.

    One goal of both Minchew proposals is to more fully understand the most fundamental processes that govern rapid changes in glacial ice, and to use that understanding to build next-generation models that are more predictive of how ice sheets will behave as they respond to, and influence, climate change.

    “We need to develop more accurate and computationally efficient models that provide testable projections of sea-level rise over the coming decades. To do so quickly, we want to make better and more frequent observations and learn the physics of ice sheets from these data,” says Minchew. “For example, how much stress do you have to apply to ice before it breaks?”

    Currently, Minchew’s Glacier Dynamics and Remote Sensing group uses satellites to observe the ice sheets on Greenland and Antarctica primarily with interferometric synthetic aperture radar (InSAR). But the data are often collected over long intervals of time, which only gives them “before and after” snapshots of big events. By taking more frequent measurements on shorter time scales, such as hours or days, they can get a more detailed picture of what is happening in the ice.

    “Many of the key unknowns in our projections of what ice sheets are going to look like in the future, and how they’re going to evolve, involve the dynamics of glaciers, or our understanding of how the flow speed and the resistances to flow are related,” says Minchew.

    At the heart of the two proposals is the creation of SACOS, the Stratospheric Airborne Climate Observatory System. The group envisions developing solar-powered drones that can fly in the stratosphere for months at a time, taking more frequent measurements using a new lightweight, low-power radar and other high-resolution instrumentation. They also propose air-dropping sensors directly onto the ice, equipped with seismometers and GPS trackers to measure high-frequency vibrations in the ice and pinpoint the motions of its flow.

    How glaciers contribute to sea level rise

    Current climate models predict an increase in sea levels over the next century, but by just how much is still unclear. Estimates are anywhere from 20 centimeters to two meters, which is a large difference when it comes to enacting policy or mitigation. Minchew points out that response measures will be different, depending on which end of the scale it falls toward. If it’s closer to 20 centimeters, coastal barriers can be built to protect low-level areas. But with higher surges, such measures become too expensive and inefficient to be viable, as entire portions of cities and millions of people would have to be relocated.

    “If we’re looking at a future where we could get more than a meter of sea level rise by the end of the century, then we need to know about that sooner rather than later so that we can start to plan and to do our best to prepare for that scenario,” he says.

    There are two ways glaciers and ice sheets contribute to rising sea levels: direct melting of the ice and accelerated transport of ice to the oceans. In Antarctica, warming waters melt the margins of the ice sheets, which tends to reduce the resistive stresses and allow ice to flow more quickly to the ocean. This thinning can also cause the ice shelves to be more prone to fracture, facilitating the calving of icebergs — events which sometimes cause even further acceleration of ice flow.

    Using data collected by SACOS, Minchew and his group can better understand what material properties in the ice allow for fracturing and calving of icebergs, and build a more complete picture of how ice sheets respond to climate forces. 

    “What I want is to reduce and quantify the uncertainties in projections of sea level rise out to the year 2100,” he says.

    From that more complete picture, the team — which also includes economists, engineers, and urban planning specialists — can work on developing predictive models and methods to help communities and governments estimate the costs associated with sea level rise, develop sound infrastructure strategies, and spur engineering innovation.

    Understanding glacier dynamics

    More frequent radar measurements and the collection of higher-resolution seismic and GPS data will allow Minchew and the team to develop a better understanding of the broad category of glacier dynamics — including calving, an important process in setting the rate of sea level rise that is currently not well understood.

    “Some of what we’re doing is quite similar to what seismologists do,” he says. “They measure seismic waves following an earthquake, or a volcanic eruption, or things of this nature and use those observations to better understand the mechanisms that govern these phenomena.”

    Air-droppable sensors will help them collect information about ice sheet movement, but this method comes with drawbacks — like installation and maintenance, which is difficult to do out on a massive ice sheet that is moving and melting. Also, the instruments can each only take measurements at a single location. Minchew equates it to a bobber in water: All it can tell you is how the bobber moves as the waves disturb it.

    But by also taking continuous radar measurements from the air, Minchew’s team can collect observations both in space and in time. Instead of just watching the bobber in the water, they can effectively make a movie of the waves propagating out, as well as visualize processes like iceberg calving happening in multiple dimensions.

    Once the bobbers are in place and the movies recorded, the next step is developing machine learning algorithms to help analyze all the new data being collected. While this data-driven kind of discovery has been a hot topic in other fields, this is the first time it has been applied to glacier research.

    “We’ve developed this new methodology to ingest this huge amount of data,” he says, “and from that create an entirely new way of analyzing the system to answer these fundamental and critically important questions.”

  • Transforming the travel experience for the Hong Kong airport

    The MIT Hong Kong Innovation Node welcomed 33 students to its flagship program, MIT Entrepreneurship and Maker Skills Integrator (MEMSI). Designed to develop entrepreneurial prowess through exposure to industry-driven challenges, the two-week hybrid bootcamp joined MIT students with Hong Kong peers to develop unique proposals for the Airport Authority of Hong Kong.

    Many airports across the world continue to be affected by the broader impact of Covid-19 with reduced air travel, prompting airlines to cut capacity. The result is a need for new business opportunities to propel economic development. For Hong Kong, the expansion toward non-aeronautical activities to boost regional consumption is therefore crucial, and included as part of the blueprint to transform the city’s airport into an airport city — characterized by capacity expansion, commercial developments, air cargo leadership, an autonomous transport system, connectivity to neighboring cities in mainland China, and evolution into a smart airport guided by sustainable practices. To enhance the customer experience, a key focus is capturing business opportunities at the nexus of digital and physical interactions. 

    These challenges “bring ideas and talent together to tackle real-world problems in the areas of digital service creation for the airport and engaging regional customers to experience the new airport city,” says Charles Sodini, the LeBel Professor of Electrical Engineering at MIT and faculty director at the Node. 

    The new travel standard

    Businesses are exploring new digital technologies, both to drive bookings and to facilitate safe travel. Developments such as Hong Kong airport’s Flight Token, a biometric technology using facial recognition to enable contactless check-ins and boarding at airports, unlock enormous potential that speeds up the departure journey of passengers. Seamless virtual experiences are not going to disappear.

    “What we may see could be a strong rebounce especially for travelers after the travel ban lifts … an opportunity to make travel easier, flying as simple as riding the bus,” says Chris Au Young, general manager of smart airport and general manager of data analytics at the Airport Authority of Hong Kong. 

    The passenger experience of the future will be “enabled by mobile technology, internet of things, and digital platforms,” he explains, adding that in the aviation community, “international organizations have already stipulated that biometric technology will be the new standard for the future … the next question is how this can be connected across airports.”  

    This extends further beyond travel, where Au Young illustrates, “If you go to a concert at Asia World Expo, which is the airport’s new arena in the future, you might just simply show your face rather than queue up in a long line waiting to show your tickets.”

    Accelerating the learning curve with industry support

    Working closely with industry mentors involved in the airport city’s development, students dived deep into discussions on the future of adapted travel, interviewed and surveyed travelers, and plowed through a range of airport data to uncover business insights.

    “With the large amount of data provided, my teammates and I worked hard to identify modeling opportunities that were both theoretically feasible and valuable in a business sense,” says Sean Mann, a junior at MIT studying computer science.

    Mann and his team applied geolocation data to inform machine learning predictions on a passenger’s journey once they enter the airside area. Coupled with biometric technology, passengers can receive personalized recommendations with improved accuracy via the airport’s bespoke passenger app, powered by data collected through thousands of iBeacons dispersed across the vicinity. Armed with these insights, the aim is to enhance the user experience by driving meaningful footfall to retail shops, restaurants, and other airport amenities.
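
    The sketch below gives a rough, hypothetical sense of how location traces of this kind can feed a prediction: it estimates a passenger’s likely next zone from past beacon-sighting sequences using simple transition counts. The zones, data, and first-order model are invented assumptions, not the team’s actual approach.

        # Hypothetical sketch: predict a passenger's likely next airside zone from
        # historical zone sequences (e.g., derived from beacon sightings) using
        # simple transition counts. All zones and journeys below are invented.
        from collections import Counter, defaultdict

        journeys = [
            ["security", "duty_free", "cafe", "gate_A"],
            ["security", "cafe", "gate_A"],
            ["security", "duty_free", "gate_B"],
        ]

        transitions = defaultdict(Counter)
        for journey in journeys:
            for current, nxt in zip(journey, journey[1:]):
                transitions[current][nxt] += 1

        def predict_next(zone):
            """Most frequently observed next zone, or None if the zone is unseen."""
            counts = transitions.get(zone)
            return counts.most_common(1)[0][0] if counts else None

        print(predict_next("security"))  # -> 'duty_free' in this toy data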

    The support of industry partners inspired his team “with their deep understanding of the aviation industry,” he added. “In a short period of two weeks, we built a proof-of-concept and a rudimentary business plan — the latter of which was very new to me.”

    Collaborating across time zones, Rumen Dangovski, a PhD candidate in electrical engineering and computer science at MIT, joined MEMSI from his home in Bulgaria. For him, learning “how to continually revisit ideas to discover important problems and meaningful solutions for a large and complex real-world system” was a key takeaway. The iterative process helped his team overcome the obstacle of narrowing down the scope of their proposal, with the help of industry mentors and advisors. 

    “Without the feedback from industry partners, we would not have been able to formulate a concrete solution that is actually helpful to the airport,” says Dangovski.  

    Beyond valuable mentorship, he adds, “there was incredible energy in our team, consisting of diverse talent, grit, discipline and organization. I was positively surprised how MEMSI can form quickly and give continual support to our team. The overall experience was very fun.”

    A sustainable future

    Mrigi Munjal, a PhD candidate studying materials science and engineering at MIT, had just taken a long-haul flight from Boston to Delhi prior to the program, and “was beginning to fully appreciate the scale of carbon emissions from aviation.” For her, “that one journey basically overshadowed all of my conscious pro-sustainability lifestyle changes,” she says.

    Knowing that international flights constitute the largest part of an individual’s carbon footprint, Munjal and her team wanted “to make flying more sustainable with an idea that is economically viable for all of the stakeholders involved.” 

    They proposed a carbon offset API that integrates into an airline’s ticket payment system, empowering individuals to take action to offset their carbon footprint, track their personal carbon history, and pick and monitor green projects. The advocacy extends to a digital display of interactive art featured in physical installations across the airport city. The intent is to raise community awareness about one’s impact on the environment and to make carbon offsetting accessible.
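
    As a hedged sketch of what the core calculation behind such an offset API might look like, the example below estimates a flight’s emissions from distance, records the purchase, and keeps a personal carbon history. The emission factor, offset price, and record format are placeholder assumptions, not details from the students’ proposal.

        # Hypothetical sketch of the calculation an offset API might expose.
        # The per-kilometer emission factor and offset price are placeholders.
        from dataclasses import dataclass, field

        EMISSIONS_KG_PER_KM = 0.115     # assumed per-passenger estimate
        OFFSET_PRICE_PER_TONNE = 15.0   # assumed USD per tonne of CO2

        @dataclass
        class PassengerCarbonHistory:
            offsets: list = field(default_factory=list)

            def add_flight(self, distance_km: float, project: str) -> float:
                """Estimate the flight's emissions and record an offset purchase."""
                tonnes = distance_km * EMISSIONS_KG_PER_KM / 1000.0
                cost = tonnes * OFFSET_PRICE_PER_TONNE
                self.offsets.append({"tonnes": round(tonnes, 3),
                                     "project": project,
                                     "cost_usd": round(cost, 2)})
                return cost

        history = PassengerCarbonHistory()
        print(history.add_flight(12_000, project="mangrove_restoration"))
        print(history.offsets)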

    Shaping the travel narrative

    Six teams of students created innovative solutions for the Hong Kong airport, which they presented in hybrid format to a panel of judges on Showcase Day. The diverse ideas included app-based airport retail recommendations supported by iBeacons; a platform that empowers customers to offset their carbon footprint; an app that connects fellow travelers for social and incentive-driven retail experiences; a travel membership exchange platform offering added flexibility to earn and redeem loyalty rewards; an interactive and gamified location-based retail experience using augmented reality; and a digital companion avatar to increase adoption of the airport’s Flight Token and improve airside passenger experience.

    Among the judges was Julian Lee ’97, former president of the MIT Club of Hong Kong and current executive director of finance at the Airport Authority of Hong Kong, who commended the students for demonstrably having “worked very thoroughly and thinking through the specific challenges,” addressing the real pain points that the airport is experiencing.

    “The ideas were very thoughtful and very unique to us. Some of you defined transit passengers as a sub-segment of the market that works. It only happens at the airport and you’ve been able to leverage this transit time in between,” remarked Lee. 

    Strong solutions include an implementation plan that shows a clear path to execution and a viable future. Among the solutions proposed, Au Young was impressed by teams for “paying a lot of attention to the business model … a very important aspect in all the ideas generated.”

    Addressing the students, Au Young says, “What we love is the way you reinvent the airport business and partnerships, presenting a new way of attracting people to engage more in new services and experiences — not just returning for a flight or just shopping with us, but innovating beyond the airport and using emerging technologies, using location data, using the retailer’s capability and adding some social activities in your solutions.”

    Despite today’s rapidly evolving travel industry, what remains unchanged is a focus on the customer. In the end, “it’s still about the passengers,” added Au Young.

  • Unlocking new doors to artificial intelligence

    Artificial intelligence research is constantly developing new hypotheses that have the potential to benefit society and industry; however, sometimes these benefits are not fully realized due to a lack of engineering tools. To help bridge this gap, graduate students in the MIT Department of Electrical Engineering and Computer Science’s 6-A Master of Engineering (MEng) Thesis Program work with some of the most innovative companies in the world and collaborate on cutting-edge projects, while contributing to and completing their MEng thesis.

    During a portion of the last year, four 6-A MEng students teamed up and completed an internship with IBM Research’s advanced prototyping team through the MIT-IBM Watson AI Lab on AI projects, often developing web applications to solve a real-world issue or business use case. Here, the students worked alongside AI engineers, user experience engineers, full-stack researchers, and generalists to accommodate project requests and receive thesis advice, says Lee Martie, IBM research staff member and 6-A manager. The students’ projects ranged from generating synthetic data to allow for privacy-sensitive data analysis to using computer vision to identify actions in video, which allows for monitoring human safety and tracking build progress on a construction site.

    “I appreciated all of the expertise from the team and the feedback,” says 6-A graduate Violetta Jusiega ’21, who participated in the program. “I think that working in industry gives the lens of making sure that the project’s needs are satisfied and [provides the opportunity] to ground research and make sure that it is helpful for some use case in the future.”

    Jusiega’s research intersected the fields of computer vision and design to focus on data visualization and user interfaces for the medical field. Working with IBM, she built an application programming interface (API) that let clinicians interact with a medical treatment strategy AI model, which was deployed in the cloud. Her interface provided a medical decision tree, as well as some prescribed treatment plans. After receiving feedback on her design from physicians at a local hospital, Jusiega developed iterations of the API and of how the results were displayed visually, so that it would be user-friendly and understandable for clinicians, who don’t usually code. She says that, “these tools are often not acquired into the field because they lack some of these API principles which become more important in an industry where everything is already very fast paced, so there’s little time to incorporate a new technology.” But this project might eventually allow for industry deployment. “I think this application has a bunch of potential, whether it does get picked up by clinicians or whether it’s simply used in research. It’s very promising and very exciting to see how technology can help us modify, or I can improve, the health-care field to be even more custom-tailored towards patients and giving them the best care possible,” she says.
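
    A clinician-facing endpoint of the general kind described might look something like the sketch below, which accepts a few patient features and returns a suggested plan together with the decision steps that produced it. FastAPI, the route name, the input fields, and the toy decision rule are all assumptions for illustration; this is not Jusiega’s interface.

        # Hypothetical sketch of a small clinician-facing API endpoint: it takes
        # patient features and returns a suggested treatment plan plus the
        # decision path, so results are readable without writing code.
        from fastapi import FastAPI
        from pydantic import BaseModel

        app = FastAPI()

        class PatientFeatures(BaseModel):
            age: int
            biomarker_level: float

        @app.post("/recommend")
        def recommend(patient: PatientFeatures):
            # Stand-in for a call to a deployed treatment-strategy model.
            steps = [f"age = {patient.age}", f"biomarker = {patient.biomarker_level}"]
            if patient.biomarker_level > 2.5:
                plan = "plan_B (escalated therapy)"
            else:
                plan = "plan_A (standard therapy)"
            return {"suggested_plan": plan, "decision_path": steps}

        # Served with, e.g.: uvicorn module_name:app --reload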

    Another 6-A graduate student, Spencer Compton, was also considering aiding professionals to make more informed decisions, for use in settings including health care, but he was tackling it from a causal perspective. When given a set of related variables, Compton was investigating if there was a way to determine not just correlation, but the cause-and-effect relationship between them (the direction of the interaction) from the data alone. For this, he and his collaborators from IBM Research and Purdue University turned to a field of math called information theory. With the goal of designing an algorithm to learn complex networks of causal relationships, Compton used ideas relating to entropy, the randomness in a system, to help determine if a causal relationship is present and how variables might be interacting. “When judging an explanation, people often default to Occam’s razor,” says Compton. “We’re more inclined to believe a simpler explanation than a more complex one.” In many cases, he says, it seemed to perform well. For instance, they were able to consider variables such as lung cancer, pollution, and X-ray findings. He was pleased that his research allowed him to help create a framework of “entropic causal inference” that could aid in safe and smart decisions in the future, in a satisfying way. “The math is really surprisingly deep, interesting, and complex,” says Compton. “We’re basically asking, ‘when is the simplest explanation correct?’ but as a math question.”
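
    The intuition can be made concrete with a much-simplified toy example: on discrete data generated so that X drives Y, the “explanation” in the causal direction needs less randomness than the reverse one. The sketch below only compares empirical conditional entropies; the actual entropic causal inference framework, which reasons about the entropy of the hidden exogenous noise, is more involved and is not reproduced here.

        # Simplified illustration of the entropy intuition: prefer the direction
        # whose explanation requires less randomness. This compares conditional
        # entropies on toy data and is NOT the paper's actual algorithm.
        import numpy as np
        from collections import Counter

        def entropy(counts):
            p = np.array(list(counts.values()), dtype=float)
            p /= p.sum()
            return -(p * np.log2(p)).sum()

        def conditional_entropy(xs, ys):
            """Estimate H(Y | X) from paired samples."""
            return entropy(Counter(zip(xs, ys))) - entropy(Counter(xs))

        rng = np.random.default_rng(0)
        x = rng.integers(0, 4, size=5_000)             # "cause" with four states
        flip = (rng.random(5_000) < 0.1).astype(int)   # occasional noise
        y = (x // 2) ^ flip                            # "effect", mostly determined by x

        print(f"H(Y|X) = {conditional_entropy(x, y):.2f} bits")  # ~0.5: simple explanation
        print(f"H(X|Y) = {conditional_entropy(y, x):.2f} bits")  # ~1.5: needs more randomness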

    Determining relationships within data can sometimes require large volumes of it to suss out patterns, but for data that may contain sensitive information, this may not be available. For her master’s work, Ivy Huang worked with IBM Research to generate synthetic tabular data using a natural language processing tool called a transformer model, which can learn and predict future values from past values. Trained on real data, the model can produce new data with similar patterns, properties, and relationships without restrictions like privacy, availability, and access that might come with real data in financial transactions and electronic medical records. Further, she created an API and deployed the model in an IBM cluster, which allowed users increased access to the model and abilities to query it without compromising the original data.

    Working with the advanced prototyping team, MEng candidate Brandon Perez also considered how to gather and investigate data with restrictions, but in his case it was to use computer vision frameworks, centered on an action recognition model, to identify construction site happenings. The team based their work on the Moments in Time dataset, which contains over a million three-second video clips with about 300 attached classification labels, and has performed well during AI training. However, the group needed more construction-based video data. For this, they used YouTube-8M. Perez built a framework for testing and fine-tuning existing object detection models and action recognition models that could plug into an automatic spatial and temporal localization tool — how they would identify and label particular actions in a video timeline. “I was satisfied that I was able to explore what made me curious, and I was grateful for the autonomy that I was given with this project,” says Perez. “I felt like I was always supported, and my mentor was a great support to the project.”
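
    For a rough sense of what fine-tuning an off-the-shelf action-recognition backbone for new labels can look like, the hedged sketch below uses PyTorch and torchvision’s r3d_18 video model with a replaced classification head. The label count, freezing strategy, and dummy clip tensor are assumptions for illustration; this is not Perez’s actual framework, and real training would use labeled video clips rather than random tensors.

        # Hypothetical fine-tuning sketch: freeze a video action-recognition
        # backbone and train a new classification head for new labels.
        import torch
        import torch.nn as nn
        from torchvision.models.video import r3d_18

        NUM_SITE_ACTIONS = 12                 # assumed number of new action labels

        model = r3d_18(weights=None)          # pretrained weights would normally be loaded
        for param in model.parameters():      # freeze the backbone ...
            param.requires_grad = False
        model.fc = nn.Linear(model.fc.in_features, NUM_SITE_ACTIONS)  # ... replace the head

        optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
        criterion = nn.CrossEntropyLoss()

        # Dummy batch standing in for real clips: (batch, channels, frames, height, width).
        clips = torch.randn(4, 3, 16, 112, 112)
        labels = torch.randint(0, NUM_SITE_ACTIONS, (4,))

        loss = criterion(model(clips), labels)
        loss.backward()
        optimizer.step()
        print("training loss:", loss.item())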

    “The kind of collaborations that we have seen between our MEng students and IBM researchers are exactly what the 6-A MEng Thesis program at MIT is all about,” says Tomas Palacios, professor of electrical engineering and faculty director of the MIT 6-A MEng Thesis program. “For more than 100 years, 6-A has been connecting MIT students with industry to solve together some of the most important problems in the world.”

  • Research aims to mitigate chemical and biological airborne threats

    When the air harbors harmful matter, such as a virus or toxic chemical, it’s not always easy to promptly detect this danger. Whether spread maliciously or accidentally, how fast and how far could hazardous plumes travel through a city? What could emergency managers do in response?

    These were questions that scientists, public health officials, and government agencies probed with an air flow study conducted recently in New York City. At 120 locations across all five boroughs of the city, a team led by MIT Lincoln Laboratory collected safe test particles and gases released earlier in subway stations and on streets, tracking their journeys. The exercise measured how far the materials traveled and what their concentrations were when detected.

    The results are expected to improve air dispersion models, and in turn, help emergency planners improve response protocols if a real chemical or biological event were to take place. 

    The study was performed under the Department of Homeland Security (DHS) Science and Technology Directorate’s (S&T) Urban Threat Dispersion Project. The project is largely driven by Lincoln Laboratory’s Counter–Weapons of Mass Destruction (CWMD) Systems Group to improve homeland defenses against airborne threats. This exercise followed a similar, though much smaller, study in 2016 that focused mainly on the subway system within Manhattan.

    “The idea was to look at how particles and gases move through urban environments, starting with a focus on subways,” says Mandeep Virdi, a researcher in the CWMD Systems Group who helped lead both studies.

    The particles and gases used in the study are safe to disperse. The particulates are primarily composed of maltodextrin sugar, and have been used in prior public safety exercises. To enable researchers to track the particles, the particles are modified with small amounts of synthetic DNA that acts as a unique “barcode.” This barcode corresponds to the location from which the particle was released and the day of release. When these particles are later collected and analyzed, researchers can know exactly where they came from.
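
    The bookkeeping this enables is simple in principle: each recovered barcode looks up where and when its particles were released, as in the toy sketch below. The barcode strings and sites are invented placeholders, not the study’s actual codes.

        # Toy illustration of barcode-to-release bookkeeping; entries are invented.
        RELEASE_LOG = {
            "BC-0417": {"location": "subway_station_A", "day": "day_1"},
            "BC-0932": {"location": "street_site_B", "day": "day_3"},
        }

        def identify(barcode: str) -> str:
            info = RELEASE_LOG.get(barcode)
            if info is None:
                return f"{barcode}: unknown barcode"
            return f"{barcode}: released at {info['location']} on {info['day']}"

        print(identify("BC-0417"))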

    The laboratory’s team led the process of releasing the particles and collecting the particle samples for analysis. A small sprayer is used to aerosolize the particles into the air. As the particles flow throughout the city, some get trapped in filters set up at the many dispersed collection sites. 

    To make processes more efficient for this large study, the team built special filter heads that rotated through multiple filters, saving time spent revisiting a collection site. They also developed a system using NFC (near-field communication) tags to simplify the cataloging and tracking of samples and equipment through a mobile app. 

    The researchers are still processing the approximately 5,000 samples that were collected over the five-day measurement campaign. The data will feed into existing particle dispersion models to improve simulations. One of these models, from Argonne National Laboratory, focuses on subway environments, and another model from Los Alamos National Laboratory simulates above-ground city environments, taking into account buildings and urban canyon air flows.

    Together, these models can show how a plume would travel from the subway to the streets, for example. These insights will enable emergency managers in New York City to develop more informed response strategies, as they did following the 2016 subway study.

    “The big question has always been, if there is a release and law enforcement can detect it in time, what do you actually do? Do you shut down the subway system? What can you do to mitigate those effects? Knowing that is the end goal,” Virdi says. 

    A new program, called the Chemical and Biological Defense Testbed, has just kicked off to further investigate those questions. Trina Vian at Lincoln Laboratory is leading this program, also under S&T funding.

    “Now that we’ve learned more about how material transports through the subway system, this test bed is looking at ways that we can mitigate that transport in a low-regret way,” Vian says.

    According to Vian, emergency managers don’t have many options other than to evacuate the area when a biological or chemical sensor is triggered. Yet current sensors tend to have high false-alarm rates, particularly in dirty environments. “You really can’t afford to make that evacuation call in error. Not only do you undermine people’s trust in the system, but also people can become injured, and it may actually be a non-threatening situation.”

    The goal of this test bed is to develop architectures and technologies that could allow for a range of appropriate response activities. For example, the team will be looking at ways through which air flow could be constrained or filtered in place, without disrupting traffic, while responders validate an alarm. They’ll also be testing the performance of new chemical and biological sensor technologies.

    Both Vian and Virdi stress the importance of collaboration for carrying out these large-scale studies, and in tackling the problem of airborne dangers in general. The test bed program is already benefiting by using equipment provided through the CWMD Alliance, a partnership of DHS and the Joint Program Executive Office for Chemical, Biological, Radiological and Nuclear Defense.

    A team of nearly 175 personnel worked together on the air flow exercise, spanning the Metropolitan Transportation Authority, New York City Transit, New York City Police Department, Port Authority of New York and New Jersey, New Jersey Transit, New York City Department of Environmental Protection, the New York City Department of Health and Mental Hygiene, the National Guard Weapons of Mass Destruction Civil Support Teams, the Environmental Protection Agency, and Department of Energy National Laboratories, in addition to S&T and Lincoln Laboratory.

    “It really was all about teamwork,” Virdi reflects. “Programs like this are why I came to Lincoln Laboratory. Seeing how the science is applied in a way that has real actionable results and how appreciative agencies are of what we’re doing has been rewarding. It’s exciting to see your program through, especially one as intense as this.”

  • Professor Emery Brown has big plans for anesthesiology

    Emery N. Brown — the Edward Hood Taplin Professor of Medical Engineering and of Computational Neuroscience at MIT, an MIT professor of health sciences and technology, an investigator with The Picower Institute for Learning and Memory at MIT, and the Warren M. Zapol Professor of Anaesthesia at Harvard Medical School and Massachusetts General Hospital (MGH) — clearly excels at many roles. Renowned internationally for his anesthesia and neuroscience research, he embodies a unique blend of anesthesiologist, statistician, neuroscientist, educator, and mentor to both students and colleagues. Notably, Brown is one of the most decorated clinician-scientists in the country; he is one of only 25 people — and the first African-American, statistician, and anesthesiologist — to be elected to all three National Academies (Science, Engineering, and Medicine).

    Now, he is handing off one of his many key roles and responsibilities. After almost 10 years, Brown is stepping down as co-director of the Harvard-MIT Program in Health Sciences and Technology (HST). He will turn his energies toward working to develop a new joint center between MIT and MGH that uses the study of anesthesia to design novel approaches to controlling brain states. While a goal of the new center will be to improve anesthesia and intensive care unit management, according to Brown, it will also study related problems such as treating depression, insomnia, and epilepsy, as well as enhancing coma recovery.

    Founded in 1970, HST is one of the oldest interdisciplinary educational programs focused on training the next generation of clinician-scientists and engineers, who learn to translate science, engineering, and medical research into clinical practice, with the aim of improving human health. The MIT Institute for Medical Engineering and Science (IMES), where Brown is associate director, is HST’s home at MIT. Brown was the first HST co-director after the establishment of IMES in 2012; Wolfram Goessling is the Harvard University co-director of HST.

    “Emery has been an exemplary leader for HST during his tenure, and has helped it become a hub for the training of world-class scientists, engineers, and clinicians,” says Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science. “I am deeply grateful for his many years of service and wish him well as he moves on to new endeavors.”

    Elazer R. Edelman, director of IMES, calls Brown “a phenom who has been dedicated to our programs for years.”

    “With his thoughtful leadership and understated style, Emery made many contributions to the HST community,” Edelman continues. “On a personal note, this is bittersweet for me, as Emery has been a partner and mentor in my role as IMES director. And while I know that he will always be there for me, as he has been for all of us at IMES and HST, I will miss our late-night calls and midday conferences on matters of import for MIT, IMES, and HST.”

    Brown says “it was an honor and a privilege to co-direct HST with Wolfram.”

    “The students, staff, and faculty are simply amazing,” Brown continues. “Although now more than 50 years old, HST remains at the vanguard for training PhD and MD students to work at the intersection between engineering, science, and medicine.”

    Goessling also thanks Brown for his leadership: “I truly valued Emery’s partnership and friendship, working together to deepen ties between the MIT and Harvard sides of HST. I am particularly grateful for working with Emery on our combined diversity efforts, leading to the HST Diversity Ambassadors initiative that made HST a better and stronger program.”

    According to Edelman, Brown was instrumental in the transition to new paradigms and relationships with HMS in the context of IMES. In 2014, he led the establishment of clear criteria for HST faculty membership, thereby strengthening the community of faculty experts who train students and provide research opportunities. More recently, he provided guidance through the turmoil of the ongoing Covid-19 pandemic, including the transition to online instruction and the return to the classroom. And Brown has always been a strong supporter of student diversity efforts, serving as an advocate and advisor to HST students.

    Brown holds BA, MA, and PhD degrees from Harvard University, and an MD from Harvard Medical School. He has been recognized with many awards, including the 2020 Swartz Prize in Theoretical and Computational Neuroscience, the 2018 Dickson Prize in Science, and an NIH Director’s Pioneer Award. Brown also served on President Barack Obama’s BRAIN Initiative Working Group. Among his many accomplishments, he has been cited for developing neural signal processing algorithms to characterize how neural systems represent and transmit information, and for unlocking the neurophysiology of how anesthetics produce the states of general anesthesia.

    Edelman says the process is underway to name a successor to Brown as co-director of HST at MIT.

  • Community policing in the Global South

    Community policing is meant to combat citizen mistrust of the police force. The concept was developed in the mid-20th century to help officers become part of the communities they are responsible for. The hope was that such presence would create a partnership between citizens and the police force, leading to reduced crime and increased trust. Studies in the 1990s from the United States, United Kingdom, and Australia showed that these goals can be achieved in certain circumstances. Many metropolitan areas in the Global North have since incorporated community policing into their practices.

    But a recently published study of six different sites in the Global South showed no significant positive effect associated with community policing across a range of countries.

    “We found no reduction in crime or insecurity in these communities, and no increase in trust in the police,” says Fotini Christia, an author of the paper, which was published in Science. Christia is the Ford International Professor in the Social Sciences at MIT and the director of the Sociotechnical Systems Research Center (SSRC) within the Institute for Data, Systems, and Society (IDSS). She was one of three on the steering committee for the research, which also included lead author Graeme Blair at the University of California at Los Angeles and Jeremy Weinstein at Stanford University. Fellow MIT political scientist Lily Tsai was also a co-author on the paper.

    In this study, randomized-control trials of community policing initiatives were implemented at sites in Santa Catarina State, Brazil; Medellín, Colombia; Monrovia, Liberia; Sorsogon Province, Philippines; Ugandan rural areas; and two Punjab Province districts in Pakistan. Each suite of interventions was developed based on the needs of the area but consisted of core elements of community policing such as officer recruitment and training, foot patrols, town hall meetings, and problem-oriented policing. The work was done by a collaboration of several social scientists in the United States and abroad. Major funding for this project was provided by the UK Foreign, Commonwealth and Development Office, awarded through the Evidence in Governance and Politics network.

    The null results were determined after interviewing 18,382 citizens and 874 police officers involved in the experiment over six years.

    The strength of these results lies in the size of the collaboration and the care taken in the research design. Input from researchers representing 22 different departments from universities around the world allowed for a broad diversity of study sites across the Global South. And the study was preregistered to establish a common approach to measurement and indicate exactly which effects the researchers were tracking, to avoid any chance of mining the data to find positive effects.

    “This is a pathbreaking study across a diverse set of sites that provides a new understanding about community policing outside of the Western world,” says Christopher Winship, the Diker-Tishman Professor of Sociology at Harvard University, who was not an author on the paper.

    Structural overhaul

    The reasons for the failure of community policing to elicit positive results were as varied as the sites themselves, but an important commonality was difficulties in implementation.

    “We saw three common problems: limited resources, a lack of prioritization of the reform, and rapid rotation of officers,” says Blair. “These challenges lead to weaker implementation of community policing than we’ve seen in ‘success stories’ in the U.S. and may explain why community policing didn’t deliver the same results in these Global South contexts.”

    Citizen attendance at community meetings was variable, and resources dedicated to following up on problems identified by citizens were scarce. Police officers in the countries represented in the study are often over-stretched, leaving them unable to adequately follow up on their community policing duties.

    For example, Ugandan police stations averaged one motorbike per whole station, and outposts averaged less than one. At the study sites in Pakistan, fewer than 25 percent of issues that arose in community meetings were followed up on. The police officers tried to push the problems through to other agencies that could assist, but those agencies were also underresourced.            

    There was also significant officer turnover. “In many places, we started with and trained one group of officers and ended with a completely different set of folks,” says Christia.

    In the Philippines, only 25 percent of officers were still in the same post 11 months after the start of the study. Not only is it difficult to train new recruits in the methods of community policing with that rate of turnover, it also makes it extremely difficult to build community respect and familiarity with officers.

    Even in the Global North, the success of community policing can vary. As part of their study, the researchers reviewed 43 existing randomized trials conducted since the 1970s to determine the success rate of community policing endeavors already in place.

    They found that in these initiatives, problem-oriented policing reduces crime and likely improves perceptions of safety in a community, but there is mixed-to-negative evidence on the benefits of police presence on crime and perceptions of police. 

    That these initiatives struggle to achieve consistently positive results in countries with better resources indicates there is significant work to be done before success can be achieved in the Global South. Improvements in policing in the Global South may require major structural overhauls of the systems to ensure resource availability, encourage community engagement, and enhance officers’ abilities to follow up on issues of concern.

    “Issues of crime and violence are at the top of the policy agenda in the Global South, and this research demonstrates how universities and government partners can work together to identify the most effective strategies for improving people’s sense of safety,” says Weinstein. “While community policing strategies didn’t deliver the anticipated results on their own, the challenges in implementation point to the need for more systemic reforms that provide the necessary resources and align incentives for police to respond to citizens’ primary concerns.”