More stories

  • “They can see themselves shaping the world they live in”

    During the journey from the suburbs to the city, the tree canopy often dwindles as skyscrapers rise. A group of New England Innovation Academy students wondered why that is.

    “Our friend Victoria noticed that where we live in Marlborough there are lots of trees in our own backyards. But if you drive just 30 minutes to Boston, there are almost no trees,” said high school junior Ileana Fournier. “We were struck by that duality.”

    This inspired Fournier and her classmates Victoria Leeth and Jessie Magenyi to prototype a mobile app that illustrates Massachusetts deforestation trends for Day of AI, a free, hands-on curriculum developed by the MIT Responsible AI for Social Empowerment and Education (RAISE) initiative, headquartered in the MIT Media Lab in collaboration with the MIT Schwarzman College of Computing and MIT Open Learning. They were among a group of 20 students from New England Innovation Academy who shared their projects during the 2024 Day of AI global celebration hosted with the Museum of Science.

    The Day of AI curriculum introduces K-12 students to artificial intelligence. Now in its third year, Day of AI enables students to improve their communities and collaborate on larger global challenges using AI. Fournier, Leeth, and Magenyi’s TreeSavers app falls under the Telling Climate Stories with Data module, one of four new climate-change-focused lessons.

    “We want you to be able to express yourselves creatively to use AI to solve problems with critical-thinking skills,” Cynthia Breazeal, director of MIT RAISE, dean for digital learning at MIT Open Learning, and professor of media arts and sciences, said during this year’s Day of AI global celebration at the Museum of Science. “We want you to have an ethical and responsible way to think about this really powerful, cool, and exciting technology.”

    Moving from understanding to action

    Day of AI invites students to examine the intersection of AI and various disciplines, such as history, civics, computer science, math, and climate change. With the curriculum available year-round, more than 10,000 educators across 114 countries have brought Day of AI activities to their classrooms and homes.

    The curriculum gives students the agency to evaluate local issues and invent meaningful solutions. “We’re thinking about how to create tools that will allow kids to have direct access to data and have a personal connection that intersects with their lived experiences,” Robert Parks, curriculum developer at MIT RAISE, said at the Day of AI global celebration.

    Before this year, first-year student Jeremie Kwapong said he knew very little about AI. “I was very intrigued,” he said. “I started to experiment with ChatGPT to see how it reacts. How close can I get this to human emotion? What is AI’s knowledge compared to a human’s knowledge?”

    In addition to sparking students’ interest in AI literacy, teachers around the world have told MIT RAISE that they want to use data science lessons to engage students in conversations about climate change. Day of AI’s new hands-on projects therefore use weather and climate change to show students why it’s important to develop a critical understanding of dataset design and collection when observing the world around them.

    “There is a lag between cause and effect in everyday lives,” said Parks.
    “Our goal is to demystify that, and allow kids to access data so they can see a long view of things.”

    Tools like MIT App Inventor — which allows anyone to create a mobile application — help students make sense of what they can learn from data. Fournier, Leeth, and Magenyi programmed TreeSavers in App Inventor to chart regional deforestation rates across Massachusetts, identify ongoing trends through statistical models, and predict environmental impact. The students put that “long view” of climate change into practice when developing TreeSavers’ interactive maps. Users can toggle between Massachusetts’s current tree cover, historical data, and future high-risk areas.

    Although AI provides fast answers, it doesn’t necessarily offer equitable solutions, said David Sittenfeld, director of the Center for the Environment at the Museum of Science. The Day of AI curriculum asks students to make decisions on sourcing data, ensuring unbiased data, and thinking responsibly about how findings could be used.

    “There’s an ethical concern about tracking people’s data,” said Ethan Jorda, a New England Innovation Academy student. His group used open-source data to program an app that helps users track and reduce their carbon footprint.

    Christine Cunningham, senior vice president of STEM Learning at the Museum of Science, believes students are prepared to use AI responsibly to make the world a better place. “They can see themselves shaping the world they live in,” said Cunningham. “Moving through from understanding to action, kids will never look at a bridge or a piece of plastic lying on the ground in the same way again.”

    Deepening collaboration on Earth and beyond

    The 2024 Day of AI speakers emphasized collaborative problem-solving at the local, national, and global levels.

    “Through different ideas and different perspectives, we’re going to get better solutions,” said Cunningham. “How do we start young enough that every child has a chance to both understand the world around them but also to move toward shaping the future?”

    Presenters from MIT, the Museum of Science, and NASA approached this question with a common goal — expanding STEM education to learners of all ages and backgrounds.

    “We have been delighted to collaborate with the MIT RAISE team to bring this year’s Day of AI celebration to the Museum of Science,” says Meg Rosenburg, manager of operations at the Museum of Science Centers for Public Science Learning. “This opportunity to highlight the new climate modules for the curriculum not only perfectly aligns with the museum’s goals to focus on climate and active hope throughout our Year of the Earthshot initiative, but it has also allowed us to bring our teams together and grow a relationship that we are very excited to build upon in the future.”

    Rachel Connolly, systems integration and analysis lead for NASA’s Science Activation Program, showed the power of collaboration with the example of how human comprehension of Saturn’s appearance has evolved. From Galileo’s early telescope to the Cassini space probe, modern imaging of Saturn represents 400 years of science, technology, and math working together to further knowledge.

    “Technologies, and the engineers who built them, advance the questions we’re able to ask and therefore what we’re able to understand,” said Connolly, a research scientist at the MIT Media Lab.

    New England Innovation Academy students saw an opportunity for collaboration a little closer to home.
    Emmett Buck-Thompson, Jeff Cheng, and Max Hunt envisioned a social media app to connect volunteers with local charities. Their project was inspired by Buck-Thompson’s father’s difficulties finding volunteering opportunities, Hunt’s role as president of the school’s Community Impact Club, and Cheng’s aspiration to reduce screen time for social media users. Using MIT App Inventor, the trio turned their combined ideas into a prototype with the potential to make a real-world impact in their community.

    The Day of AI curriculum teaches the mechanics of AI, ethical considerations and responsible uses, and interdisciplinary applications for different fields. It also empowers students to become creative problem solvers and engaged citizens in their communities and online. From supporting volunteer efforts to encouraging action for the state’s forests to tackling the global challenge of climate change, today’s students are becoming tomorrow’s leaders with Day of AI.

    “We want to empower you to know that this is a tool you can use to make your community better, to help people around you with this technology,” said Breazeal.

    Other Day of AI speakers included Tim Ritchie, president of the Museum of Science; Michael Lawrence Evans, program director of the Boston Mayor’s Office of New Urban Mechanics; Dava Newman, director of the MIT Media Lab; and Natalie Lao, executive director of the App Inventor Foundation.

  • Fotini Christia named director of the Institute for Data, Systems, and Society

    Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science, has been named the new director of the Institute for Data, Systems, and Society (IDSS), effective July 1.

    “Fotini is well-positioned to guide IDSS into the next chapter. With her tenure as the director of the Sociotechnical Systems Research Center and as an associate director of IDSS since 2020, she has actively forged connections between the social sciences, data science, and computation,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I eagerly anticipate the ways in which she will advance and champion IDSS in alignment with the spirit and mission of the Schwarzman College of Computing.”

    “Fotini’s profound expertise as a social scientist and her adept use of data science, computational tools, and novel methodologies to grasp the dynamics of societal evolution across diverse fields make her a natural fit to lead IDSS,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science.

    Christia’s research has focused on issues of conflict and cooperation in the Muslim world, for which she has conducted fieldwork in Afghanistan, Bosnia, Iraq, the Palestinian Territories, and Yemen, among other places. More recently, her research has examined how to effectively integrate artificial intelligence tools into public policy.

    She was appointed director of the Sociotechnical Systems Research Center (SSRC) and an associate director of IDSS in October 2020. SSRC, an interdisciplinary center housed within IDSS in the MIT Schwarzman College of Computing, focuses on the study of high-impact, complex societal challenges that shape our world.

    As part of IDSS, she is co-organizer of a cross-disciplinary research effort, the Initiative on Combatting Systemic Racism. Bringing together faculty and researchers from all of MIT’s five schools and the college, the initiative builds on extensive social science literature on systemic racism and uses big data to develop and harness computational tools that can help effect structural and normative change toward racial equity across housing, health care, policing, and social media. Christia is also chair of IDSS’s doctoral program in Social and Engineering Systems.

    Christia is the author of “Alliance Formation in Civil War” (Cambridge University Press, 2012), which was awarded the Luebbert Award for Best Book in Comparative Politics, the Lepgold Prize for Best Book in International Relations, and a Distinguished Book Award from the International Studies Association. She is co-editor, with Graeme Blair (University of California, Los Angeles) and Jeremy Weinstein (incoming dean at Harvard Kennedy School), of “Crime, Insecurity, and Community Policing: Experiments on Building Trust,” forthcoming in August 2024 from Cambridge University Press.

    Her research has also appeared in Science, Nature Human Behaviour, Review of Economic Studies, American Economic Journal: Applied Economics, NeurIPS, Communications Medicine, IEEE Transactions on Network Science and Engineering, American Political Science Review, and Annual Review of Political Science, among other journals.
    Her opinion pieces have been published in Foreign Affairs, The New York Times, The Washington Post, and The Boston Globe, among other outlets.

    A native of Greece, where she grew up in the port city of Salonika, Christia moved to the United States to attend college at Columbia University. She graduated magna cum laude in 2001 with a joint BA in economics-operations research and an MA in international affairs. She joined the MIT faculty in 2008 after receiving her PhD in public policy from Harvard University.

    Christia succeeds Noelle Selin, a professor in IDSS and the Department of Earth, Atmospheric, and Planetary Sciences. Selin has led IDSS as interim director since July 2023, following Professor Martin Wainwright.

    “I am incredibly grateful to Noelle for serving as interim director this year. Her contributions in this role, as well as her time leading the Technology and Policy Program, have been invaluable. I’m delighted she will remain part of the IDSS community as a faculty member,” says Huttenlocher.

  • An AI dataset carves new paths to tornado detection

    The return of spring in the Northern Hemisphere touches off tornado season. A tornado’s twisting funnel of dust and debris seems an unmistakable sight. But that sight can be obscured from radar, the tool of meteorologists. It’s hard to know exactly when a tornado has formed, or even why.

    A new dataset could hold answers. It contains radar returns from thousands of tornadoes that have hit the United States in the past 10 years. The storms that spawned those tornadoes appear alongside other severe storms, some with nearly identical conditions, that never produced one. MIT Lincoln Laboratory researchers who curated the dataset, called TorNet, have now released it open source. They hope to enable breakthroughs in detecting one of nature’s most mysterious and violent phenomena.

    “A lot of progress is driven by easily available, benchmark datasets. We hope TorNet will lay a foundation for machine learning algorithms to both detect and predict tornadoes,” says Mark Veillette, the project’s co-principal investigator with James Kurdzo. Both researchers work in the Air Traffic Control Systems Group. 

    Along with the dataset, the team is releasing models trained on it. The models show promise for machine learning’s ability to spot a twister. Building on this work could open new frontiers for forecasters, helping them provide more accurate warnings that might save lives. 

    Swirling uncertainty

    About 1,200 tornadoes occur in the United States every year, causing millions to billions of dollars in economic damage and claiming 71 lives on average. Last year, one unusually long-lasting tornado killed 17 people and injured at least 165 others along a 59-mile path in Mississippi.  

    Yet tornadoes are notoriously difficult to forecast because scientists don’t have a clear picture of why they form. “We can see two storms that look identical, and one will produce a tornado and one won’t. We don’t fully understand it,” Kurdzo says.

    A tornado’s basic ingredients are thunderstorms with instability caused by rapidly rising warm air and wind shear that causes rotation. Weather radar is the primary tool used to monitor these conditions. But tornadoes lie too low to be detected, even when moderately close to the radar. As a radar beam with a given tilt angle travels farther from the antenna, it gets higher above the ground, mostly seeing reflections from rain and hail carried in the “mesocyclone,” the storm’s broad, rotating updraft. A mesocyclone doesn’t always produce a tornado.
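
    That beam geometry can be quantified. As a back-of-the-envelope illustration (background material, not part of the TorNet work), the sketch below uses the standard 4/3-Earth-radius approximation from radar meteorology to estimate how high the beam center sits at a given range and elevation angle:

    ```python
    import math

    def beam_height_km(range_km, elevation_deg, antenna_height_km=0.0):
        """Approximate beam-center height above ground using the standard
        4/3 effective-Earth-radius model from radar meteorology."""
        effective_radius_km = (4.0 / 3.0) * 6371.0  # inflated radius models refraction
        elev = math.radians(elevation_deg)
        # Law-of-cosines-style height formula over a curved Earth
        height = math.sqrt(
            range_km**2
            + effective_radius_km**2
            + 2.0 * range_km * effective_radius_km * math.sin(elev)
        ) - effective_radius_km
        return height + antenna_height_km

    # Even at a low 0.5-degree sweep, the beam center is already ~1.5 km above
    # ground at 100 km range -- far above where a tornado touches down.
    print(f"{beam_height_km(100, 0.5):.2f} km")
    ```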

    With this limited view, forecasters must decide whether or not to issue a tornado warning. They often err on the side of caution. As a result, the rate of false alarms for tornado warnings is more than 70 percent. “That can lead to boy-who-cried-wolf syndrome,” Kurdzo says.  

    In recent years, researchers have turned to machine learning to better detect and predict tornadoes. However, raw datasets and models have not always been accessible to the broader community, stifling progress. TorNet is filling this gap.

    The dataset contains more than 200,000 radar images, 13,587 of which depict tornadoes. The rest of the images are non-tornadic, taken from storms in one of two categories: randomly selected severe storms or false-alarm storms (those that led a forecaster to issue a warning but that didn’t produce a tornado).

    Each sample of a storm or tornado comprises two sets of six radar images. The two sets correspond to different radar sweep angles. The six images portray different radar data products, such as reflectivity (showing precipitation intensity) or radial velocity (indicating if winds are moving toward or away from the radar).
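
    To make that layout concrete, here is a minimal sketch of how one sample might be represented in code. The container, array shape, and product names are illustrative assumptions for exposition, not TorNet’s actual file format or variable names:

    ```python
    from dataclasses import dataclass

    import numpy as np

    # Illustrative product list; the dataset's real variable names may differ.
    PRODUCTS = ["reflectivity", "radial_velocity", "spectrum_width",
                "differential_reflectivity", "specific_differential_phase",
                "correlation_coefficient"]

    @dataclass
    class StormSample:
        images: np.ndarray   # shape: (2 sweep angles, 6 products, height, width)
        is_tornadic: bool    # True for the 13,587 tornado-bearing samples
        category: str        # "tornadic", "random_severe", or "false_alarm"

        def product(self, name: str, sweep: int = 0) -> np.ndarray:
            """Return one 2D radar image for a given product and sweep angle."""
            return self.images[sweep, PRODUCTS.index(name)]

    # A synthetic placeholder sample on a 120 x 240 grid
    sample = StormSample(np.zeros((2, 6, 120, 240)), False, "false_alarm")
    low_sweep_reflectivity = sample.product("reflectivity", sweep=0)
    ```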

    A challenge in curating the dataset was first finding tornadoes. Within the corpus of weather radar data, tornadoes are extremely rare events. The team then had to balance those tornado samples with difficult non-tornado samples. If the dataset were too easy, say by comparing tornadoes to snowstorms, an algorithm trained on the data would likely over-classify storms as tornadic.

    “What’s beautiful about a true benchmark dataset is that we’re all working with the same data, with the same level of difficulty, and can compare results,” Veillette says. “It also makes meteorology more accessible to data scientists, and vice versa. It becomes easier for these two parties to work on a common problem.”

    Both researchers represent the progress that can come from cross-collaboration. Veillette is a mathematician and algorithm developer who has long been fascinated by tornadoes. Kurdzo is a meteorologist by training and a signal processing expert. In grad school, he chased tornadoes with custom-built mobile radars, collecting data to analyze in new ways.

    “This dataset also means that a grad student doesn’t have to spend a year or two building a dataset. They can jump right into their research,” Kurdzo says.

    This project was funded by Lincoln Laboratory’s Climate Change Initiative, which aims to leverage the laboratory’s diverse technical strengths to help address climate problems threatening human health and global security.

    Chasing answers with deep learning

    Using the dataset, the researchers developed baseline artificial intelligence (AI) models. They were particularly eager to apply deep learning, a form of machine learning that excels at processing visual data. On its own, deep learning can extract features (key observations that an algorithm uses to make a decision) from images across a dataset. Other machine learning approaches require humans to first manually label features. 

    “We wanted to see if deep learning could rediscover what people normally look for in tornadoes and even identify new things that typically aren’t searched for by forecasters,” Veillette says.

    The results are promising. Their deep learning model performed similarly to or better than all tornado-detecting algorithms known in the literature. The trained algorithm correctly classified 50 percent of weaker EF-1 tornadoes and over 85 percent of tornadoes rated EF-2 or higher, which make up the most devastating and costly occurrences of these storms.

    They also evaluated two other types of machine-learning models, and one traditional model to compare against. The source code and parameters of all these models are freely available. The models and dataset are also described in a paper submitted to a journal of the American Meteorological Society (AMS). Veillette presented this work at the AMS Annual Meeting in January.

    “The biggest reason for putting our models out there is for the community to improve upon them and do other great things,” Kurdzo says. “The best solution could be a deep learning model, or someone might find that a non-deep learning model is actually better.”

    TorNet could be useful in the weather community for other uses too, such as conducting large-scale case studies on storms. It could also be augmented with other data sources, like satellite imagery or lightning maps. Fusing multiple types of data could improve the accuracy of machine learning models.

    Taking steps toward operations

    On top of detecting tornadoes, Kurdzo hopes that models might help unravel the science of why they form.

    “As scientists, we see all these precursors to tornadoes — an increase in low-level rotation, a hook echo in reflectivity data, specific differential phase (KDP) foot and differential reflectivity (ZDR) arcs. But how do they all go together? And are there physical manifestations we don’t know about?” he asks.

    Teasing out those answers might be possible with explainable AI. Explainable AI refers to methods that allow a model to provide its reasoning, in a format understandable to humans, of why it came to a certain decision. In this case, these explanations might reveal physical processes that happen before tornadoes. This knowledge could help train forecasters, and models, to recognize the signs sooner. 
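
    The article doesn’t specify which explainability methods the team has in mind, but one common model-agnostic technique is occlusion sensitivity: hide part of the input and measure how much the model’s confidence drops. A toy sketch, with a stand-in scoring function in place of a trained network:

    ```python
    import numpy as np

    def occlusion_saliency(image, predict_fn, patch=8):
        """Occlude one patch of the image at a time and record how much the
        model's tornado score drops; high values mark regions the model relies on."""
        baseline = predict_fn(image)
        h, w = image.shape
        saliency = np.zeros_like(image, dtype=float)
        for i in range(0, h, patch):
            for j in range(0, w, patch):
                occluded = image.copy()
                occluded[i:i + patch, j:j + patch] = 0.0  # mask out one patch
                saliency[i:i + patch, j:j + patch] = baseline - predict_fn(occluded)
        return saliency

    # Stand-in "model": scores mean intensity near the image center, loosely
    # mimicking a detector fixated on the mesocyclone region.
    predict = lambda img: float(img[40:80, 40:80].mean())
    heatmap = occlusion_saliency(np.random.rand(120, 120), predict)
    ```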

    “None of this technology is ever meant to replace a forecaster. But perhaps someday it could guide forecasters’ eyes in complex situations, and give a visual warning to an area predicted to have tornadic activity,” Kurdzo says.

    Such assistance could be especially useful as radar technology improves and future networks potentially grow denser. Data refresh rates in a next-generation radar network are expected to increase from once every five minutes to approximately once per minute, perhaps faster than forecasters can interpret the new information. Because deep learning can process huge amounts of data quickly, it could be well-suited to monitoring radar returns in real time, alongside humans. Tornadoes can form and disappear in minutes.

    But the path to an operational algorithm is a long road, especially in safety-critical situations, Veillette says. “I think the forecaster community is still, understandably, skeptical of machine learning. One way to establish trust and transparency is to have public benchmark datasets like this one. It’s a first step.”

    The next steps, the team hopes, will be taken by researchers across the world who are inspired by the dataset and energized to build their own algorithms. Those algorithms will in turn go into test beds, where they’ll eventually be shown to forecasters, to start a process of transitioning into operations.

    In the end, the path could circle back to trust.

    “We may never get more than a 10- to 15-minute tornado warning using these tools. But if we could lower the false-alarm rate, we could start to make headway with public perception,” Kurdzo says. “People are going to use those warnings to take the action they need to save their lives.”

  • Growing our donated organ supply

    For those in need of one, an organ transplant is a matter of life and death. 

    Every year, the medical procedure gives thousands of people with advanced or end-stage diseases extended life. This “second chance” is heavily dependent on the availability, compatibility, and proximity of a precious resource that can’t be simply bought, grown, or manufactured — at least not yet.

    Instead, organs must be given — cut from one body and implanted into another. And because living organ donation is only viable in certain cases, many organs are only available for donation after the donor’s death.

    Unsurprisingly, the logistical and ethical complexity of distributing a limited number of transplant organs to a growing wait list of patients has received much attention. There’s an important part of the process that has received less focus, however, and which may hold significant untapped potential: organ procurement itself.

    “If you have a donated organ, who should you give it to? This question has been extensively studied in operations research, economics, and even applied computer science,” says Hammaad Adam, a graduate student in the Social and Engineering Systems (SES) doctoral program at the MIT Institute for Data, Systems, and Society (IDSS). “But there’s been a lot less research on where that organ comes from in the first place.”

    In the United States, nonprofits called organ procurement organizations, or OPOs, are responsible for finding and evaluating potential donors, interacting with grieving families and hospital administrations, and recovering and delivering organs — all while following the federal laws that serve as both their mandate and guardrails. Recent studies estimate that obstacles and inefficiencies lead to thousands of organs going uncollected every year, even as the demand for transplants continues to grow.

    “There’s been little transparent data on organ procurement,” argues Adam. Working with MIT computer science professors Marzyeh Ghassemi and Ashia Wilson, and in collaboration with stakeholders in organ procurement, Adam led a project to create a dataset called ORCHID: Organ Retrieval and Collection of Health Information for Donation. ORCHID contains a decade of clinical, financial, and administrative data from six OPOs.
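
    Analyses of a resource like ORCHID typically start with simple funnel statistics. The sketch below is illustrative only (the file name and column names are hypothetical stand-ins, not ORCHID’s published schema):

    ```python
    import pandas as pd

    # Hypothetical export: one row per donor referral, tracing the case
    # through the procurement pipeline.
    referrals = pd.read_csv("orchid_referrals.csv")

    # Hypothetical stage columns: was the family approached, did they
    # authorize donation, and was any organ ultimately procured?
    funnel = (referrals
              .groupby("opo_id")[["approached", "authorized", "procured"]]
              .mean()
              .sort_values("procured"))
    print(funnel)  # per-OPO conversion rates through the procurement funnel
    ```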

    “Our goal is for the ORCHID database to have an impact in how organ procurement is understood, internally and externally,” says Ghassemi.

    Efficiency and equity 

    It was looking to make an impact that drew Adam to SES and MIT. With a background in applied math and experience in strategy consulting, solving problems with technical components sits right in his wheelhouse.

    “I really missed challenging technical problems from a statistics and machine learning standpoint,” he says of his time in consulting. “So I went back and got a master’s in data science, and over the course of my master’s got involved in a bunch of academic research projects in a few different fields, including biology, management science, and public policy. What I enjoyed most were some of the more social science-focused projects that had immediate impact.”

    As a grad student in SES, Adam’s research focuses on using statistical tools to uncover health-care inequities, and developing machine learning approaches to address them. “Part of my dissertation research focuses on building tools that can improve equity in clinical trials and other randomized experiments,” he explains.

    One recent example of Adam’s work: developing a novel method to stop clinical trials early if the treatment has an unintended harmful effect for a minority group of participants. “I’ve also been thinking about ways to increase minority representation in clinical trials through improved patient recruitment,” he adds.

    Racial inequities in health care extend into organ transplantation, where a majority of wait-listed patients are not white — far in excess of those demographic groups’ share of the overall population. There are fewer organ donations from many of these communities, due to various obstacles that need to be better understood if they are to be overcome.

    “My work in organ transplantation began on the allocation side,” explains Adam. “In work under review, we examined the role of race in the acceptance of heart, liver, and lung transplant offers by physicians on behalf of their patients. We found that Black race of the patient was associated with significantly lower odds of organ offer acceptance — in other words, transplant doctors seemed more likely to turn down organs offered to Black patients. This trend may have multiple explanations, but it is nevertheless concerning.”

    Adam’s research has also found that donor-candidate race match was associated with significantly higher odds of offer acceptance, an association that Adam says “highlights the importance of organ donation from racial minority communities, and has motivated our work on equitable organ procurement.”

    Working with Ghassemi through the IDSS Initiative on Combatting Systemic Racism, Adam was introduced to OPO stakeholders looking to collaborate. “It’s this opportunity to impact not only health-care efficiency, but also health-care equity, that really got me interested in this research,” says Adam.

    Video: MIT Initiative on Combatting Systemic Racism – Healthcare (IDSS)

    Making an impact

    Creating a database like ORCHID means solving problems in multiple domains, from the technical to the political. Some efforts never overcome the first step: getting the data at all. Thankfully, several OPOs were already seeking collaborations and looking to improve their performance.

    “We have been lucky to have a strong partnership with the OPOs, and we hope to work together to find important insights to improve efficiency and equity,” says Ghassemi.

    The value of a database like ORCHID is in its potential for generating new insights, especially through quantitative analysis with statistics and computing tools like machine learning. The potential value in ORCHID was recognized with an MIT Prize for Open Data, an MIT Libraries award highlighting the importance and impact of research data that is openly shared.

    “It’s nice that the work got some recognition,” says Adam of the prize. “And it was cool to see some of the other great open data work that’s happening at MIT. I think there’s real impact in releasing publicly available data in an important and understudied domain.”

    All the same, Adam knows that building the database is only the first step.

    “I’m very interested in understanding the bottlenecks in the organ procurement process,” he explains. “As part of my thesis research, I’m exploring this by modeling OPO decision-making using causal inference and structural econometrics.”

    Using insights from this research, Adam also aims to evaluate policy changes that can improve both equity and efficiency in organ procurement. “And we’re hoping to recruit more OPOs, and increase the amount of data we’re releasing,” he says. “The dream state is every OPO joins our collaboration and provides updated data every year.”

    Adam is excited to see how other researchers might use the data to address inefficiencies in organ procurement. “Every organ donor saves between three and four lives,” he says. “So every research project that comes out of this dataset could make a real impact.”

  • New software enables blind and low-vision users to create interactive, accessible charts

    A growing number of tools enable users to make online data representations, like charts, that are accessible for people who are blind or have low vision. However, most tools require an existing visual chart that can then be converted into an accessible format.

    This creates barriers that prevent blind and low-vision users from building their own custom data representations, and it can limit their ability to explore and analyze important information.

    A team of researchers from MIT and University College London (UCL) wants to change the way people think about accessible data representations.

    They created a software system called Umwelt (which means “environment” in German) that can enable blind and low-vision users to build customized, multimodal data representations without needing an initial visual chart.

    Umwelt, an authoring environment designed for screen-reader users, incorporates an editor that allows someone to upload a dataset and create a customized representation, such as a scatterplot, that can include three modalities: visualization, textual description, and sonification. Sonification involves converting data into nonspeech audio.

    The system, which can represent a variety of data types, includes a viewer that enables a blind or low-vision user to interactively explore a data representation, seamlessly switching between each modality to interact with data in a different way.

    The researchers conducted a study with five expert screen-reader users who found Umwelt to be useful and easy to learn. In addition to offering an interface that empowered them to create data representations — something they said was sorely lacking — the users said Umwelt could facilitate communication between people who rely on different senses.

    “We have to remember that blind and low-vision people aren’t isolated. They exist in these contexts where they want to talk to other people about data,” says Jonathan Zong, an electrical engineering and computer science (EECS) graduate student and lead author of a paper introducing Umwelt. “I am hopeful that Umwelt helps shift the way that researchers think about accessible data analysis. Enabling the full participation of blind and low-vision people in data analysis involves seeing visualization as just one piece of this bigger, multisensory puzzle.”

    Joining Zong on the paper are fellow EECS graduate students Isabella Pedraza Pineros and Mengzhu “Katie” Chen; Daniel Hajas, a UCL researcher who works with the Global Disability Innovation Hub; and senior author Arvind Satyanarayan, associate professor of computer science at MIT who leads the Visualization Group in the Computer Science and Artificial Intelligence Laboratory. The paper will be presented at the ACM Conference on Human Factors in Computing Systems.

    De-centering visualization

    The researchers previously developed interactive interfaces that provide a richer experience for screen reader users as they explore accessible data representations. Through that work, they realized most tools for creating such representations involve converting existing visual charts.

    Aiming to decenter visual representations in data analysis, Zong and Hajas, who lost his sight at age 16, began co-designing Umwelt more than a year ago.

    At the outset, they realized they would need to rethink how to represent the same data using visual, auditory, and textual forms.

    “We had to put a common denominator behind the three modalities. By creating this new language for representations, and making the output and input accessible, the whole is greater than the sum of its parts,” says Hajas.

    To build Umwelt, they first considered what is unique about the way people use each sense.

    For instance, a sighted user can see the overall pattern of a scatterplot and, at the same time, move their eyes to focus on different data points. But for someone listening to a sonification, the experience is linear since data are converted into tones that must be played back one at a time.

    “If you are only thinking about directly translating visual features into nonvisual features, then you miss out on the unique strengths and weaknesses of each modality,” Zong adds.

    They designed Umwelt to offer flexibility, enabling a user to switch between modalities easily when one would better suit their task at a given time.

    To use the editor, one uploads a dataset to Umwelt, which employs heuristics to automatically create default representations in each modality.

    If the dataset contains stock prices for companies, Umwelt might generate a multiseries line chart, a textual structure that groups data by ticker symbol and date, and a sonification that uses tone length to represent the price for each date, arranged by ticker symbol.
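
    As a rough illustration of that kind of mapping (a minimal stand-in, not Umwelt’s actual implementation, which is built for screen-reader users), the sketch below turns a short series of prices into a sequence of tones whose durations track the values:

    ```python
    import wave

    import numpy as np

    def sonify(values, path="prices.wav", sample_rate=44100, base_len=0.12):
        """Toy sonification: each data point becomes a tone whose duration
        scales with its value, played strictly in order (a linear experience,
        unlike visually scanning a chart)."""
        lo, hi = min(values), max(values)
        chunks = []
        for v in values:
            norm = (v - lo) / ((hi - lo) or 1.0)
            duration = base_len * (0.5 + norm)  # longer tone = larger value
            t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)
            chunks.append(0.3 * np.sin(2 * np.pi * 440.0 * t))
        audio = (np.concatenate(chunks) * 32767).astype(np.int16)
        with wave.open(path, "wb") as f:
            f.setnchannels(1)   # mono
            f.setsampwidth(2)   # 16-bit samples
            f.setframerate(sample_rate)
            f.writeframes(audio.tobytes())

    sonify([101.2, 103.8, 99.5, 107.1])  # e.g., one ticker's closing prices
    ```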

    The default heuristics are intended to help the user get started.

    “In any kind of creative tool, you have a blank-slate effect where it is hard to know how to begin. That is compounded in a multimodal tool because you have to specify things in three different representations,” Zong says.

    The editor links interactions across modalities, so if a user changes the textual description, that information is adjusted in the corresponding sonification. Someone could utilize the editor to build a multimodal representation, switch to the viewer for an initial exploration, then return to the editor to make adjustments.

    Helping users communicate about data

    To test Umwelt, they created a diverse set of multimodal representations, from scatterplots to multiview charts, to ensure the system could effectively represent different data types. Then they put the tool in the hands of five expert screen reader users.

    Study participants mostly found Umwelt to be useful for creating, exploring, and discussing data representations. One user said Umwelt was like an “enabler” that decreased the time it took them to analyze data. The users agreed that Umwelt could help them communicate about data more easily with sighted colleagues.

    “What stands out about Umwelt is its core philosophy of de-emphasizing the visual in favor of a balanced, multisensory data experience. Often, nonvisual data representations are relegated to the status of secondary considerations, mere add-ons to their visual counterparts. However, visualization is merely one aspect of data representation. I appreciate their efforts in shifting this perception and embracing a more inclusive approach to data science,” says JooYoung Seo, an assistant professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign, who was not involved with this work.

    Moving forward, the researchers plan to create an open-source version of Umwelt that others can build upon. They also want to integrate tactile sensing into the software system as an additional modality, enabling the use of tools like refreshable tactile graphics displays.

    “In addition to its impact on end users, I am hoping that Umwelt can be a platform for asking scientific questions around how people use and perceive multimodal representations, and how we can improve the design beyond this initial step,” says Zong.

    This work was supported, in part, by the National Science Foundation and the MIT Morningside Academy for Design Fellowship.

  • Q&A: How refusal can be an act of design

    This month in the ACM Journal on Responsible Computing, MIT graduate student Jonathan Zong SM ’20 and co-author J. Nathan Matias SM ’13, PhD ’17 of the Cornell Citizens and Technology Lab examine how the notion of refusal can open new avenues in the field of data ethics. In their open-access report, “Data Refusal From Below: A Framework for Understanding, Evaluating, and Envisioning Refusal as Design,” the pair proposes a framework in four dimensions to map how individuals can say “no” to technology misuses. At the same time, the researchers argue that just like design, refusal is generative, and has the potential to create alternate futures.

    Zong, a PhD candidate in electrical engineering and computer science, a 2022-23 Design Fellow of the MIT Morningside Academy for Design, and a member of the MIT Visualization Group, describes his latest work in this Q&A.

    Q: How do you define the concept of “refusal,” and where does it come from?

    A: Refusal was developed in feminist and Indigenous studies. It’s this idea of saying “no,” without being given permission to say “no.” Scholars like Ruha Benjamin write about refusal in the context of surveillance, race, and bioethics, and talk about it as a necessary counterpart to consent. Others, like the authors of the “Feminist Data Manifest-No,” think of refusal as something that can help us commit to building better futures.

    Benjamin illustrates cases where the choice to refuse is not equally possible for everyone, citing examples involving genetic data and refugee screenings in the U.K. The imbalance of power in these situations underscores the broader concept of refusal, extending beyond rejecting specific options to challenging the entire set of choices presented.

    Q: What inspired you to work on the notion of refusal as an act of design?

    A: In my work on data ethics, I’ve been thinking about how to incorporate processes into research data collection, particularly around consent and opt-out, with a focus on individual autonomy and the idea of giving people choices about the way that their data is used. But when it comes to data privacy, simply making choices available is not enough. Choices can be unequally available, or create no-win situations where all options are bad. This led me to the concept of refusal: questioning the authority of data collectors and challenging their legitimacy.

    The key idea of my work is that refusal is an act of design. I think of refusal as deliberate actions to redesign our socio-technical landscape by exerting some sort of influence. Like design, refusal is generative. Like design, it’s oriented towards creating alternate possibilities and alternate futures. Design is a process of exploring or traversing a space of possibility. Applying a design framework to cases of refusal drawn from scholarly and journalistic sources allowed me to establish a common language for talking about refusal and to imagine refusals that haven’t been explored yet.

    Q: What are the stakes around data privacy and data collection?

    A: The use of data for facial recognition surveillance in the U.S. is a big example we use in the paper. When people do everyday things like post on social media or walk past cameras in public spaces, they might be contributing their data to training facial recognition systems. For instance, a tech company may take photos from a social media site and build facial recognition that they then sell to the government. In the U.S., these systems are disproportionately used by police to surveil communities of color. It is difficult to apply concepts like consent and opt out of these processes, because they happen over time and involve multiple kinds of institutions. It’s also not clear that individual opt-out would do anything to change the overall situation. Refusal then becomes a crucial avenue, at both individual and community levels, to think more broadly of how affected people still exert some kind of voice or agency, without necessarily having an official channel to do so.

    Q: Why do you think these issues are more particularly affecting disempowered communities?

    A: People who are affected by technologies are not always included in the design process for those technologies. Refusal then becomes a meaningful expression of values and priorities for those who were not part of the early design conversations. Actions taken against technologies like face surveillance — be it legal battles against companies, advocacy for stricter regulations, or even direct action like disabling security cameras — may not fit the conventional notion of participating in a design process. And yet, these are the actions available to refusers who may be excluded from other forms of participation.

    I’m particularly inspired by the movement around Indigenous data sovereignty. Organizations like the First Nations Information Governance Centre work towards prioritizing Indigenous communities’ perspectives in data collection, and refuse inadequate representation in official health data from the Canadian government. I think this is a movement that exemplifies the potential of refusal, not only as a way to reject what’s being offered, but also as a means to propose a constructive alternative, very much like design. Refusal is not merely a negation, but a pathway to different futures.

    Q: Can you elaborate on the design framework you propose?

    A: Refusals vary widely across contexts and scales. Developing a framework for refusal is about helping people see actions that are seemingly very different as instances of the same broader idea. Our framework consists of four facets: autonomy, time, power, and cost.

    Consider the case of IBM creating a facial recognition dataset using people’s photos without consent. We saw multiple forms of refusal emerge in response. IBM allowed individuals to opt out by withdrawing their photos. People collectively refused by creating a class-action lawsuit against IBM. Around the same time, many U.S. cities started passing local legislation banning the government use of facial recognition. Evaluating these cases through the framework highlights commonalities and differences. The framework highlights varied approaches to autonomy, like individual opt-out and collective action. Regarding time, opt-outs and lawsuits react to past harm, while legislation might proactively prevent future harm. Power dynamics differ; withdrawing individual photos minimally influences IBM, while legislation could potentially cause longer-term change. And as for cost, individual opt-out seems less demanding, while other approaches require more time and effort, balanced against potential benefits.

    The framework facilitates case description and comparison across these dimensions. I think its generative nature encourages exploration of novel forms of refusal as well. By identifying the characteristics we want to see in future refusal strategies — collective, proactive, powerful, low-cost… — we can aspire to shape future approaches and change the behavior of data collectors. We may not always be able to combine all these criteria, but the framework provides a means to articulate our aspirational goals in this context.

    Q: What impact do you hope this research will have?

    A: I hope to expand the notion of who can participate in design, and whose actions are seen as legitimate expressions of design input. I think a lot of work so far in the conversation around data ethics prioritizes the perspective of computer scientists who are trying to design better systems, at the expense of the perspective of people for whom the systems are not currently working. So, I hope designers and computer scientists can embrace the concept of refusal as a legitimate form of design, and a source of inspiration. There’s a vital conversation happening, one that should influence the design of future systems, even if expressed through unconventional means.

    One of the things I want to underscore in the paper is that design extends beyond software. Taking a socio-technical perspective, the act of designing encompasses software, institutions, relationships, and governance structures surrounding data use. I want people who aren’t software engineers, like policymakers or activists, to view themselves as integral to the technology design process.

  • “We offer another place for knowledge”

    In the Dzaleka Refugee Camp in Malawi, Jospin Hassan didn’t have access to the education opportunities he sought. So, he decided to create his own. 

    Hassan knew the booming fields of data science and artificial intelligence could bring job opportunities to his community and help solve local challenges. After earning a spot in the 2020-21 cohort of the Certificate Program in Computer and Data Science from MIT Refugee Action Hub (ReACT), Hassan started sharing MIT knowledge and skills with other motivated learners in Dzaleka.

    MIT ReACT is now Emerging Talent, part of the Jameel World Education Lab (J-WEL) at MIT Open Learning. Currently serving its fifth cohort of global learners, Emerging Talent’s year-long certificate program incorporates high-quality computer science and data analysis coursework from MITx, professional skill building, experiential learning, apprenticeship work, and opportunities for networking with MIT’s global community of innovators. Hassan’s cohort honed their leadership skills through interactive online workshops with J-WEL and the 10-week online MIT Innovation Leadership Bootcamp. 

    “My biggest takeaway was networking, collaboration, and learning from each other,” Hassan says.

    Today, Hassan’s organization ADAI Circle offers mentorship and education programs for youth and other job seekers in the Dzaleka Refugee Camp. The curriculum encourages hands-on learning and collaboration.

    Launched in 2020, ADAI Circle aims to foster job creation and reduce poverty in Malawi through technology and innovation. In addition to their classes in data science, AI, software development, and hardware design, their Innovation Hub offers internet access to anyone in need. 

    Doing something different in the community

    Hassan first had the idea for his organization in 2018 when he reached a barrier in his own education journey. There were several programs in the Dzaleka Refugee Camp teaching learners how to code websites and mobile apps, but Hassan felt that they were limited in scope. 

    “We had good devices and internet access,” he says, “but I wanted to learn something new.” 

    Hassan teamed up with co-founder Patrick Byamasu, and the pair set their sights on the longevity of AI and how it might create more jobs for people in their community. “The world is changing every day, and data scientists are in a higher demand today in various companies,” Hassan says. “For this reason, I decided to expand and share the knowledge that I acquired with my fellow refugees and the surrounding villages.”

    ADAI Circle draws inspiration from Hassan’s own experience with MIT Emerging Talent coursework, community, and training opportunities. For example, the MIT Bootcamps model is now standard practice for ADAI Circle’s annual hackathon. Hassan first introduced the hackathon to ADAI Circle students as part of his final experiential learning project of the Emerging Talent certificate program. 

    ADAI Circle’s annual hackathon is now an interactive — and effective — way to select students who will most benefit from its programs. The local schools’ curricula, Hassan says, might not provide enough of an academic challenge. “We can’t teach everyone and accommodate everyone because there are a lot of schools,” Hassan says, “but we offer another place for knowledge.” 

    The hackathon helps students develop data science and robotics skills. Before they start coding, students have to convince ADAI Circle teachers that their designs are viable, answering questions like, “What problem are you solving?” and “How will this help the community?” A community-oriented mindset is just as important to the curriculum.

    In addition to the practical skills Hassan gained from Emerging Talent, he leveraged the program’s network to help his community. Thanks to a social media connection Hassan made after one of Emerging Talent’s virtual events, the nongovernmental organization Give Internet brought internet access to ADAI Circle.

    Bridging the AI gap to unmet communities

    In 2023, ADAI Circle connected with another MIT Open Learning program, Responsible AI for Social Empowerment and Education (RAISE), which led to a pilot test of a project-based AI curriculum for middle school students. The Responsible AI for Computational Action (RAICA) curriculum equipped ADAI Circle students with AI skills for chatbots and natural language processing. 

    “I liked that program because it was based on what we’re teaching at the center,” Hassan says, speaking of his organization’s mission of bridging the AI gap to reach unmet communities.

    The RAICA curriculum was designed by education experts at MIT Scheller Teacher Education Program (STEP Lab) and AI experts from MIT Personal Robots group and MIT App Inventor. ADAI Circle teachers gave detailed feedback about the pilot to the RAICA team. During weekly meetings with Glenda Stump, education research scientist for RAICA and J-WEL, and Angela Daniel, teacher development specialist for RAICA, the teachers discussed their experiences, prepared for upcoming lessons, and translated the learning materials in real time. 

    “We are trying to create a curriculum that’s accessible worldwide and to students who typically have little or no access to technology,” says Mary Cate Gustafson-Quiett, curriculum design manager at STEP Lab and project manager for RAICA. “Working with ADAI and students in a refugee camp challenged us to design in more culturally and technologically inclusive ways.”

    Gustafson-Quiett says the curriculum feedback from ADAI Circle helped inform how RAICA delivers teacher development resources to accommodate learning environments with limited internet access. “They also exposed places where our team’s western ideals, specifically around individualism, crept into activities in the lesson and contrasted with their more communal cultural beliefs,” she says.

    Eager to introduce more MIT-developed AI resources, Hassan also shared MIT RAISE’s Day of AI curricula with ADAI Circle teachers. The new ChatGPT module gave students the chance to level up the chatbot programming skills they gained from the RAICA module. Some of the advanced students are taking the initiative to use the ChatGPT API to create their own projects in education.

    “We don’t want to tell them what to do, we want them to come up with their own ideas,” Hassan says.

    Although ADAI Circle faces many challenges, Hassan says his team is addressing them one by one. Last year, they didn’t have electricity in their Innovation Hub, but they solved that. This year, they achieved a stable internet connection that’s one of the fastest in Malawi. Next up, they are hoping to secure more devices for their students, create more jobs, and add additional hubs throughout the community. The work is never done, but Hassan is starting to see the impact that ADAI Circle is making. 

    “For those who want to learn data science, let’s let them learn,” Hassan says.

  • New hope for early pancreatic cancer intervention via AI-based risk prediction

    The first documented case of pancreatic cancer dates back to the 18th century. Since then, researchers have undertaken a protracted and challenging odyssey to understand the elusive and deadly disease. To date, there is no better cancer treatment than early intervention. Unfortunately, the pancreas, nestled deep within the abdomen, is particularly difficult to examine for early detection.

    MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) scientists, alongside Limor Appelbaum, a staff scientist in the Department of Radiation Oncology at Beth Israel Deaconess Medical Center (BIDMC), were eager to better identify potential high-risk patients. They set out to develop two machine-learning models for early detection of pancreatic ductal adenocarcinoma (PDAC), the most common form of the cancer. To access a broad and diverse database, the team synced up with a federated network company, using electronic health record data from various institutions across the United States. This vast pool of data helped ensure the models’ reliability and generalizability, making them applicable across a wide range of populations, geographical locations, and demographic groups.

    The two models — the “PRISM” neural network and a logistic regression model (a statistical technique for estimating probabilities) — outperformed current methods. The team’s comparison showed that while standard screening criteria identify about 10 percent of PDAC cases using a five-times-higher relative risk threshold, PRISM can detect 35 percent of PDAC cases at the same threshold.

    Using AI to detect cancer risk is not a new phenomenon — algorithms analyze mammograms and CT scans for lung cancer, and assist in the analysis of Pap smear tests and HPV testing, to name a few applications. “The PRISM models stand out for their development and validation on an extensive database of over 5 million patients, surpassing the scale of most prior research in the field,” says Kai Jia, an MIT PhD student in electrical engineering and computer science (EECS), MIT CSAIL affiliate, and first author on an open-access paper in eBioMedicine outlining the new work. “The model uses routine clinical and lab data to make its predictions, and the diversity of the U.S. population is a significant advancement over other PDAC models, which are usually confined to specific geographic regions, like a few health-care centers in the U.S. Additionally, using a unique regularization technique in the training process enhanced the models’ generalizability and interpretability.”

    “This report outlines a powerful approach to use big data and artificial intelligence algorithms to refine our approach to identifying risk profiles for cancer,” says David Avigan, a Harvard Medical School professor and the cancer center director and chief of hematology and hematologic malignancies at BIDMC, who was not involved in the study. “This approach may lead to novel strategies to identify patients with high risk for malignancy that may benefit from focused screening with the potential for early intervention.” 

    Prismatic perspectives

    The journey toward the development of PRISM began over six years ago, fueled by firsthand experiences with the limitations of current diagnostic practices. “Approximately 80-85 percent of pancreatic cancer patients are diagnosed at advanced stages, where cure is no longer an option,” says senior author Appelbaum, who is also a Harvard Medical School instructor and radiation oncologist. “This clinical frustration sparked the idea to delve into the wealth of data available in electronic health records (EHRs).”

    The CSAIL group’s close collaboration with Appelbaum made it possible to understand the combined medical and machine learning aspects of the problem better, eventually leading to a much more accurate and transparent model. “The hypothesis was that these records contained hidden clues — subtle signs and symptoms that could act as early warning signals of pancreatic cancer,” she adds. “This guided our use of federated EHR networks in developing these models, for a scalable approach for deploying risk prediction tools in health care.”

    Both PrismNN and PrismLR models analyze EHR data, including patient demographics, diagnoses, medications, and lab results, to assess PDAC risk. PrismNN uses artificial neural networks to detect intricate patterns in data features like age, medical history, and lab results, yielding a risk score for PDAC likelihood. PrismLR uses logistic regression for a simpler analysis, generating a probability score of PDAC based on these features. Together, the models offer a thorough evaluation of different approaches in predicting PDAC risk from the same EHR data.
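
    As a minimal illustration of the PrismLR side (synthetic data and illustrative feature names, not the published model or its inputs), logistic regression turns tabular EHR-style features into an interpretable probability:

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    # Stand-ins for standardized EHR-derived features, e.g., age, diabetes
    # history, and frequency of physician visits.
    X = rng.normal(size=(1000, 3))
    # Synthetic labels drawn from a known logistic relationship
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.8 * X[:, 0] + 1.2 * X[:, 1]))))

    model = LogisticRegression().fit(X, y)
    risk = model.predict_proba(X[:5])[:, 1]  # per-patient risk scores in [0, 1]

    # Each coefficient maps onto a single risk factor, which is what makes
    # the simpler model comparatively easy for physicians to interpret.
    print(dict(zip(["age", "diabetes_hx", "visit_freq"], model.coef_[0].round(2))))
    ```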

    One paramount point for gaining the trust of physicians, the team notes, is better understanding how the models work, known in the field as interpretability. The scientists pointed out that while logistic regression models are inherently easier to interpret, recent advancements have made deep neural networks somewhat more transparent. This helped the team to refine the thousands of potentially predictive features derived from a single patient’s EHR to approximately 85 critical indicators. These indicators, which include patient age, diabetes diagnosis, and an increased frequency of visits to physicians, are automatically discovered by the model but match physicians’ understanding of risk factors associated with pancreatic cancer.

    The path forward

    Despite the promise of the PRISM models, as with all research, some parts are still a work in progress. U.S. data alone are the current diet for the models, necessitating testing and adaptation for global use. The path forward, the team notes, includes expanding the model’s applicability to international datasets and integrating additional biomarkers for more refined risk assessment.

    “A subsequent aim for us is to facilitate the models’ implementation in routine health care settings. The vision is to have these models function seamlessly in the background of health care systems, automatically analyzing patient data and alerting physicians to high-risk cases without adding to their workload,” says Jia. “A machine-learning model integrated with the EHR system could empower physicians with early alerts for high-risk patients, potentially enabling interventions well before symptoms manifest. We are eager to deploy our techniques in the real world to help all individuals enjoy longer, healthier lives.” 

    Jia wrote the paper alongside Appelbaum and MIT EECS Professor and CSAIL Principal Investigator Martin Rinard, who are both senior authors of the paper. Researchers on the paper were supported during their time at MIT CSAIL, in part, by the Defense Advanced Research Projects Agency, Boeing, the National Science Foundation, and Aarno Labs. TriNetX provided resources for the project, and the Prevent Cancer Foundation also supported the team.