More stories

  • Q&A: How refusal can be an act of design

    This month in the ACM Journal on Responsible Computing, MIT graduate student Jonathan Zong SM ’20 and co-author J. Nathan Matias SM ’13, PhD ’17 of the Cornell Citizens and Technology Lab examine how the notion of refusal can open new avenues in the field of data ethics. In their open-access report, “Data Refusal From Below: A Framework for Understanding, Evaluating, and Envisioning Refusal as Design,” the pair proposes a four-dimensional framework for mapping how individuals can say “no” to misuses of technology. At the same time, the researchers argue that, just like design, refusal is generative and has the potential to create alternate futures.

    Zong, a PhD candidate in electrical engineering and computer science, a 2022-23 Design Fellow at the MIT Morningside Academy for Design, and a member of the MIT Visualization Group, describes his latest work in this Q&A.

    Q: How do you define the concept of “refusal,” and where does it come from?

    A: The concept of refusal was developed in feminist and Indigenous studies. It’s this idea of saying “no,” without being given permission to say “no.” Scholars like Ruha Benjamin write about refusal in the context of surveillance, race, and bioethics, and talk about it as a necessary counterpart to consent. Others, like the authors of the “Feminist Data Manifest-No,” think of refusal as something that can help us commit to building better futures.

    Benjamin illustrates cases where the choice to refuse is not equally possible for everyone, citing examples involving genetic data and refugee screenings in the U.K. The imbalance of power in these situations underscores the broader concept of refusal, extending beyond rejecting specific options to challenging the entire set of choices presented.

    Q: What inspired you to work on the notion of refusal as an act of design?

    A: In my work on data ethics, I’ve been thinking about how to build processes like consent and opt-out into research data collection, with a focus on individual autonomy and the idea of giving people choices about how their data is used. But when it comes to data privacy, simply making choices available is not enough. Choices can be unequally available, or create no-win situations where all options are bad. This led me to the concept of refusal: questioning the authority of data collectors and challenging their legitimacy.

    The key idea of my work is that refusal is an act of design. I think of refusal as deliberate actions to redesign our socio-technical landscape by exerting some sort of influence. Like design, refusal is generative. Like design, it’s oriented towards creating alternate possibilities and alternate futures. Design is a process of exploring or traversing a space of possibility. Applying a design framework to cases of refusal drawn from scholarly and journalistic sources allowed me to establish a common language for talking about refusal and to imagine refusals that haven’t been explored yet.

    Q: What are the stakes around data privacy and data collection?

    A: The use of data for facial recognition surveillance in the U.S. is a big example we use in the paper. When people do everyday things like post on social media or walk past cameras in public spaces, they might be contributing their data to training facial recognition systems. For instance, a tech company may take photos from a social media site and build facial recognition that it then sells to the government. In the U.S., these systems are disproportionately used by police to surveil communities of color. It is difficult to apply concepts like consent and opt-out to these processes, because they happen over time and involve multiple kinds of institutions. It’s also not clear that individual opt-out would do anything to change the overall situation. Refusal then becomes a crucial avenue, at both individual and community levels, for thinking more broadly about how affected people can still exert some kind of voice or agency, without necessarily having an official channel to do so.

    Q: Why do you think these issues particularly affect disempowered communities?

    A: People who are affected by technologies are not always included in the design process for those technologies. Refusal then becomes a meaningful expression of values and priorities for those who were not part of the early design conversations. Actions taken against technologies like face surveillance — be it legal battles against companies, advocacy for stricter regulations, or even direct action like disabling security cameras — may not fit the conventional notion of participating in a design process. And yet, these are the actions available to refusers who may be excluded from other forms of participation.

    I’m particularly inspired by the movement around Indigenous data sovereignty. Organizations like the First Nations Information Governance Centre work towards prioritizing Indigenous communities’ perspectives in data collection, and refuse inadequate representation in official health data from the Canadian government. I think this is a movement that exemplifies the potential of refusal, not only as a way to reject what’s being offered, but also as a means to propose a constructive alternative, very much like design. Refusal is not merely a negation, but a pathway to different futures.

    Q: Can you elaborate on the design framework you propose?

    A: Refusals vary widely across contexts and scales. Developing a framework for refusal is about helping people see actions that are seemingly very different as instances of the same broader idea. Our framework consists of four facets: autonomy, time, power, and cost.

    Consider the case of IBM creating a facial recognition dataset using people’s photos without consent. We saw multiple forms of refusal emerge in response. IBM allowed individuals to opt out by withdrawing their photos. People collectively refused by filing a class-action lawsuit against IBM. Around the same time, many U.S. cities started passing local legislation banning government use of facial recognition. Evaluating these cases through the framework highlights commonalities and differences. On autonomy, the framework captures varied approaches, like individual opt-out and collective action. Regarding time, opt-outs and lawsuits react to past harm, while legislation might proactively prevent future harm. Power dynamics differ; withdrawing individual photos minimally influences IBM, while legislation could potentially cause longer-term change. And as for cost, individual opt-out seems less demanding, while other approaches require more time and effort, balanced against potential benefits.

    The framework facilitates case description and comparison across these dimensions. I think its generative nature encourages exploration of novel forms of refusal as well. By identifying the characteristics we want to see in future refusal strategies — collective, proactive, powerful, low-cost… — we can aspire to shape future approaches and change the behavior of data collectors. We may not always be able to combine all these criteria, but the framework provides a means to articulate our aspirational goals in this context.
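
    As a purely illustrative aside, one minimal way to make the four facets concrete in code is to record each refusal case along the autonomy, time, power, and cost dimensions and compare them side by side. The labels below are paraphrased from this discussion, not taken from the paper’s own coding scheme.

```python
# Illustrative only: encoding refusal cases along the framework's four facets
# so they can be described and compared. Values are paraphrased, not official.
from dataclasses import dataclass


@dataclass
class RefusalCase:
    name: str
    autonomy: str   # individual vs. collective
    time: str       # reactive (responds to past harm) vs. proactive (prevents future harm)
    power: str      # rough sense of leverage over the data collector
    cost: str       # effort and resources required of the refusers


cases = [
    RefusalCase("Individual photo opt-out", "individual", "reactive", "low", "low"),
    RefusalCase("Class-action lawsuit", "collective", "reactive", "medium", "high"),
    RefusalCase("City bans on government facial recognition", "collective", "proactive", "high", "high"),
]

for c in cases:
    print(f"{c.name}: autonomy={c.autonomy}, time={c.time}, power={c.power}, cost={c.cost}")
```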

    Q: What impact do you hope this research will have?

    A: I hope to expand the notion of who can participate in design, and whose actions are seen as legitimate expressions of design input. I think a lot of work so far in the conversation around data ethics prioritizes the perspective of computer scientists who are trying to design better systems, at the expense of the perspective of people for whom the systems are not currently working. So, I hope designers and computer scientists can embrace the concept of refusal as a legitimate form of design, and a source of inspiration. There’s a vital conversation happening, one that should influence the design of future systems, even if expressed through unconventional means.

    One of the things I want to underscore in the paper is that design extends beyond software. Taking a socio-technical perspective, the act of designing encompasses software, institutions, relationships, and governance structures surrounding data use. I want people who aren’t software engineers, like policymakers or activists, to view themselves as integral to the technology design process.

  • Six MIT students selected as spring 2024 MIT-Pillar AI Collective Fellows

    The MIT-Pillar AI Collective has announced six fellows for the spring 2024 semester. With support from the program, the graduate students, who are in their final year of a master’s or PhD program, will conduct research in the areas of AI, machine learning, and data science with the aim of commercializing their innovations.

    Launched by MIT’s School of Engineering and Pillar VC in 2022, the MIT-Pillar AI Collective supports faculty, postdocs, and students conducting research on AI, machine learning, and data science. Supported by a gift from Pillar VC and administered by the MIT Deshpande Center for Technological Innovation, the mission of the program is to advance research toward commercialization.

    The spring 2024 MIT-Pillar AI Collective Fellows are:

    Yasmeen AlFaraj

    Yasmeen AlFaraj is a PhD candidate in chemistry whose interest is in the application of data science and machine learning to soft materials design to enable next-generation, sustainable plastics, rubber, and composite materials. More specifically, she is applying machine learning to the design of novel molecular additives to enable the low-cost manufacturing of chemically deconstructable thermosets and composites. AlFaraj’s work has led to the discovery of scalable, translatable new materials that could address thermoset plastic waste. As a Pillar Fellow, she will pursue bringing this technology to market, initially focusing on wind turbine blade manufacturing and conformal coatings. Through the Deshpande Center for Technological Innovation, AlFaraj serves as a lead for a team developing a spinout focused on recyclable versions of existing high-performance thermosets by incorporating small quantities of a degradable co-monomer. In addition, she participated in the National Science Foundation Innovation Corps program and recently graduated from the Clean Tech Open, where she focused on enhancing her business plan, analyzing potential markets, ensuring a complete IP portfolio, and connecting with potential funders. AlFaraj earned a BS in chemistry from University of California at Berkeley.

    Ruben Castro Ornelas

    Ruben Castro Ornelas is a PhD student in mechanical engineering who is passionate about the future of multipurpose robots and designing the hardware to use them with AI control solutions. Combining his expertise in programming, embedded systems, machine design, reinforcement learning, and AI, he designed a dexterous robotic hand capable of carrying out useful everyday tasks without sacrificing size, durability, complexity, or simulatability. Ornelas’s innovative design holds significant commercial potential in domestic, industrial, and health-care applications because it could be adapted to hold everything from kitchenware to delicate objects. As a Pillar Fellow, he will focus on identifying potential commercial markets, determining the optimal approach for business-to-business sales, and identifying critical advisors. Ornelas served as co-director of StartLabs, an undergraduate entrepreneurship club at MIT, where he earned a BS in mechanical engineering.

    Keeley Erhardt

    Keeley Erhardt is a PhD candidate in media arts and sciences whose research interests lie in the transformative potential of AI in network analysis, particularly for entity correlation and hidden link detection within and across domains. She has designed machine learning algorithms to identify and track temporal correlations and hidden signals in large-scale networks, uncovering online influence campaigns originating from multiple countries. She has similarly demonstrated the use of graph neural networks to identify coordinated cryptocurrency accounts by analyzing financial time series data and transaction dynamics. As a Pillar Fellow, Erhardt will pursue the potential commercial applications of her work, such as detecting fraud, propaganda, money laundering, and other covert activity in the finance, energy, and national security sectors. She has had internships at Google, Facebook, and Apple and held software engineering roles at multiple tech unicorns. Erhardt earned an MEng in electrical engineering and computer science and a BS in computer science, both from MIT.

    Vineet Jagadeesan Nair

    Vineet Jagadeesan Nair is a PhD candidate in mechanical engineering whose research focuses on modeling power grids and designing electricity markets to integrate renewables, batteries, and electric vehicles. He is broadly interested in developing computational tools to tackle climate change. As a Pillar Fellow, Nair will explore the application of machine learning and data science to power systems. Specifically, he will experiment with approaches to improve the accuracy of forecasting electricity demand and supply with high spatial-temporal resolution. In collaboration with Project Tapestry @ Google X, he is also working on fusing physics-informed machine learning with conventional numerical methods to increase the speed and accuracy of high-fidelity simulations. Nair’s work could help realize future grids with high penetrations of renewables and other clean, distributed energy resources. Outside academics, Nair is active in entrepreneurship, most recently helping to organize the 2023 MIT Global Startup Workshop in Greece. He earned an MS in computational science and engineering from MIT, an MPhil in energy technologies from Cambridge University as a Gates Scholar, and a BS in mechanical engineering and a BA in economics from University of California at Berkeley.

    Mahdi Ramadan

    Mahdi Ramadan is a PhD candidate in brain and cognitive sciences whose research interests lie at the intersection of cognitive science, computational modeling, and neural technologies. His work uses novel unsupervised methods for learning and generating interpretable representations of neural dynamics, capitalizing on recent advances in AI, specifically contrastive and geometric deep learning techniques capable of uncovering the latent dynamics underlying neural processes with high fidelity. As a Pillar Fellow, he will leverage these methods to gain a better understanding of dynamical models of muscle signals for generative motor control. By supplementing current spinal prosthetics with generative AI motor models that can streamline, speed up, and correct limb muscle activations in real time, as well as potentially using multimodal vision-language models to infer the patients’ high-level intentions, Ramadan aspires to build truly scalable, accessible, and capable commercial neuroprosthetics. Ramadan’s entrepreneurial experience includes being the co-founder of UltraNeuro, a neurotechnology startup, and co-founder of Presizely, a computer vision startup. He earned a BS in neurobiology from University of Washington.

    Rui (Raymond) Zhou

    Rui (Raymond) Zhou is a PhD candidate in mechanical engineering whose research focuses on multimodal AI for engineering design. As a Pillar Fellow, he will advance models that could enable designers to translate information in any modality or combination of modalities into comprehensive 2D and 3D designs, including parametric data, component visuals, assembly graphs, and sketches. These models could also optimize existing human designs to accomplish goals such as improving ergonomics or reducing drag coefficient. Ultimately, Zhou aims to translate his work into a software-as-a-service platform that redefines product design across various sectors, from automotive to consumer electronics. His efforts have the potential to not only accelerate the design process but also reduce costs, opening the door to unprecedented levels of customization, idea generation, and rapid prototyping. Beyond his academic pursuits, Zhou founded UrsaTech, a startup that integrates AI into education and engineering design. He earned a BS in electrical engineering and computer sciences from University of California at Berkeley.

  • Generating the policy of tomorrow

    As first-year students in the Social and Engineering Systems (SES) doctoral program within the MIT Institute for Data, Systems, and Society (IDSS), Eric Liu and Ashely Peake share an interest in investigating housing inequality issues.

    They also share a desire to dive head-first into their research.

    “In the first year of your PhD, you’re taking classes and still getting adjusted, but we came in very eager to start doing research,” Liu says.

    Liu, Peake, and many others found an opportunity to do hands-on research on real-world problems at the MIT Policy Hackathon, an initiative organized by students in IDSS, including the Technology and Policy Program (TPP). The weekend-long, interdisciplinary event — now in its sixth year — continues to gather hundreds of participants from around the globe to explore potential solutions to some of society’s greatest challenges.

    This year’s theme, “Hack-GPT: Generating the Policy of Tomorrow,” sought to capitalize on the popularity of generative AI (like the chatbot ChatGPT) and the ways it is changing how we think about technical and policy-based challenges, according to Dansil Green, a second-year TPP master’s student and co-chair of the event.

    “We encouraged our teams to utilize and cite these tools, thinking about the implications that generative AI tools have on their different challenge categories,” Green says.

    After 2022’s hybrid event, this year’s organizers pivoted back to a virtual-only approach, allowing them to increase the overall number of participants and to raise the number of teams per challenge by 20 percent.

    “Virtual allows you to reach more people — we had a high number of international participants this year — and it helps reduce some of the costs,” Green says. “I think going forward we are going to try and switch back and forth between virtual and in-person because there are different benefits to each.”

    “When the magic hits”

    Liu and Peake competed in the housing challenge category, where they could gain research experience in their actual field of study. 

    “While I am doing housing research, I haven’t necessarily had a lot of opportunities to work with actual housing data before,” says Peake, who recently joined the SES doctoral program after completing an undergraduate degree in applied math last year. “It was a really good experience to get involved with an actual data problem, working closer with Eric, who’s also in my lab group, in addition to meeting people from MIT and around the world who are interested in tackling similar questions and seeing how they think about things differently.”

    Joined by Adrian Butterton, a Boston-based paralegal, as well as Hudson Yuen and Ian Chan, two software engineers from Canada, Liu and Peake formed what would end up being the winning team in their category: “Team Ctrl+Alt+Defeat.” They quickly began organizing a plan to address the eviction crisis in the United States.

    “I think we were kind of surprised by the scope of the question,” Peake laughs. “In the end, I think having such a large scope motivated us to think about it in a more realistic kind of way — how could we come up with a solution that was adaptable and therefore could be replicated to tackle different kinds of problems.”

    Watching the challenge on the livestream together on campus, Liu says they immediately went to work, and could not believe how quickly things came together.

    “We got our challenge description in the evening, came out to the purple common area in the IDSS building and literally it took maybe an hour and we drafted up the entire project from start to finish,” Liu says. “Then our software engineer partners had a dashboard built by 1 a.m. — I feel like the hackathon really promotes that really fast dynamic work stream.”

    “People always talk about the grind or applying for funding — but when that magic hits, it just reminds you of the part of research that people don’t talk about, and it was really a great experience to have,” Liu adds.

    A fresh perspective

    “We’ve organized hackathons internally at our company and they are great for fostering innovation and creativity,” says Letizia Bordoli, senior AI product manager at Veridos, a Germany-based identity solutions company that provided this year’s challenge in Data Systems for Human Rights. “It is a great opportunity to connect with talented individuals and explore new ideas and solutions that we might not have thought about.”

    The challenge provided by Veridos was focused on finding innovative solutions to universal birth registration, something Bordoli says only benefited from the fact that the hackathon participants were from all over the world.

    “Many had local and firsthand knowledge about certain realities and challenges [posed by the lack of] birth registration,” Bordoli says. “It brings fresh perspectives to existing challenges, and it gave us an energy boost to try to bring innovative solutions that we may not have considered before.”

    New frontiers

    Alongside the housing and data systems for human rights challenges was a challenge in health, as well as a first-time opportunity to tackle an aerospace challenge in the area of space for environmental justice.

    “Space can be a very hard challenge category to do data-wise since a lot of data is proprietary, so this really developed over the last few months with us having to think about how we could do more with open-source data,” Green explains. “But I am glad we went the environmental route because it opened the challenge up to not only space enthusiasts, but also environment and climate people.”

    One of the participants to tackle this new challenge category was Yassine Elhallaoui, a system test engineer from Norway who specializes in AI solutions and has 16 years of experience working in the oil and gas fields. Elhallaoui was a member of Team EcoEquity, which proposed expanding policies that support the use of satellite data to ensure proper evaluation and increase water resiliency for vulnerable communities.

    “The hackathons I have participated in in the past were more technical,” Elhallaoui says. “Starting with [MIT Science and Technology Policy Institute Director Kristen Kulinowski’s] workshop about policy writers and the solutions they came up with, and the analysis they had to do … it really changed my perspective on what a hackathon can do.”

    “A policy hackathon is something that can make real changes in the world,” she adds.

  • Leveraging language to understand machines

    Natural language conveys ideas, actions, information, and intent through context and syntax; further, there are volumes of it contained in databases. This makes it an excellent source of data to train machine-learning systems on. Two Master of Engineering students in the 6A MEng Thesis Program at MIT, Irene Terpstra ’23 and Rujul Gandhi ’22, are working with mentors in the MIT-IBM Watson AI Lab to use this power of natural language to build AI systems.

    As computing becomes more advanced, researchers are looking to improve the hardware it runs on; this means innovating to create new computer chips. And, since there is literature already available on modifications that can be made to achieve certain parameters and performance, Terpstra and her mentors and advisors Anantha Chandrakasan, MIT School of Engineering dean and the Vannevar Bush Professor of Electrical Engineering and Computer Science, and IBM researcher Xin Zhang, are developing an AI algorithm that assists in chip design.

    “I’m creating a workflow to systematically analyze how these language models can help the circuit design process. What reasoning powers do they have, and how can it be integrated into the chip design process?” says Terpstra. “And then on the other side, if that proves to be useful enough, [we’ll] see if they can automatically design the chips themselves, attaching it to a reinforcement learning algorithm.”

    To do this, Terpstra’s team is creating an AI system that can iterate on different designs. This means experimenting with various pre-trained large language models (like ChatGPT, Llama 2, and Bard), an open-source circuit simulator language called NGspice, which holds the parameters of the chip in code form, and a reinforcement learning algorithm. With text prompts, researchers will be able to query the language model for how the physical chip should be modified to achieve a certain goal and produce guidance for adjustments. That guidance is then transferred into a reinforcement learning algorithm that updates the circuit design and outputs new physical parameters of the chip.
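
    To make that loop concrete, here is a minimal sketch of the kind of iteration described above: a language model proposes parameter adjustments, a circuit simulator scores them, and the best-scoring parameters are kept. The function names, the parameters, and the greedy accept rule are illustrative stand-ins, not the team’s actual implementation, which pairs NGspice with a full reinforcement learning algorithm.

```python
# Hypothetical sketch of an LLM-in-the-loop circuit tuning cycle.
# query_llm and simulate_circuit are stand-ins for a real language-model API
# and an NGspice run; the greedy accept step stands in for an RL update.
import random


def query_llm(prompt: str) -> dict:
    """Stand-in for a call to a pretrained language model.
    Here it simply proposes a small random tweak to each parameter."""
    return {"width_scale": random.uniform(0.9, 1.1),
            "bias_scale": random.uniform(0.9, 1.1)}


def simulate_circuit(params: dict) -> float:
    """Stand-in for a circuit simulation; returns a reward (higher is better).
    A real implementation would write a netlist and parse simulator output."""
    target = {"width_scale": 1.05, "bias_scale": 0.95}
    return -sum((params[k] - target[k]) ** 2 for k in params)


def optimize(steps: int = 20) -> dict:
    params = {"width_scale": 1.0, "bias_scale": 1.0}
    best_reward = simulate_circuit(params)
    for _ in range(steps):
        prompt = f"Current parameters {params}; suggest scaling factors to improve performance."
        suggestion = query_llm(prompt)                       # guidance from the language model
        candidate = {k: params[k] * suggestion[k] for k in params}
        reward = simulate_circuit(candidate)                 # score the adjusted design
        if reward > best_reward:                             # greedy accept, in place of a full RL update
            params, best_reward = candidate, reward
    return params


if __name__ == "__main__":
    print(optimize())
```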

    “The final goal would be to combine the reasoning powers and the knowledge base that is baked into these large language models and combine that with the optimization power of the reinforcement learning algorithms and have that design the chip itself,” says Terpstra.

    Rujul Gandhi works with the raw language itself. As an undergraduate at MIT, Gandhi explored linguistics and computer science, putting them together in her MEng work. “I’ve been interested in communication, both between just humans and between humans and computers,” Gandhi says.

    Robots or other interactive AI systems are one area where communication needs to be understood by both humans and machines. Researchers often write instructions for robots using formal logic. This helps ensure that commands are being followed safely and as intended, but formal logic can be difficult for users to understand, while natural language comes easily. To ensure this smooth communication, Gandhi and her advisors Yang Zhang of IBM and MIT assistant professor Chuchu Fan are building a parser that converts natural language instructions into a machine-friendly form. Leveraging the linguistic structure encoded by the pre-trained encoder-decoder model T5, and a dataset of annotated, basic English commands for performing certain tasks, Gandhi’s system identifies the smallest logical units, or atomic propositions, which are present in a given instruction.

    “Once you’ve given your instruction, the model identifies all the smaller sub-tasks you want it to carry out,” Gandhi says. “Then, using a large language model, each sub-task can be compared against the available actions and objects in the robot’s world, and if any sub-task can’t be carried out because a certain object is not recognized, or an action is not possible, the system can stop right there to ask the user for help.”

    This approach of breaking instructions into sub-tasks also allows her system to understand logical dependencies expressed in English, like, “do task X until event Y happens.” Gandhi uses a dataset of step-by-step instructions across robot task domains like navigation and manipulation, with a focus on household tasks. Using data that are written just the way humans would talk to each other has many advantages, she says, because it means a user can be more flexible about how they phrase their instructions.
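
    As a rough illustration of that flow, the toy sketch below splits an instruction into sub-tasks, checks each one against the robot’s known actions and objects, and stops to ask the user for help when something is unrecognized. The real system relies on a fine-tuned T5 model and formal logic; the rule-based splitter and the vocabulary here are hypothetical stand-ins.

```python
# Toy illustration of instruction decomposition and grounding checks.
# The real parser is learned; this simple conjunction splitter is a stand-in.
KNOWN_ACTIONS = {"pick up", "put", "go to"}
KNOWN_OBJECTS = {"cup", "table", "kitchen"}


def split_subtasks(instruction: str) -> list[str]:
    """Stand-in for the learned parser: split on simple conjunctions."""
    parts = instruction.lower().replace(", then ", " and ").split(" and ")
    return [p.strip() for p in parts if p.strip()]


def check_subtask(subtask: str) -> str | None:
    """Return a question for the user if the sub-task uses an unknown action or object."""
    if not any(subtask.startswith(a) for a in KNOWN_ACTIONS):
        return f"I don't know how to '{subtask}'. Can you rephrase?"
    if not any(obj in subtask for obj in KNOWN_OBJECTS):
        return f"I don't recognize the object in '{subtask}'. Can you help?"
    return None


def execute(instruction: str) -> None:
    for subtask in split_subtasks(instruction):
        problem = check_subtask(subtask)
        if problem:
            print(problem)        # stop and ask the user instead of failing silently
            return
        print(f"Executing: {subtask}")


execute("go to the kitchen and pick up the cup, then put the cup on the table")
```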

    Another of Gandhi’s projects involves developing speech models. In the context of speech recognition, some languages are considered “low resource” since they might not have a lot of transcribed speech available, or might not have a written form at all. “One of the reasons I applied to this internship at the MIT-IBM Watson AI Lab was an interest in language processing for low-resource languages,” she says. “A lot of language models today are very data-driven, and when it’s not that easy to acquire all of that data, that’s when you need to use the limited data efficiently.” 

    Speech is just a stream of sound waves, but humans having a conversation can easily figure out where words and thoughts start and end. In speech processing, both humans and language models use their existing vocabulary to recognize word boundaries and understand the meaning. In low- or no-resource languages, a written vocabulary might not exist at all, so researchers can’t provide one to the model. Instead, the model can make note of what sound sequences occur together more frequently than others, and infer that those might be individual words or concepts. In Gandhi’s research group, these inferred words are then collected into a pseudo-vocabulary that serves as a labeling method for the low-resource language, creating labeled data for further applications.
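
    As a minimal, hypothetical sketch of that idea, the snippet below treats the input as a stream of sound units, repeatedly merges the most frequently co-occurring adjacent pair, and collects the merged chunks as a pseudo-vocabulary. Real systems operate on learned acoustic units rather than letters; this byte-pair-style toy only illustrates the principle.

```python
# Toy pseudo-vocabulary induction: frequently co-occurring adjacent units are
# merged into larger chunks, which stand in for inferred words or concepts.
from collections import Counter


def most_frequent_pair(units: list[str]) -> tuple[str, str]:
    pairs = Counter(zip(units, units[1:]))
    return max(pairs, key=pairs.get)


def merge_pair(units: list[str], pair: tuple[str, str]) -> list[str]:
    merged, i = [], 0
    while i < len(units):
        if i + 1 < len(units) and (units[i], units[i + 1]) == pair:
            merged.append(units[i] + units[i + 1])
            i += 2
        else:
            merged.append(units[i])
            i += 1
    return merged


# Letters stand in for acoustic units; repeated sequences emerge as chunks.
stream = list("thedogsawthedogeatthedog")
vocab = []
for _ in range(5):                      # a handful of merge steps
    pair = most_frequent_pair(stream)
    stream = merge_pair(stream, pair)
    vocab.append("".join(pair))
print(vocab)   # frequently co-occurring chunks become pseudo-vocabulary entries
```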

    The applications for language technology are “pretty much everywhere,” Gandhi says. “You could imagine people being able to interact with software and devices in their native language, their native dialect. You could imagine improving all the voice assistants that we use. You could imagine it being used for translation or interpretation.”

  • Three MIT students selected as inaugural MIT-Pillar AI Collective Fellows

    The MIT-Pillar AI Collective has announced three inaugural fellows for the fall 2023 semester. With support from the program, the graduate students, who are in their final year of a master’s or PhD program, will conduct research in the areas of artificial intelligence, machine learning, and data science with the aim of commercializing their innovations.

    Launched by MIT’s School of Engineering and Pillar VC in 2022, the MIT-Pillar AI Collective supports faculty, postdocs, and students conducting research on AI, machine learning, and data science. Supported by a gift from Pillar VC and administered by the MIT Deshpande Center for Technological Innovation, the mission of the program is to advance research toward commercialization.

    The fall 2023 MIT-Pillar AI Collective Fellows are:

    Alexander Andonian SM ’21 is a PhD candidate in electrical engineering and computer science whose research interests lie in computer vision, deep learning, and artificial intelligence. More specifically, he is focused on building a generalist, multimodal AI scientist driven by generative vision-language model agents capable of proposing scientific hypotheses, running computational experiments, evaluating supporting evidence, and verifying conclusions in the same way as a human researcher or reviewer. Such an agent could be trained to optimally distill and communicate its findings for human consumption and comprehension. Andonian’s work holds the promise of creating a concrete foundation for rigorously building and holistically testing the next-generation autonomous AI agent for science. In addition to his research, Andonian is the CEO and co-founder of Reelize, a startup that offers a generative AI video tool that effortlessly turns long videos into short clips; the startup originated from his business coursework and was supported by MIT Sandbox. Andonian is also a founding AI researcher at Poly AI, an early-stage YC-backed startup building AI design tools. Andonian earned an SM from MIT and a BS in neuroscience, physics, and mathematics from Bates College.

    Daniel Magley is a PhD candidate in the Harvard-MIT Program in Health Sciences and Technology who is passionate about making a healthy, fully functioning mind and body a reality for all. His leading-edge research is focused on developing a swallowable wireless thermal imaging capsule that could be used in treating and monitoring inflammatory bowel diseases and their manifestations, such as Crohn’s disease. Providing increased sensitivity and eliminating the need for bowel preparation, the capsule has the potential to vastly improve treatment efficacy and overall patient experience in routine monitoring. The capsule has completed animal studies and is entering human studies at Mass General Brigham, where Magley leads a team of engineers in the hospital’s largest translational research lab, the Tearney Lab. Following the human pilot studies, the largest technological and regulatory risks will be cleared for translation. Magley will then begin focusing on a multi-site study to get the device into clinics, with the promise of benefiting patients across the country. Magley earned a BS in electrical engineering from Caltech.

    Madhumitha Ravichandran is a PhD candidate interested in advancing heat transfer and surface engineering techniques to enhance the safety and performance of nuclear energy systems and reduce their environmental impacts. Leveraging her deep knowledge of the integration of explainable AI with high-throughput autonomous experimentation, she seeks to transform the development of radiation-hardened (rad-hard) sensors, which could potentially withstand and function amidst radiation levels that would render conventional sensors useless. By integrating explainable AI with high-throughput autonomous experimentation, she aims to rapidly iterate designs, test under varied conditions, and ensure that the final product is both robust and transparent in its operations. Her work in this space could shift the paradigm in rad-hard sensor development, addressing a glaring void in the market and redefining standards, ensuring that nuclear and space applications are safer, more efficient, and at the cutting edge of technological progress. Ravichandran earned a BTech in mechanical engineering from SASTRA University, India.

  • 2023-24 Takeda Fellows: Advancing research at the intersection of AI and health

    The School of Engineering has selected 13 new Takeda Fellows for the 2023-24 academic year. With support from Takeda, the graduate students will conduct pathbreaking research ranging from remote health monitoring for virtual clinical trials to ingestible devices for at-home, long-term diagnostics.

    Now in its fourth year, the MIT-Takeda Program, a collaboration between MIT’s School of Engineering and Takeda, fuels the development and application of artificial intelligence capabilities to benefit human health and drug development. Part of the Abdul Latif Jameel Clinic for Machine Learning in Health, the program coalesces disparate disciplines, merges theory and practical implementation, combines algorithm and hardware innovations, and creates multidimensional collaborations between academia and industry.

    The 2023-24 Takeda Fellows are:

    Adam Gierlach

    Adam Gierlach is a PhD candidate in the Department of Electrical Engineering and Computer Science. Gierlach’s work combines innovative biotechnology with machine learning to create ingestible devices for advanced diagnostics and delivery of therapeutics. In his previous work, Gierlach developed a non-invasive, ingestible device for long-term gastric recordings in free-moving patients. With the support of a Takeda Fellowship, he will build on this pathbreaking work by developing smart, energy-efficient, ingestible devices powered by application-specific integrated circuits for at-home, long-term diagnostics. These revolutionary devices — capable of identifying, characterizing, and even correcting gastrointestinal diseases — represent the leading edge of biotechnology. Gierlach’s innovative contributions will help to advance fundamental research on the enteric nervous system and help develop a better understanding of gut-brain axis dysfunctions in Parkinson’s disease, autism spectrum disorder, and other prevalent disorders and conditions.

    Vivek Gopalakrishnan

    Vivek Gopalakrishnan is a PhD candidate in the Harvard-MIT Program in Health Sciences and Technology. Gopalakrishnan’s goal is to develop biomedical machine-learning methods to improve the study and treatment of human disease. Specifically, he employs computational modeling to advance new approaches for minimally invasive, image-guided neurosurgery, offering a safe alternative to open brain and spinal procedures. With the support of a Takeda Fellowship, Gopalakrishnan will develop real-time computer vision algorithms that deliver high-quality, 3D intraoperative image guidance by extracting and fusing information from multimodal neuroimaging data. These algorithms could allow surgeons to reconstruct 3D neurovasculature from X-ray angiography, thereby enhancing the precision of device deployment and enabling more accurate localization of healthy versus pathologic anatomy.

    Hao He

    Hao He is a PhD candidate in the Department of Electrical Engineering and Computer Science. His research interests lie at the intersection of generative AI, machine learning, and their applications in medicine and human health, with a particular emphasis on passive, continuous, remote health monitoring to support virtual clinical trials and health-care management. More specifically, He aims to develop trustworthy AI models that promote equitable access and deliver fair performance independent of race, gender, and age. In his past work, He has developed monitoring systems applied in clinical studies of Parkinson’s disease, Alzheimer’s disease, and epilepsy. Supported by a Takeda Fellowship, He will develop a novel technology for the passive monitoring of sleep stages (using radio signaling) that seeks to address existing gaps in performance across different demographic groups. His project will tackle the problem of imbalance in available datasets and account for intrinsic differences across subpopulations, using generative AI and multi-modality/multi-domain learning, with the goal of learning robust features that are invariant to different subpopulations. He’s work holds great promise for delivering advanced, equitable health-care services to all people and could significantly impact health care and AI.

    Chengyi Long

    Chengyi Long is a PhD candidate in the Department of Civil and Environmental Engineering. Long’s interdisciplinary research integrates the methodology of physics, mathematics, and computer science to investigate questions in ecology. Specifically, Long is developing a series of potentially groundbreaking techniques to explain and predict the temporal dynamics of ecological systems, including human microbiota, which are essential subjects in health and medical research. His current work, supported by a Takeda Fellowship, is focused on developing a conceptual, mathematical, and practical framework to understand the interplay between external perturbations and internal community dynamics in microbial systems, which may serve as a key step toward finding bio solutions to health management. A broader perspective of his research is to develop AI-assisted platforms to anticipate the changing behavior of microbial systems, which may help to differentiate between healthy and unhealthy hosts and design probiotics for the prevention and mitigation of pathogen infections. By creating novel methods to address these issues, Long’s research has the potential to offer powerful contributions to medicine and global health.

    Omar Mohd

    Omar Mohd is a PhD candidate in the Department of Electrical Engineering and Computer Science. Mohd’s research is focused on developing new technologies for the spatial profiling of microRNAs, with potentially important applications in cancer research. Through innovative combinations of micro-technologies and AI-enabled image analysis to measure the spatial variations of microRNAs within tissue samples, Mohd hopes to gain new insights into drug resistance in cancer. This work, supported by a Takeda Fellowship, falls within the emerging field of spatial transcriptomics, which seeks to understand cancer and other diseases by examining the relative locations of cells and their contents within tissues. The ultimate goal of Mohd’s current project is to find multidimensional patterns in tissues that may have prognostic value for cancer patients. One valuable component of his work is an open-source AI program developed with collaborators at Beth Israel Deaconess Medical Center and Harvard Medical School to auto-detect cancer epithelial cells from other cell types in a tissue sample and to correlate their abundance with the spatial variations of microRNAs. Through his research, Mohd is making innovative contributions at the interface of microsystem technology, AI-based image analysis, and cancer treatment, which could significantly impact medicine and human health.

    Sanghyun Park

    Sanghyun Park is a PhD candidate in the Department of Mechanical Engineering. Park specializes in the integration of AI and biomedical engineering to address complex challenges in human health. Drawing on his expertise in polymer physics, drug delivery, and rheology, his research focuses on the pioneering field of in-situ forming implants (ISFIs) for drug delivery. Supported by a Takeda Fellowship, Park is currently developing an injectable formulation designed for long-term drug delivery. The primary goal of his research is to unravel the compaction mechanism of drug particles in ISFI formulations through comprehensive modeling and in-vitro characterization studies utilizing advanced AI tools. He aims to gain a thorough understanding of this unique compaction mechanism and apply it to drug microcrystals to achieve properties optimal for long-term drug delivery. Beyond these fundamental studies, Park’s research also focuses on translating this knowledge into practical applications in a clinical setting through animal studies specifically aimed at extending drug release duration and improving mechanical properties. The innovative use of AI in developing advanced drug delivery systems, coupled with Park’s valuable insights into the compaction mechanism, could contribute to improving long-term drug delivery. This work has the potential to pave the way for effective management of chronic diseases, benefiting patients, clinicians, and the pharmaceutical industry.

    Huaiyao Peng

    Huaiyao Peng is a PhD candidate in the Department of Biological Engineering. Peng’s research interests are focused on engineered tissue, microfabrication platforms, cancer metastasis, and the tumor microenvironment. Specifically, she is advancing novel AI techniques for the development of pre-cancer organoid models of high-grade serous ovarian cancer (HGSOC), an especially lethal and difficult-to-treat cancer, with the goal of gaining new insights into progression and effective treatments. Peng’s project, supported by a Takeda Fellowship, will be one of the first to use cells from serous tubal intraepithelial carcinoma lesions found in the fallopian tubes of many HGSOC patients. By examining the cellular and molecular changes that occur in response to treatment with small molecule inhibitors, she hopes to identify potential biomarkers and promising therapeutic targets for HGSOC, including personalized treatment options for HGSOC patients, ultimately improving their clinical outcomes. Peng’s work has the potential to bring about important advances in cancer treatment and spur innovative new applications of AI in health care. 

    Priyanka Raghavan

    Priyanka Raghavan is a PhD candidate in the Department of Chemical Engineering. Raghavan’s research interests lie at the frontier of predictive chemistry, integrating computational and experimental approaches to build powerful new predictive tools for societally important applications, including drug discovery. Specifically, Raghavan is developing novel models to predict small-molecule substrate reactivity and compatibility in regimes where little data is available (the most realistic regimes). A Takeda Fellowship will enable Raghavan to push the boundaries of her research, making innovative use of low-data and multi-task machine learning approaches, synthetic chemistry, and robotic laboratory automation, with the goal of creating an autonomous, closed-loop system for the discovery of high-yielding organic small molecules in the context of underexplored reactions. Raghavan’s work aims to identify new, versatile reactions to broaden a chemist’s synthetic toolbox with novel scaffolds and substrates that could form the basis of essential drugs. Her work has the potential for far-reaching impacts in early-stage, small-molecule discovery and could help make the lengthy drug-discovery process significantly faster and cheaper.

    Zhiye Song

    Zhiye “Zoey” Song is a PhD candidate in the Department of Electrical Engineering and Computer Science. Song’s research integrates cutting-edge approaches in machine learning (ML) and hardware optimization to create next-generation, wearable medical devices. Specifically, Song is developing novel approaches for the energy-efficient implementation of ML computation in low-power medical devices, including a wearable ultrasound “patch” that captures and processes images for real-time decision-making capabilities. Her recent work, conducted in collaboration with clinicians, has centered on bladder volume monitoring; other potential applications include blood pressure monitoring, muscle diagnosis, and neuromodulation. With the support of a Takeda Fellowship, Song will build on that promising work and pursue key improvements to existing wearable device technologies, including developing low-compute and low-memory ML algorithms and low-power chips to enable ML on smart wearable devices. The technologies emerging from Song’s research could offer exciting new capabilities in health care, enabling powerful and cost-effective point-of-care diagnostics and expanding individual access to autonomous and continuous medical monitoring.

    Peiqi Wang

    Peiqi Wang is a PhD candidate in the Department of Electrical Engineering and Computer Science. Wang’s research aims to develop machine learning methods for learning and interpretation from medical images and associated clinical data to support clinical decision-making. He is developing a multimodal representation learning approach that aligns knowledge captured in large amounts of medical image and text data to transfer this knowledge to new tasks and applications. Supported by a Takeda Fellowship, Wang will advance this promising line of work to build robust tools that interpret images, learn from sparse human feedback, and reason like doctors, with potentially major benefits to important stakeholders in health care.

    Oscar Wu

    Haoyang “Oscar” Wu is a PhD candidate in the Department of Chemical Engineering. Wu’s research integrates quantum chemistry and deep learning methods to accelerate the process of small-molecule screening in the development of new drugs. By identifying and automating reliable methods for finding transition state geometries and calculating barrier heights for new reactions, Wu’s work could make it possible to conduct the high-throughput ab initio calculations of reaction rates needed to screen the reactivity of large numbers of active pharmaceutical ingredients (APIs). A Takeda Fellowship will support his current project to: (1) develop open-source software for high-throughput quantum chemistry calculations, focusing on the reactivity of drug-like molecules, and (2) develop deep learning models that can quantitatively predict the oxidative stability of APIs. The tools and insights resulting from Wu’s research could help to transform and accelerate the drug-discovery process, offering significant benefits to the pharmaceutical and medical fields and to patients.

    Soojung Yang

    Soojung Yang is a PhD candidate in the Department of Materials Science and Engineering. Yang’s research applies cutting-edge methods in geometric deep learning and generative modeling, along with atomistic simulations, to better understand and model protein dynamics. Specifically, Yang is developing novel tools in generative AI to explore protein conformational landscapes that offer greater speed and detail than physics-based simulations at a substantially lower cost. With the support of a Takeda Fellowship, she will build upon her successful work on the reverse transformation of coarse-grained proteins to the all-atom resolution, aiming to build machine-learning models that bridge multiple size scales of protein conformation diversity (all-atom, residue-level, and domain-level). Yang’s research holds the potential to provide a powerful and widely applicable new tool for researchers who seek to understand the complex protein functions at work in human diseases and to design drugs to treat and cure those diseases.

    Yuzhe Yang

    Yuzhe Yang is a PhD candidate in the Department of Electrical Engineering and Computer Science. Yang’s research interests lie at the intersection of machine learning and health care. In his past and current work, Yang has developed and applied innovative machine-learning models that address key challenges in disease diagnosis and tracking. His many notable achievements include the creation of one of the first machine learning-based solutions using nocturnal breathing signals to detect Parkinson’s disease (PD), estimate disease severity, and track PD progression. With the support of a Takeda Fellowship, Yang will expand this promising work to develop an AI-based diagnosis model for Alzheimer’s disease (AD) using sleep-breathing data that is significantly more reliable, flexible, and economical than current diagnostic tools. This passive, in-home, contactless monitoring system — resembling a simple home Wi-Fi router — will also enable remote disease assessment and continuous progression tracking. Yang’s groundbreaking work has the potential to advance the diagnosis and treatment of prevalent diseases like PD and AD, and it offers exciting possibilities for addressing many health challenges with reliable, affordable machine-learning tools.

  • Forging climate connections across the Institute

    Climate change is the ultimate cross-cutting issue: Not limited to any one discipline, it ranges across science, technology, policy, culture, human behavior, and well beyond. The response to it likewise requires an all-of-MIT effort.

    Now, to strengthen such an effort, a new grant program spearheaded by the Climate Nucleus, the faculty committee charged with the oversight and implementation of Fast Forward: MIT’s Climate Action Plan for the Decade, aims to build up MIT’s climate leadership capacity while also supporting innovative scholarship on diverse climate-related topics and forging new connections across the Institute.

    Called the Fast Forward Faculty Fund (F^4 for short), the program has named its first cohort of six faculty members after issuing its inaugural call for proposals in April 2023. The cohort will come together throughout the year for climate leadership development programming and networking. The program provides financial support for graduate students who will work with the faculty members on the projects — the students will also participate in leadership-building activities — as well as $50,000 in flexible, discretionary funding to be used to support related activities. 

    “Climate change is a crisis that truly touches every single person on the planet,” says Noelle Selin, co-chair of the nucleus and interim director of the Institute for Data, Systems, and Society. “It’s therefore essential that we build capacity for every member of the MIT community to make sense of the problem and help address it. Through the Fast Forward Faculty Fund, our aim is to have a cohort of climate ambassadors who can embed climate everywhere at the Institute.”

    F^4 supports both faculty who would like to begin doing climate-related work and faculty members who are interested in deepening their work on climate. The program has the core goal of developing cohorts of F^4 faculty and graduate students who, in addition to conducting their own research, will become climate leaders at MIT, proactively looking for ways to forge new climate connections across schools, departments, and disciplines.

    One of the projects, “Climate Crisis and Real Estate: Science-based Mitigation and Adaptation Strategies,” led by Professor Siqi Zheng of the MIT Center for Real Estate in collaboration with colleagues from the MIT Sloan School of Management, focuses on the roughly 40 percent of carbon dioxide emissions that come from the buildings and real estate sector. Zheng notes that this sector has been slow to respond to climate change, but says that is starting to change, thanks in part to the rising awareness of climate risks and new local regulations aimed at reducing emissions from buildings.

    Using a data-driven approach, the project seeks to understand the efficient and equitable market incentives, technology solutions, and public policies that are most effective at transforming the real estate industry. Johnattan Ontiveros, a graduate student in the Technology and Policy Program, is working with Zheng on the project.

    “We were thrilled at the incredible response we received from the MIT faculty to our call for proposals, which speaks volumes about the depth and breadth of interest in climate at MIT,” says Anne White, nucleus co-chair and vice provost and associate vice president for research. “This program makes good on key commitments of the Fast Forward plan, supporting cutting-edge new work by faculty and graduate students while helping to deepen the bench of climate leaders at MIT.”

    During the 2023-24 academic year, the F^4 faculty and graduate student cohorts will come together to discuss their projects, explore opportunities for collaboration, participate in climate leadership development, and think proactively about how to deepen interdisciplinary connections among MIT community members interested in climate change.

    The six inaugural F^4 awardees are:

    Professor Tristan Brown, History Section: Humanistic Approaches to the Climate Crisis  

    With this project, Brown aims to create a new community of practice around narrative-centric approaches to environmental and climate issues. Part of a broader humanities initiative at MIT, it brings together a global working group of interdisciplinary scholars, including Serguei Saavedra (Department of Civil and Environmental Engineering) and Or Porath (Tel Aviv University; Religion), collectively focused on examining the historical and present links between sacred places and biodiversity for the purposes of helping governments and nongovernmental organizations formulate better sustainability goals. Boyd Ruamcharoen, a PhD student in the History, Anthropology, and Science, Technology, and Society (HASTS) program, will work with Brown on this project.

    Professor Kerri Cahoy, departments of Aeronautics and Astronautics (AeroAstro) and Earth, Atmospheric, and Planetary Sciences: Onboard Autonomous AI-driven Satellite Sensor Fusion for Coastal Region Monitoring

    The motivation for this project is the need for much better data collection from satellites, where technology can be “20 years behind,” says Cahoy. As part of this project, Cahoy will pursue research in the area of autonomous artificial intelligence-enabled rapid sensor fusion (which combines data from different sensors, such as radar and cameras) onboard satellites to improve understanding of the impacts of climate change, specifically sea-level rise and hurricanes and flooding in coastal regions. Graduate students Madeline Anderson, a PhD student in electrical engineering and computer science (EECS), and Mary Dahl, a PhD student in AeroAstro, will work with Cahoy on this project.

    Professor Priya Donti, Department of Electrical Engineering and Computer Science: Robust Reinforcement Learning for High-Renewables Power Grids 

    With renewables like wind and solar making up a growing share of electricity generation on power grids, Donti’s project focuses on improving control methods for these distributed sources of electricity. The research will aim to create a realistic representation of the characteristics of power grid operations, and eventually inform scalable operational improvements in power systems. It will “give power systems operators faith that, OK, this conceptually is good, but it also actually works on this grid,” says Donti. PhD candidate Ana Rivera from EECS is the F^4 graduate student on the project.

    Professor Jason Jackson, Department of Urban Studies and Planning (DUSP): Political Economy of the Climate Crisis: Institutions, Power and Global Governance

    This project takes a political economy approach to the climate crisis, offering a distinct lens to examine, first, the political governance challenge of mobilizing climate action and designing new institutional mechanisms to address the global and intergenerational distributional aspects of climate change; second, the economic challenge of devising new institutional approaches to equitably finance climate action; and third, the cultural challenge — and opportunity — of empowering an adaptive socio-cultural ecology through traditional knowledge and local-level social networks to achieve environmental resilience. Graduate students Chen Chu and Mrinalini Penumaka, both PhD students in DUSP, are working with Jackson on the project.

    Professor Haruko Wainwright, departments of Nuclear Science and Engineering (NSE) and Civil and Environmental Engineering: Low-cost Environmental Monitoring Network Technologies in Rural Communities for Addressing Climate Justice 

    This project will establish a community-based climate and environmental monitoring network, along with a data visualization and analysis infrastructure, in rural marginalized communities to better understand and address climate justice issues. The project team plans to work with rural communities in Alaska to install low-cost air quality, water quality, weather, and soil sensors. Graduate students Kay Whiteaker, an MS candidate in NSE, and Amandeep Singh, an MS candidate in System Design and Management at Sloan, are working with Wainwright on the project, as is David McGee, professor in the Department of Earth, Atmospheric, and Planetary Sciences.

    Professor Siqi Zheng, MIT Center for Real Estate and DUSP: Climate Crisis and Real Estate: Science-based Mitigation and Adaptation Strategies 

    See the text above for the details on this project.

    Improving accessibility of online graphics for blind users

    The beauty of a nice infographic published alongside a news or magazine story is that it makes numeric data more accessible to the average reader. But for blind and visually impaired users, such graphics often have the opposite effect.

    For visually impaired users — who frequently rely on screen-reading software that speaks words or numbers aloud as the user moves a cursor across the screen — a graphic may be nothing more than a few words of alt text, such as a chart’s title. For instance, a map of the United States displaying population rates by county might have alt text in the HTML that says simply, “A map of the United States with population rates by county.” The data has been buried in an image, making it entirely inaccessible.
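    As a rough illustration of that gap, the sketch below (in TypeScript against the browser DOM) shows what assistive technology can actually reach on such a page; the filename and figures are hypothetical:

    ```typescript
    // Minimal sketch: what a screen reader can reach from a static chart image.
    // The filename and data below are hypothetical, for illustration only.
    const img = document.createElement("img");
    img.src = "us-population-by-county.png"; // hypothetical image path
    img.alt = "A map of the United States with population rates by county";
    document.body.appendChild(img);

    // A screen reader announces only img.alt. The county-level figures rendered
    // into the pixels (e.g., { county: "Suffolk, MA", rate: 1234 }) are unreachable.
    ```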

    “Charts have these various visual features that, as a [sighted] reader, you can shift your attention around, look at high-level patterns, look at individual data points, and you can do this on the fly,” says Jonathan Zong, a 2022 MIT Morningside Academy for Design (MAD) Fellow and PhD student in computer science, who points out that even when a graphic includes alt text that interprets the data, the visually impaired user must accept the findings as presented.

    “If you’re [blind and] using a screen reader, the text description imposes a linear predefined reading order. So, you’re beholden to the decisions that the person who wrote the text made about what information was important to include.”

    While some graphics do include data tables that a screen reader can read, using them requires the user to remember all the data from each row and column as they move on to the next. According to the National Federation of the Blind, Zong says, there are 7 million people living in the United States with visual disabilities, and nearly 97 percent of top-level pages on the internet are not accessible to screen readers. The problem, he points out, is an especially difficult one for blind researchers to get around. Some researchers with visual impairments rely on a sighted collaborator to read and help interpret graphics in peer-reviewed research.

    Working with the Visualization Group at the Computer Science and Artificial Intelligence Laboratory (CSAIL) on a project led by Associate Professor Arvind Satyanarayan that includes Daniel Hajas, a blind researcher and innovation manager at the Global Disability Innovation Hub in England, Zong and others have written an open-source JavaScript program named Olli that solves this problem when it's included on a website. Olli can move from a big-picture summary of a chart down to the finest grain of detail, giving the user the ability to select the degree of granularity that interests them.

    “We want to design richer screen-reader experiences for visualization with a hierarchical structure, multiple ways to navigate, and descriptions at varying levels of granularity to provide self-guided, open-ended exploration for the user.”
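    One way to picture that hierarchy is as a tree of descriptions the reader can step into at will. The sketch below is only illustrative; the types and example values are assumptions, not Olli's actual API:

    ```typescript
    // Hedged sketch of a hierarchical, navigable chart description.
    // Illustrative only; this is not Olli's real data model or API.
    interface DescriptionNode {
      description: string;          // what the screen reader announces at this level
      children?: DescriptionNode[]; // finer-grained levels the user can step into
    }

    const chart: DescriptionNode = {
      description: "Line chart of average monthly temperature, January to December.",
      children: [
        {
          description: "X axis: month, 12 categorical values.",
          children: [
            { description: "January through June: values rise from 2°C to 20°C." },
            { description: "July through December: values fall from 24°C to 4°C." },
          ],
        },
        { description: "Y axis: average temperature in °C, range 2 to 24." },
      ],
    };

    // A reader can stop at the one-sentence summary, or drill down level by level,
    // rather than being handed a single fixed, linear description.
    ```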

    Next steps for Olli include multi-sensory features that pair text and visuals with sound, such as a musical note that moves up or down a scale to indicate the direction of data on a line graph, and possibly even tactile representations of data. Like most of the MAD Fellows, Zong integrates his science and engineering skills with design and art to create solutions to real-world problems affecting individuals. He's been recognized for his work in both the visual arts and computer science. He holds undergraduate degrees in computer science and visual arts with a focus on graphic design from Princeton University, where his research was on the ethics of data collection.
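    As a rough sketch of the pitch-based sonification Zong describes, the snippet below maps data values to oscillator frequency using the browser's Web Audio API; the mapping and numbers are illustrative assumptions, not Olli's implementation:

    ```typescript
    // Hedged sketch: sonify a data series by mapping values to pitch.
    // Assumes a browser with the Web Audio API; illustrative only.
    function sonify(values: number[], secondsPerPoint = 0.25): void {
      const ctx = new AudioContext();
      const min = Math.min(...values);
      const max = Math.max(...values);

      values.forEach((value, i) => {
        // Map each value linearly onto a pitch range of roughly C4 to C6.
        const t = max === min ? 0.5 : (value - min) / (max - min);
        const freq = 261.63 + t * (1046.5 - 261.63);
        const startAt = ctx.currentTime + i * secondsPerPoint;

        const osc = ctx.createOscillator();
        osc.frequency.setValueAtTime(freq, startAt);
        osc.connect(ctx.destination);
        osc.start(startAt);
        osc.stop(startAt + secondsPerPoint);
      });
    }

    // Rising values produce rising pitch, so an upward trend is audible at a glance.
    sonify([3, 5, 8, 13, 21]);
    ```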

    “The throughline is the idea that design can help us make progress on really tough social and ethical questions,” Zong says, calling software for accessible data visualization an “intellectually rich area for design.” “We’re thinking about ways to translate charts and graphs into text descriptions that can get read aloud as speech, or thinking about other kinds of audio mappings to sonify data, and we’re even exploring some tactile methods to understand data,” he says.

    “I get really excited about design when it’s a way to both create things that are useful to people in everyday life and also make progress on larger conversations about technology and society. I think working in accessibility is a great way to do that.”

    Another problem at the intersection of technology and society is the ethics of taking user data from social media for large-scale studies without the users' awareness. While working as a summer graduate research fellow at Cornell's Citizens and Technology Lab, Zong helped create open-source software called Bartleby that can be used in large anonymous data research studies. After researchers collect data, but before analysis, Bartleby automatically sends an email message to every user whose data was included, alerting them to that fact and offering them the choice to review the resulting data table and opt out of the study. Bartleby was honored in the student category of Fast Company's Innovation by Design Awards for 2022. In November of the same year, Forbes magazine named Zong to its Forbes 30 Under 30 in Science 2023 list for his work in data visualization accessibility.
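    A minimal sketch of that debriefing-and-opt-out flow might look like the following; the record shape, email helper, and URL are hypothetical and are not Bartleby's actual code:

    ```typescript
    // Hedged sketch of a Bartleby-style debriefing step, run after data collection
    // and before analysis. All names and the URL below are hypothetical.
    interface CollectedRecord {
      userId: string;
      email: string;
      optedOut: boolean;
    }

    // Placeholder email helper; a real deployment would call a mail service here.
    async function sendEmail(to: string, subject: string, body: string): Promise<void> {
      console.log(`To: ${to}\nSubject: ${subject}\n\n${body}`);
    }

    // Notify every user whose data was collected and point them to a review page.
    async function debriefParticipants(records: CollectedRecord[]): Promise<void> {
      for (const record of records) {
        await sendEmail(
          record.email,
          "Your data was included in a research study",
          "Review the data we collected about you, and opt out before analysis begins: " +
            `https://example.org/review/${record.userId}` // placeholder URL
        );
      }
    }

    // Analysis proceeds only on records whose owners did not opt out.
    function recordsForAnalysis(records: CollectedRecord[]): CollectedRecord[] {
      return records.filter((record) => !record.optedOut);
    }
    ```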

    The underlying theme of all Zong's work is the exploration of autonomy and agency, even in his artwork, which makes heavy use of text and semiotic play. In “Public Display,” he created a handmade digital display font by erasing parts of celebrity faces taken from a facial recognition dataset. The piece was exhibited in 2020 in MIT's Wiesner Gallery and received the third-place prize in the MIT Schnitzer Prize in the Visual Arts that year. The work deals not only with the neurological aspects of distinguishing faces from typefaces, but also with the way individuals' identities can be erased by facial recognition programs, which are often used unfairly against communities of color. Another of his works, “Biometric Sans,” a typography system that stretches letters based on a person's typing speed, will be included in a show at the Harvard Science Center next fall.

    “MAD, particularly the large events MAD jointly hosted, played a really important function in showing the rest of MIT that this is the kind of work we value. This is what design can look like and is capable of doing. I think it all contributes to that culture shift where this kind of interdisciplinary work can be valued, recognized, and serve the public.

    “There are shared ideas around embodiment and representation that tie these different pursuits together for me,” Zong says. “In the ethics work, and the art on surveillance, I'm thinking about whether data collectors are representing people the way they want to be seen through data. And similarly, the accessibility work is about whether we can make systems that are flexible to the way people want to use them.”