More stories

  • Improving drug development with a vast map of the immune system

    The human immune system is a network made up of trillions of cells that are constantly circulating throughout the body. The cellular network orchestrates interactions with every organ and tissue to carry out an impossibly long list of functions that scientists are still working to understand. All that complexity limits our ability to predict which patients will respond to treatments and which ones might suffer debilitating side effects.

    The issue often leads pharmaceutical companies to stop developing drugs that could help certain patients, halting clinical trials even when drugs show promising results for some people.

    Now, Immunai is helping to predict how patients will respond to treatments by building a comprehensive map of the immune system. The company has assembled a vast database, called AMICA, that combines multiple layers of gene and protein expression data in cells with clinical trial data to match the right drugs to the right patients.

    “Our starting point was creating what I call the Google Maps for the immune system,” Immunai co-founder and CEO Noam Solomon says. “We started with single-cell RNA sequencing, and over time we’ve added more and more ‘omics’: genomics, proteomics, epigenomics, all to measure the immune system’s cellular expression and function, to measure the immune environment holistically. Then we started working with pharmaceutical companies and hospitals to profile the immune systems of patients undergoing treatments to really get to the root mechanisms of action and resistance for therapeutics.”

    Immunai’s big data foundation is a result of its founders’ unique background. Solomon and co-founder Luis Voloch ’13, SM ’15 hold degrees in mathematics and computer science. In fact, Solomon was a postdoc in MIT’s Department of Mathematics at the time of Immunai’s founding.

    Solomon frames Immunai’s mission as stopping the decades-long divergence of computer science and the life sciences. He believes the single biggest factor driving the explosion of computing has been Moore’s Law — our ability to exponentially increase the number of transistors on a chip over the past 60 years. In the pharmaceutical industry, the reverse is happening: By one estimate, the cost of developing a new drug roughly doubles every nine years. The phenomenon has been dubbed Eroom’s Law (“Eroom” for “Moore” spelled backward).
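
    To make the divergence concrete, here is a back-of-the-envelope calculation using only the doubling rates quoted above; the numbers are illustrative, not industry data.

    ```python
    # Illustrative arithmetic only, based on the rough doubling rates quoted above.
    # Moore's Law: transistor counts double roughly every 2 years.
    # Eroom's Law: inflation-adjusted drug development cost doubles roughly every 9 years.

    years = 60
    transistor_growth = 2 ** (years / 2)   # ~1e9x more transistors per chip
    drug_cost_growth = 2 ** (years / 9)    # ~100x higher cost per approved drug

    print(f"Transistors per chip after {years} years: ~{transistor_growth:.1e}x")
    print(f"Cost per new drug after {years} years:    ~{drug_cost_growth:.0f}x")
    ```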

    Solomon sees the trend eroding the case for developing new drugs, with huge consequences for patients.

    “Why should pharmaceutical companies invest in discovery if they won’t get a return on investment?” Solomon asks. “Today, there’s only a 5 to 10 percent chance that any given clinical trial will be successful. What we’ve built through a very robust and granular mapping of the immune system is a chance to improve the preclinical and clinical stages of drug development.”

    A change in plans

    Solomon entered Tel Aviv University when he was 14 and earned his bachelor’s degree in computer science by 19. He earned two PhDs in Israel, one in computer science and the other in mathematics, before coming to MIT in 2017 as a postdoc to continue his mathematical research career.

    That year Solomon met Voloch, who had already earned bachelor’s and master’s degrees in math and computer science from MIT. But the researchers were soon exposed to a problem that would take them out of their comfort zones and change the course of their careers.

    Voloch’s grandfather was receiving a cocktail of treatments for cancer at the time. The cancer went into remission, but he suffered terrible side effects that caused him to stop taking his medication.

    Voloch and Solomon began wondering if their expertise could help patients like Voloch’s grandfather.

    “When we realized we could make an impact, we made the difficult decision to stop our academic pursuits and start a new journey,” Solomon recalls. “That was the starting point for Immunai.”

    Voloch and Solomon soon partnered with Immunai scientific co-founders Ansu Satpathy, a researcher at Stanford University at the time, and Danny Wells, a researcher at the Parker Institute for Cancer Immunotherapy. Satpathy and Wells had shown that single-cell RNA sequencing could be used to gain insights into why patients respond differently to a common cancer treatment.

    The team began analyzing single-cell RNA sequencing data published in scientific papers, trying to link common biomarkers with patient outcomes. Then they integrated data from the United Kingdom’s Biobank public health database, finding they were able to improve their models’ predictions. Soon they were incorporating data from hospitals, academic research institutions, and pharmaceutical companies, analyzing information about the structure, function, and environment of cells — multiomics — to get a clearer picture of immune activity.

    “Single cell sequencing gives you metrics you can measure in thousands of cells, where you can look at 20,000 different genes, and those metrics give you an immune profile,” Solomon explains. “When you measure all of that over time, especially before and after getting therapy, and compare patients who do respond with patients who don’t, you can apply machine learning models to understand why.”
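
    As a rough illustration of the kind of analysis Solomon describes, and not Immunai's actual pipeline, one could summarize single-cell expression into per-patient features and train a classifier to separate responders from non-responders. Everything below (the random data, gene count, and model choice) is a hypothetical placeholder:

    ```python
    # Hypothetical sketch: predict treatment response from per-patient immune profiles.
    # The data, gene panel, and model are placeholders, not Immunai's pipeline.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_patients, n_genes = 200, 500                   # e.g., pseudobulk expression per patient
    X = rng.lognormal(size=(n_patients, n_genes))    # stand-in for averaged single-cell expression
    y = rng.integers(0, 2, size=n_patients)          # 1 = responder, 0 = non-responder

    model = RandomForestClassifier(n_estimators=300, random_state=0)
    print("Cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

    # Genes whose expression best separates the groups hint at mechanisms of response/resistance.
    model.fit(X, y)
    top_genes = np.argsort(model.feature_importances_)[::-1][:10]
    print("Most informative gene indices:", top_genes)
    ```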

    Those data and models make up AMICA, what Immunai calls the world’s largest cell-level immune knowledge base. AMICA stands for Annotated Multiomic Immune Cell Atlas. It analyzes single cell multiomic data from almost 10,000 patients and bulk-RNA data from 100,000 patients across more than 800 cell types and 500 diseases.

    At the core of Immunai’s approach is a focus on the immune system, which other companies shy away from because of its complexity.

    “We don’t want to be like other groups that are studying mainly tumor microenvironments,” Solomon says. “We look at the immune system because the immune system is the common denominator. It’s the one system that is implicated in every disease, in your body’s response to everything that you encounter, whether it’s a viral infection or bacterial infection or a drug that you are receiving — even how you are aging.”

    Turning data into better treatments

    Immunai has already partnered with some of the largest pharmaceutical companies in the world to help them identify promising treatments and set up their clinical trials for success. Immunai’s insights can help partners make critical decisions about treatment schedules, dosing, drug combinations, patient selection, and more.

    “Everyone is talking about AI, but I think the most exciting aspect of the platform we have built is the fact that it’s vertically integrated, from wet lab to computational modeling with multiple iterations,” Solomon says. “For example, we may do single-cell immune profiling of patient samples, then we upload that data to the cloud and our computational models come up with insights, and with those insights we do in vitro or in vivo validation to see if our models are right and iteratively improve them.”

    Ultimately Immunai wants to enable a future where lab experiments can more reliably turn into impactful new recommendations and treatments for patients.

    “Scientists can cure nearly every type of cancer, but only in mice,” Solomon says. “In preclinical models we know how to cure cancer. In human beings, in most cases, we still don’t. To overcome that, most scientists are looking for better ex vivo or in vivo models. Our approach is to be more agnostic as to the model system, but feed the machine with more and more data from multiple model systems. We’re demonstrating that our algorithms can repeatedly beat the top benchmarks in identifying the top preclinical immune features that match to patient outcomes.”

  • This 3D printer can figure out how to print with an unknown material

    While 3D printing has exploded in popularity, many of the plastic materials these printers use to create objects cannot be easily recycled. While new sustainable materials are emerging for use in 3D printing, they remain difficult to adopt because 3D printer settings need to be adjusted for each material, a process generally done by hand.

    To print a new material from scratch, one must typically set as many as 100 parameters in software that controls how the printer will extrude the material as it fabricates an object. Commonly used materials, like mass-manufactured polymers, have established sets of parameters that were perfected through tedious, trial-and-error processes.

    But the properties of renewable and recyclable materials can fluctuate widely based on their composition, so fixed parameter sets are nearly impossible to create. In this case, users must come up with all these parameters by hand.

    Researchers tackled this problem by developing a 3D printer that can automatically identify the parameters of an unknown material on its own.

    A collaborative team from MIT’s Center for Bits and Atoms (CBA), the U.S. National Institute of Standards and Technology (NIST), and the National Center for Scientific Research in Greece (Demokritos) modified the extruder, the “heart” of a 3D printer, so it can measure the forces and flow of a material.

    These data, gathered through a 20-minute test, are fed into a mathematical function that is used to automatically generate printing parameters. These parameters can be entered into off-the-shelf 3D printing software and used to print with a never-before-seen material. 

    The automatically generated parameters can replace about half of the parameters that typically must be tuned by hand. In a series of test prints with unique materials, including several renewable materials, the researchers showed that their method can consistently produce viable parameters.

    This research could help to reduce the environmental impact of additive manufacturing, which typically relies on nonrecyclable polymers and resins derived from fossil fuels.

    “In this paper, we demonstrate a method that can take all these interesting materials that are bio-based and made from various sustainable sources and show that the printer can figure out by itself how to print those materials. The goal is to make 3D printing more sustainable,” says senior author Neil Gershenfeld, who leads CBA.

    His co-authors include first author Jake Read, a graduate student in the CBA who led the printer development; Jonathan Seppala, a chemical engineer in the Materials Science and Engineering Division of NIST; Filippos Tourlomousis, a former CBA postdoc who now heads the Autonomous Science Lab at Demokritos; James Warren, who leads the Materials Genome Program at NIST; and Nicole Bakker, a research assistant at CBA. The research is published in the journal Integrating Materials and Manufacturing Innovation.

    Shifting material properties

    In fused filament fabrication (FFF), which is often used in rapid prototyping, molten polymers are extruded through a heated nozzle layer-by-layer to build a part. Software, called a slicer, provides instructions to the machine, but the slicer must be configured to work with a particular material.

    Using renewable or recycled materials in an FFF 3D printer is especially challenging because there are so many variables that affect the material properties.

    For instance, a bio-based polymer or resin might be composed of different mixes of plants based on the season. The properties of recycled materials also vary widely based on what is available to recycle.

    “In ‘Back to the Future,’ there is a ‘Mr. Fusion’ blender where Doc just throws whatever he has into the blender and it works [as a power source for the DeLorean time machine]. That is the same idea here. Ideally, with plastics recycling, you could just shred what you have and print with it. But, with current feed-forward systems, that won’t work because if your filament changes significantly during the print, everything would break,” Read says.

    To overcome these challenges, the researchers developed a 3D printer and workflow to automatically identify viable process parameters for any unknown material.

    They started with a 3D printer their lab had previously developed that can capture data and provide feedback as it operates. The researchers added three instruments to the machine’s extruder that take measurements which are used to calculate parameters.

    A load cell measures the pressure being exerted on the printing filament, while a feed rate sensor measures the thickness of the filament and the actual rate at which it is being fed through the printer.

    “This fusion of measurement, modeling, and manufacturing is at the heart of the collaboration between NIST and CBA, as we work to develop what we’ve termed ‘computational metrology,’” says Warren.

    These measurements can be used to calculate the two most important, yet difficult to determine, printing parameters: flow rate and temperature. Nearly half of all print settings in standard software are related to these two parameters. 

    Deriving a dataset

    Once they had the new instruments in place, the researchers developed a 20-minute test that generates a series of temperature and pressure readings at different flow rates. Essentially, the test involves setting the print nozzle at its hottest temperature, flowing the material through at a fixed rate, and then turning the heater off.

    “It was really difficult to figure out how to make that test work. Trying to find the limits of the extruder means that you are going to break the extruder pretty often while you are testing it. The notion of turning the heater off and just passively taking measurements was the ‘aha’ moment,” says Read.

    These data are entered into a function that automatically generates real parameters for the material and machine configuration, based on relative temperature and pressure inputs. The user can then enter those parameters into 3D printing software and generate instructions for the printer.
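
    A minimal sketch of that step might look like the following; the pressure threshold, the selection rule, and the parameter names are illustrative assumptions, not the researchers' published function:

    ```python
    # Illustrative sketch: derive slicer-ready flow-rate and temperature settings from the
    # pressure/temperature/flow readings gathered during the 20-minute test.
    # The threshold and parameter names are assumptions, not the published method.
    import numpy as np

    def derive_print_parameters(flow_rates, pressures, temperatures, max_pressure):
        """Pick the highest flow rate whose measured pressure stays below a safe limit,
        and report the nozzle temperature observed at that operating point."""
        flow_rates = np.asarray(flow_rates, dtype=float)
        pressures = np.asarray(pressures, dtype=float)
        temperatures = np.asarray(temperatures, dtype=float)

        ok = pressures <= max_pressure            # operating points the extruder can sustain
        if not ok.any():
            raise ValueError("No viable operating point below the pressure limit.")

        best = np.argmax(flow_rates * ok)         # fastest sustainable flow
        return {
            "volumetric_flow_mm3_s": float(flow_rates[best]),
            "print_temperature_c": float(temperatures[best]),
        }

    # Example readings from a hypothetical cooling sweep (hot start, heater off):
    params = derive_print_parameters(
        flow_rates=[2, 4, 6, 8, 10],
        pressures=[0.8, 1.1, 1.6, 2.4, 3.5],      # arbitrary units from the load cell
        temperatures=[245, 240, 232, 221, 208],
        max_pressure=2.0,
    )
    print(params)
    ```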

    In experiments with six different materials, several of which were bio-based, the method automatically generated viable parameters that consistently led to successful prints of a complex object.

    Moving forward, the researchers plan to integrate this process with 3D printing software so parameters don’t need to be entered manually. In addition, they want to enhance their workflow by incorporating a thermodynamic model of the hot end, which is the part of the printer that melts the filament.

    This collaboration is now more broadly developing computational metrology, in which the output of a measurement is a predictive model rather than just a parameter. The researchers will be applying this in other areas of advanced manufacturing, as well as in expanding access to metrology.

    “By developing a new method for the automatic generation of process parameters for fused filament fabrication, this study opens the door to the use of recycled and bio-based filaments that have variable and unknown behaviors. Importantly, this enhances the potential for digital manufacturing technology to utilize locally sourced sustainable materials,” says Alysia Garmulewicz, an associate professor in the Faculty of Administration and Economics at the University of Santiago in Chile who was not involved with this work.

    This research is supported, in part, by the National Institute of Standards and Technology and the Center for Bits and Atoms Consortia.

  • New software enables blind and low-vision users to create interactive, accessible charts

    A growing number of tools enable users to make online data representations, like charts, that are accessible for people who are blind or have low vision. However, most tools require an existing visual chart that can then be converted into an accessible format.

    This creates barriers that prevent blind and low-vision users from building their own custom data representations, and it can limit their ability to explore and analyze important information.

    A team of researchers from MIT and University College London (UCL) wants to change the way people think about accessible data representations.

    They created a software system called Umwelt (which means “environment” in German) that can enable blind and low-vision users to build customized, multimodal data representations without needing an initial visual chart.

    Umwelt, an authoring environment designed for screen-reader users, incorporates an editor that allows someone to upload a dataset and create a customized representation, such as a scatterplot, that can include three modalities: visualization, textual description, and sonification. Sonification involves converting data into nonspeech audio.

    The system, which can represent a variety of data types, includes a viewer that enables a blind or low-vision user to interactively explore a data representation, seamlessly switching between each modality to interact with data in a different way.

    The researchers conducted a study with five expert screen-reader users who found Umwelt to be useful and easy to learn. In addition to offering an interface that empowered them to create data representations — something they said was sorely lacking — the users said Umwelt could facilitate communication between people who rely on different senses.

    “We have to remember that blind and low-vision people aren’t isolated. They exist in these contexts where they want to talk to other people about data,” says Jonathan Zong, an electrical engineering and computer science (EECS) graduate student and lead author of a paper introducing Umwelt. “I am hopeful that Umwelt helps shift the way that researchers think about accessible data analysis. Enabling the full participation of blind and low-vision people in data analysis involves seeing visualization as just one piece of this bigger, multisensory puzzle.”

    Joining Zong on the paper are fellow EECS graduate students Isabella Pedraza Pineros and Mengzhu “Katie” Chen; Daniel Hajas, a UCL researcher who works with the Global Disability Innovation Hub; and senior author Arvind Satyanarayan, associate professor of computer science at MIT who leads the Visualization Group in the Computer Science and Artificial Intelligence Laboratory. The paper will be presented at the ACM Conference on Human Factors in Computing Systems.

    De-centering visualization

    The researchers previously developed interactive interfaces that provide a richer experience for screen reader users as they explore accessible data representations. Through that work, they realized most tools for creating such representations involve converting existing visual charts.

    Aiming to decenter visual representations in data analysis, Zong and Hajas, who lost his sight at age 16, began co-designing Umwelt more than a year ago.

    At the outset, they realized they would need to rethink how to represent the same data using visual, auditory, and textual forms.

    “We had to put a common denominator behind the three modalities. By creating this new language for representations, and making the output and input accessible, the whole is greater than the sum of its parts,” says Hajas.

    To build Umwelt, they first considered what is unique about the way people use each sense.

    For instance, a sighted user can see the overall pattern of a scatterplot and, at the same time, move their eyes to focus on different data points. But for someone listening to a sonification, the experience is linear since data are converted into tones that must be played back one at a time.

    “If you are only thinking about directly translating visual features into nonvisual features, then you miss out on the unique strengths and weaknesses of each modality,” Zong adds.

    They designed Umwelt to offer flexibility, enabling a user to switch between modalities easily when one would better suit their task at a given time.

    To use the editor, one uploads a dataset to Umwelt, which employs heuristics to automatically create default representations in each modality.

    If the dataset contains stock prices for companies, Umwelt might generate a multiseries line chart, a textual structure that groups data by ticker symbol and date, and a sonification that uses tone length to represent the price for each date, arranged by ticker symbol.
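
    A hypothetical sketch of what such a default multimodal specification could contain is shown below; the field names and the tone-length mapping are illustrative guesses, not Umwelt's actual format:

    ```python
    # Hypothetical sketch of a default multimodal spec for a stock-price dataset.
    # Field names are illustrative; this is not Umwelt's actual specification format.
    default_spec = {
        "visualization": {
            "mark": "line",
            "x": "date",
            "y": "price",
            "series": "ticker",                   # one line per company
        },
        "text": {
            # Screen-reader-friendly structure: group rows by ticker, then by date.
            "group_by": ["ticker", "date"],
            "describe": ["price"],
        },
        "sonification": {
            "order_by": ["ticker", "date"],
            "encode": {"price": "tone_length"},   # longer tones = higher prices
        },
    }

    def tone_lengths(prices, min_ms=100, max_ms=800):
        """Map prices linearly onto tone durations for the sonification channel."""
        lo, hi = min(prices), max(prices)
        span = (hi - lo) or 1
        return [min_ms + (p - lo) / span * (max_ms - min_ms) for p in prices]

    print(tone_lengths([10, 15, 30]))             # e.g., [100.0, 275.0, 800.0]
    ```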

    The default heuristics are intended to help the user get started.

    “In any kind of creative tool, you have a blank-slate effect where it is hard to know how to begin. That is compounded in a multimodal tool because you have to specify things in three different representations,” Zong says.

    The editor links interactions across modalities, so if a user changes the textual description, that information is adjusted in the corresponding sonification. Someone could utilize the editor to build a multimodal representation, switch to the viewer for an initial exploration, then return to the editor to make adjustments.

    Helping users communicate about data

    To test Umwelt, they created a diverse set of multimodal representations, from scatterplots to multiview charts, to ensure the system could effectively represent different data types. Then they put the tool in the hands of five expert screen reader users.

    Study participants mostly found Umwelt to be useful for creating, exploring, and discussing data representations. One user said Umwelt was like an “enabler” that decreased the time it took them to analyze data. The users agreed that Umwelt could help them communicate about data more easily with sighted colleagues.

    “What stands out about Umwelt is its core philosophy of de-emphasizing the visual in favor of a balanced, multisensory data experience. Often, nonvisual data representations are relegated to the status of secondary considerations, mere add-ons to their visual counterparts. However, visualization is merely one aspect of data representation. I appreciate their efforts in shifting this perception and embracing a more inclusive approach to data science,” says JooYoung Seo, an assistant professor in the School of Information Sciences at the University of Illinois at Urbana-Champaign, who was not involved with this work.

    Moving forward, the researchers plan to create an open-source version of Umwelt that others can build upon. They also want to integrate tactile sensing into the software system as an additional modality, enabling the use of tools like refreshable tactile graphics displays.

    “In addition to its impact on end users, I am hoping that Umwelt can be a platform for asking scientific questions around how people use and perceive multimodal representations, and how we can improve the design beyond this initial step,” says Zong.

    This work was supported, in part, by the National Science Foundation and the MIT Morningside Academy for Design Fellowship.

  • Q&A: How refusal can be an act of design

    This month in the ACM Journal on Responsible Computing, MIT graduate student Jonathan Zong SM ’20 and co-author J. Nathan Matias SM ’13, PhD ’17 of the Cornell Citizens and Technology Lab examine how the notion of refusal can open new avenues in the field of data ethics. In their open-access report, “Data Refusal From Below: A Framework for Understanding, Evaluating, and Envisioning Refusal as Design,” the pair proposes a framework in four dimensions to map how individuals can say “no” to technology misuses. At the same time, the researchers argue that just like design, refusal is generative, and has the potential to create alternate futures.

    Zong, a PhD candidate in electrical engineering and computer science, a 2022-23 Design Fellow at the MIT Morningside Academy for Design, and a member of the MIT Visualization Group, describes his latest work in this Q&A.

    Q: How do you define the concept of “refusal,” and where does it come from?

    A: Refusal was developed in feminist and Indigenous studies. It’s this idea of saying “no,” without being given permission to say “no.” Scholars like Ruha Benjamin write about refusal in the context of surveillance, race, and bioethics, and talk about it as a necessary counterpart to consent. Others, like the authors of the “Feminist Data Manifest-No,” think of refusal as something that can help us commit to building better futures.

    Benjamin illustrates cases where the choice to refuse is not equally possible for everyone, citing examples involving genetic data and refugee screenings in the U.K. The imbalance of power in these situations underscores the broader concept of refusal, extending beyond rejecting specific options to challenging the entire set of choices presented.

    Q: What inspired you to work on the notion of refusal as an act of design?

    A: In my work on data ethics, I’ve been thinking about how to incorporate processes into research data collection, particularly around consent and opt-out, with a focus on individual autonomy and the idea of giving people choices about the way that their data is used. But when it comes to data privacy, simply making choices available is not enough. Choices can be unequally available, or create no-win situations where all options are bad. This led me to the concept of refusal: questioning the authority of data collectors and challenging their legitimacy.

    The key idea of my work is that refusal is an act of design. I think of refusal as deliberate actions to redesign our socio-technical landscape by exerting some sort of influence. Like design, refusal is generative. Like design, it’s oriented towards creating alternate possibilities and alternate futures. Design is a process of exploring or traversing a space of possibility. Applying a design framework to cases of refusal drawn from scholarly and journalistic sources allowed me to establish a common language for talking about refusal and to imagine refusals that haven’t been explored yet.

    Q: What are the stakes around data privacy and data collection?

    A: The use of data for facial recognition surveillance in the U.S. is a big example we use in the paper. When people do everyday things like post on social media or walk past cameras in public spaces, they might be contributing their data to training facial recognition systems. For instance, a tech company may take photos from a social media site and build facial recognition that they then sell to the government. In the U.S., these systems are disproportionately used by police to surveil communities of color. It is difficult to apply concepts like consent and opt out of these processes, because they happen over time and involve multiple kinds of institutions. It’s also not clear that individual opt-out would do anything to change the overall situation. Refusal then becomes a crucial avenue, at both individual and community levels, to think more broadly of how affected people still exert some kind of voice or agency, without necessarily having an official channel to do so.

    Q: Why do you think these issues are more particularly affecting disempowered communities?

    A: People who are affected by technologies are not always included in the design process for those technologies. Refusal then becomes a meaningful expression of values and priorities for those who were not part of the early design conversations. Actions taken against technologies like face surveillance — be it legal battles against companies, advocacy for stricter regulations, or even direct action like disabling security cameras — may not fit the conventional notion of participating in a design process. And yet, these are the actions available to refusers who may be excluded from other forms of participation.

    I’m particularly inspired by the movement around Indigenous data sovereignty. Organizations like the First Nations Information Governance Centre work towards prioritizing Indigenous communities’ perspectives in data collection, and refuse inadequate representation in official health data from the Canadian government. I think this is a movement that exemplifies the potential of refusal, not only as a way to reject what’s being offered, but also as a means to propose a constructive alternative, very much like design. Refusal is not merely a negation, but a pathway to different futures.

    Q: Can you elaborate on the design framework you propose?

    A: Refusals vary widely across contexts and scales. Developing a framework for refusal is about helping people see actions that are seemingly very different as instances of the same broader idea. Our framework consists of four facets: autonomy, time, power, and cost.

    Consider the case of IBM creating a facial recognition dataset using people’s photos without consent. We saw multiple forms of refusal emerge in response. IBM allowed individuals to opt out by withdrawing their photos. People collectively refused by creating a class-action lawsuit against IBM. Around the same time, many U.S. cities started passing local legislation banning the government use of facial recognition. Evaluating these cases through the framework highlights commonalities and differences. The framework highlights varied approaches to autonomy, like individual opt-out and collective action. Regarding time, opt-outs and lawsuits react to past harm, while legislation might proactively prevent future harm. Power dynamics differ; withdrawing individual photos minimally influences IBM, while legislation could potentially cause longer-term change. And as for cost, individual opt-out seems less demanding, while other approaches require more time and effort, balanced against potential benefits.

    The framework facilitates case description and comparison across these dimensions. I think its generative nature encourages exploration of novel forms of refusal as well. By identifying the characteristics we want to see in future refusal strategies — collective, proactive, powerful, low-cost… — we can aspire to shape future approaches and change the behavior of data collectors. We may not always be able to combine all these criteria, but the framework provides a means to articulate our aspirational goals in this context.

    Q: What impact do you hope this research will have?

    A: I hope to expand the notion of who can participate in design, and whose actions are seen as legitimate expressions of design input. I think a lot of work so far in the conversation around data ethics prioritizes the perspective of computer scientists who are trying to design better systems, at the expense of the perspective of people for whom the systems are not currently working. So, I hope designers and computer scientists can embrace the concept of refusal as a legitimate form of design, and a source of inspiration. There’s a vital conversation happening, one that should influence the design of future systems, even if expressed through unconventional means.

    One of the things I want to underscore in the paper is that design extends beyond software. Taking a socio-technical perspective, the act of designing encompasses software, institutions, relationships, and governance structures surrounding data use. I want people who aren’t software engineers, like policymakers or activists, to view themselves as integral to the technology design process.

  • AI generates high-quality images 30 times faster in a single step

    In our current age of artificial intelligence, computers can generate their own “art” by way of diffusion models, iteratively adding structure to a noisy initial state until a clear image or video emerges. Diffusion models have suddenly grabbed a seat at everyone’s table: Enter a few words and experience instantaneous, dopamine-spiking dreamscapes at the intersection of reality and fantasy. Behind the scenes, it involves a complex, time-intensive process requiring numerous iterations for the algorithm to perfect the image.

    MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have introduced a new framework that simplifies the multi-step process of traditional diffusion models into a single step, addressing previous limitations. This is done through a type of teacher-student model: teaching a new computer model to mimic the behavior of more complicated, original models that generate images. The approach, known as distribution matching distillation (DMD), retains the quality of the generated images and allows for much faster generation. 

    “Our work is a novel method that accelerates current diffusion models such as Stable Diffusion and DALL-E 3 by 30 times,” says Tianwei Yin, an MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and the lead researcher on the DMD framework. “This advancement not only significantly reduces computational time but also retains, if not surpasses, the quality of the generated visual content. Theoretically, the approach marries the principles of generative adversarial networks (GANs) with those of diffusion models, achieving visual content generation in a single step — a stark contrast to the hundred steps of iterative refinement required by current diffusion models. It could potentially be a new generative modeling method that excels in speed and quality.”

    This single-step diffusion model could enhance design tools, enabling quicker content creation and potentially supporting advancements in drug discovery and 3D modeling, where promptness and efficacy are key.

    Distribution dreams

    DMD cleverly has two components. First, it uses a regression loss, which anchors the mapping to ensure a coarse organization of the space of images to make training more stable. Next, it uses a distribution matching loss, which ensures that the probability of generating a given image with the student model corresponds to its real-world occurrence frequency. To do this, it leverages two diffusion models that act as guides, helping the system understand the difference between real and generated images and making training the speedy one-step generator possible.

    The system achieves faster generation by training a new network to minimize the distribution divergence between its generated images and those from the training dataset used by traditional diffusion models. “Our key insight is to approximate gradients that guide the improvement of the new model using two diffusion models,” says Yin. “In this way, we distill the knowledge of the original, more complex model into the simpler, faster one, while bypassing the notorious instability and mode collapse issues in GANs.” 
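
    Read schematically, the training signal Yin describes could be organized as in the toy sketch below. This is a reconstruction from the description above using tiny stand-in networks, not the released DMD code, and the “teacher” here is a frozen random mapping rather than a real diffusion model:

    ```python
    # Toy sketch of the two training signals described above; a schematic reconstruction,
    # not the released DMD code. Tiny MLPs on vectors stand in for real image networks.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    D = 64                                            # flattened "image" size (toy scale)
    gen = nn.Sequential(nn.Linear(D, 256), nn.SiLU(), nn.Linear(256, D))                 # one-step generator
    real_denoiser = nn.Sequential(nn.Linear(D + 1, 256), nn.SiLU(), nn.Linear(256, D))   # frozen teacher stand-in
    fake_denoiser = nn.Sequential(nn.Linear(D + 1, 256), nn.SiLU(), nn.Linear(256, D))   # tracks the generator

    opt_g = torch.optim.Adam(gen.parameters(), lr=1e-4)
    opt_f = torch.optim.Adam(fake_denoiser.parameters(), lr=1e-4)
    W_teacher = torch.randn(D, D)                     # stand-in for the teacher's multi-step sampler

    def denoise(model, x_t, t):
        return model(torch.cat([x_t, t], dim=1))      # predicts a clean sample from a noisy one

    for step in range(1000):
        z = torch.randn(32, D)
        x_fake = gen(z)

        # (1) Regression loss: anchor the generator to precomputed teacher outputs for these noises.
        with torch.no_grad():
            x_teacher = torch.tanh(z @ W_teacher)
        loss_reg = F.mse_loss(x_fake, x_teacher)

        # (2) Distribution matching loss: noise the generated samples, then treat the gap between
        # the two denoisers' predictions as an approximate gradient on those samples.
        t = torch.rand(32, 1)
        x_t = x_fake + t * torch.randn_like(x_fake)
        with torch.no_grad():
            grad = denoise(fake_denoiser, x_t, t) - denoise(real_denoiser, x_t, t)
        loss_dm = 0.5 * F.mse_loss(x_fake, (x_fake - grad).detach())   # surrogate that injects `grad`

        opt_g.zero_grad()
        (loss_reg + loss_dm).backward()
        opt_g.step()

        # Keep the "fake" denoiser in sync with the generator's current output distribution.
        x_t_fake = x_fake.detach() + t * torch.randn_like(x_fake)
        loss_fake = F.mse_loss(denoise(fake_denoiser, x_t_fake, t), x_fake.detach())
        opt_f.zero_grad()
        loss_fake.backward()
        opt_f.step()
    ```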

    Yin and colleagues used pre-trained networks for the new student model, simplifying the process. By copying and fine-tuning parameters from the original models, the team achieved fast training convergence of the new model, which is capable of producing high-quality images with the same architectural foundation. “This enables combining with other system optimizations based on the original architecture to further accelerate the creation process,” adds Yin. 

    When put to the test against the usual methods, using a wide range of benchmarks, DMD showed consistent performance. On the popular benchmark of generating images based on specific classes on ImageNet, DMD is the first one-step diffusion technique that churns out pictures pretty much on par with those from the original, more complex models, rocking a super-close Fréchet inception distance (FID) score of just 0.3, which is impressive, since FID is all about judging the quality and diversity of generated images. Furthermore, DMD excels in industrial-scale text-to-image generation and achieves state-of-the-art one-step generation performance. There’s still a slight quality gap when tackling trickier text-to-image applications, suggesting there’s a bit of room for improvement down the line. 

    Additionally, the performance of the DMD-generated images is intrinsically linked to the capabilities of the teacher model used during the distillation process. In the current form, which uses Stable Diffusion v1.5 as the teacher model, the student inherits limitations such as rendering detailed depictions of text and small faces, suggesting that DMD-generated images could be further enhanced by more advanced teacher models. 

    “Decreasing the number of iterations has been the Holy Grail in diffusion models since their inception,” says Fredo Durand, MIT professor of electrical engineering and computer science, CSAIL principal investigator, and a lead author on the paper. “We are very excited to finally enable single-step image generation, which will dramatically reduce compute costs and accelerate the process.” 

    “Finally, a paper that successfully combines the versatility and high visual quality of diffusion models with the real-time performance of GANs,” says Alexei Efros, a professor of electrical engineering and computer science at the University of California at Berkeley who was not involved in this study. “I expect this work to open up fantastic possibilities for high-quality real-time visual editing.” 

    Yin and Durand’s fellow authors are MIT electrical engineering and computer science professor and CSAIL principal investigator William T. Freeman, as well as Adobe research scientists Michaël Gharbi SM ’15, PhD ’18; Richard Zhang; Eli Shechtman; and Taesung Park. Their work was supported, in part, by U.S. National Science Foundation grants (including one for the Institute for Artificial Intelligence and Fundamental Interactions), the Singapore Defense Science and Technology Agency, and by funding from Gwangju Institute of Science and Technology and Amazon. Their work will be presented at the Conference on Computer Vision and Pattern Recognition in June.

  • Exploring the cellular neighborhood

    Cells rely on complex molecular machines composed of protein assemblies to perform essential functions such as energy production, gene expression, and protein synthesis. To better understand how these machines work, scientists capture snapshots of them by isolating proteins from cells and using various methods to determine their structures. However, isolating proteins from cells also removes them from the context of their native environment, including protein interaction partners and cellular location.

    Recently, cryogenic electron tomography (cryo-ET) has emerged as a way to observe proteins in their native environment by imaging frozen cells at different angles to obtain three-dimensional structural information. This approach is exciting because it allows researchers to directly observe how and where proteins associate with each other, revealing the cellular neighborhood of those interactions within the cell.

    With the technology available to image proteins in their native environment, MIT graduate student Barrett Powell wondered if he could take it one step further: What if molecular machines could be observed in action? In a paper published March 8 in Nature Methods, Powell describes the method he developed, called tomoDRGN, for modeling structural differences of proteins in cryo-ET data that arise from protein motions or proteins binding to different interaction partners. These variations are known as structural heterogeneity. 

    Although Powell had joined the lab of MIT associate professor of biology Joey Davis as an experimental scientist, he recognized the potential impact of computational approaches in understanding structural heterogeneity within a cell. Previously, the Davis Lab developed a related methodology named cryoDRGN to understand structural heterogeneity in purified samples. As Powell and Davis saw cryo-ET rising in prominence in the field, Powell took on the challenge of re-imagining this framework to work in cells.

    When solving structures with purified samples, each particle is imaged only once. By contrast, cryo-ET data is collected by imaging each particle more than 40 times from different angles. That meant tomoDRGN needed to be able to merge the information from more than 40 images, which was where the project hit a roadblock: the amount of data led to an information overload.

    To address this, Powell successfully rebuilt the cryoDRGN model to prioritize only the highest-quality data. When imaging the same particle multiple times, radiation damage occurs. The images acquired earlier, therefore, tend to be of higher quality because the particles are less damaged.

    “By excluding some of the lower-quality data, the results were actually better than using all of the data — and the computational performance was substantially faster,” Powell says.
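
    As a hedged illustration of that idea, and not tomoDRGN's actual code, one might keep only the earliest, least-damaged exposures for each particle:

    ```python
    # Illustrative only: keep the earliest (lowest cumulative dose) tilt images per particle.
    # This mimics the "use only the highest-quality data" idea; it is not tomoDRGN's code.
    import numpy as np

    def select_low_dose_tilts(tilt_images, acquisition_order, keep_fraction=0.5):
        """tilt_images: array of shape (n_tilts, H, W); acquisition_order: when each tilt was taken.
        Returns the subset acquired earliest, which has suffered the least radiation damage."""
        n_keep = max(1, int(len(acquisition_order) * keep_fraction))
        earliest = np.argsort(acquisition_order)[:n_keep]
        return tilt_images[earliest]

    # Example: a particle imaged 41 times; keep roughly the first 20 exposures.
    images = np.zeros((41, 64, 64))
    order = np.random.permutation(41)        # tilts are not acquired in angular order
    subset = select_low_dose_tilts(images, order)
    print(subset.shape)                      # (20, 64, 64)
    ```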

    Just as Powell was beginning work on testing his model, he had a stroke of luck: The authors of a groundbreaking new study that visualized, for the first time, ribosomes inside cells at near-atomic resolution, shared their raw data on the Electron Microscopy Public Image Archive (EMPIAR). This dataset was an exemplary test case for Powell, through which he demonstrated that tomoDRGN could uncover structural heterogeneity within cryo-ET data.

    According to Powell, one exciting result is what tomoDRGN found surrounding a subset of ribosomes in the EMPIAR dataset. Some of the ribosomal particles were associated with a bacterial cell membrane and engaged in a process called cotranslational translocation. This occurs when a protein is being simultaneously synthesized and transported across a membrane. Researchers can use this result to make new hypotheses about how the ribosome functions with other protein machinery integral to transporting proteins outside of the cell, now guided by a structure of the complex in its native environment. 

    After seeing that tomoDRGN could resolve structural heterogeneity from a structurally diverse dataset, Powell was curious: How small of a population could tomoDRGN identify? For that test, he chose a protein named apoferritin, which is a commonly used benchmark for cryo-ET and is often treated as structurally homogeneous. Ferritin is a protein used for iron storage and is referred to as apoferritin when it lacks iron.

    Surprisingly, in addition to the expected particles, tomoDRGN revealed a minor population of iron-bound ferritin particles, making up just 2 percent of the dataset, that had not been previously reported. This result further demonstrated tomoDRGN’s ability to identify structural states that occur so infrequently that they would be averaged out of a 3D reconstruction.

    Powell and other members of the Davis Lab are excited to see how tomoDRGN can be applied to further ribosomal studies and to other systems. Davis works on understanding how cells assemble, regulate, and degrade molecular machines, so the next steps include exploring ribosome biogenesis within cells in greater detail using this new tool.

    “What are the possible states that we may be losing during purification?” Davis asks. “Perhaps more excitingly, we can look at how they localize within the cell and what partners and protein complexes they may be interacting with.”

  • Using generative AI to improve software testing

    Generative AI is getting plenty of attention for its ability to create text and images. But those media represent only a fraction of the data that proliferate in our society today. Data are generated every time a patient goes through a medical system, a storm impacts a flight, or a person interacts with a software application.

    Using generative AI to create realistic synthetic data around those scenarios can help organizations more effectively treat patients, reroute planes, or improve software platforms — especially in scenarios where real-world data are limited or sensitive.

    For the last three years, the MIT spinout DataCebo has offered a generative software system called the Synthetic Data Vault to help organizations create synthetic data to do things like test software applications and train machine learning models.

    The Synthetic Data Vault, or SDV, has been downloaded more than 1 million times, with more than 10,000 data scientists using the open-source library for generating synthetic tabular data. The founders — Principal Research Scientist Kalyan Veeramachaneni and alumna Neha Patki ’15, SM ’16 — believe the company’s success is due to SDV’s ability to revolutionize software testing.

    SDV goes viral

    In 2016, Veeramachaneni’s group in the Data to AI Lab unveiled a suite of open-source generative AI tools to help organizations create synthetic data that matched the statistical properties of real data.

    Companies can use synthetic data instead of sensitive information in programs while still preserving the statistical relationships between datapoints. Companies can also use synthetic data to run new software through simulations to see how it performs before releasing it to the public.

    Veeramachaneni’s group came across the problem because it was working with companies that wanted to share their data for research.

    “MIT helps you see all these different use cases,” Patki explains. “You work with finance companies and health care companies, and all those projects are useful to formulate solutions across industries.”

    In 2020, the researchers founded DataCebo to build more SDV features for larger organizations. Since then, the use cases have been as impressive as they’ve been varied.

    With DataCebo’s new flight simulator, for instance, airlines can plan for rare weather events in a way that would be impossible using only historic data. In another application, SDV users synthesized medical records to predict health outcomes for patients with cystic fibrosis. A team from Norway recently used SDV to create synthetic student data to evaluate whether various admissions policies were meritocratic and free from bias.

    In 2021, the data science platform Kaggle hosted a competition for data scientists that used SDV to create synthetic data sets to avoid using proprietary data. Roughly 30,000 data scientists participated, building solutions and predicting outcomes based on the company’s realistic data.

    And as DataCebo has grown, it’s stayed true to its MIT roots: All of the company’s current employees are MIT alumni.

    Supercharging software testing

    Although their open-source tools are being used for a variety of use cases, the company is focused on growing its traction in software testing.

    “You need data to test these software applications,” Veeramachaneni says. “Traditionally, developers manually write scripts to create synthetic data. With generative models, created using SDV, you can learn from a sample of data collected and then sample a large volume of synthetic data (which has the same properties as real data), or create specific scenarios and edge cases, and use the data to test your application.”

    For example, if a bank wanted to test a program designed to reject transfers from accounts with no money in them, it would have to simulate many accounts simultaneously transacting. Doing that with data created manually would take a lot of time. With DataCebo’s generative models, customers can create any edge case they want to test.
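
    A minimal sketch of that workflow with the open-source SDV library follows. The bank-account table is made up, and the call names follow SDV's documented 1.x single-table API as best I can recall, so treat the details as assumptions to verify against the current documentation:

    ```python
    # Minimal sketch, assuming SDV's 1.x single-table API; the bank-account data is made up.
    import pandas as pd
    from sdv.metadata import SingleTableMetadata
    from sdv.single_table import GaussianCopulaSynthesizer

    # A small stand-in for real (sensitive) account data.
    real_accounts = pd.DataFrame({
        "account_id": range(1000),
        "balance": [0.0 if i % 50 == 0 else 100.0 + i for i in range(1000)],
        "transfers_last_month": [i % 7 for i in range(1000)],
    })

    metadata = SingleTableMetadata()
    metadata.detect_from_dataframe(real_accounts)

    synthesizer = GaussianCopulaSynthesizer(metadata)
    synthesizer.fit(real_accounts)

    # Sample a much larger synthetic table with the same statistical relationships, then use it
    # to exercise the transfer-rejection logic, including the zero-balance edge case.
    synthetic_accounts = synthesizer.sample(num_rows=10_000)
    zero_balance = synthetic_accounts[synthetic_accounts["balance"] <= 0]
    print(len(zero_balance), "synthetic zero-balance accounts to test against")
    ```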

    “It’s common for industries to have data that is sensitive in some capacity,” Patki says. “Often when you’re in a domain with sensitive data you’re dealing with regulations, and even if there aren’t legal regulations, it’s in companies’ best interest to be diligent about who gets access to what at which time. So, synthetic data is always better from a privacy perspective.”

    Scaling synthetic data

    Veeramachaneni believes DataCebo is advancing the field of what it calls synthetic enterprise data, or data generated from user behavior on large companies’ software applications.

    “Enterprise data of this kind is complex, and there is no universal availability of it, unlike language data,” Veeramachaneni says. “When folks use our publicly available software and report back whether it works on a certain pattern, we learn a lot of these unique patterns, and it allows us to improve our algorithms. From one perspective, we are building a corpus of these complex patterns, which for language and images is readily available.”

    DataCebo also recently released features to improve SDV’s usefulness, including tools to assess the “realism” of the generated data, called the SDMetrics library, as well as a way to compare models’ performances, called SDGym.

    “It’s about ensuring organizations trust this new data,” Veeramachaneni says. “[Our tools offer] programmable synthetic data, which means we allow enterprises to insert their specific insight and intuition to build more transparent models.”

    As companies in every industry rush to adopt AI and other data science tools, DataCebo is ultimately helping them do so in a way that is more transparent and responsible.

    “In the next few years, synthetic data from generative models will transform all data work,” Veeramachaneni says. “We believe 90 percent of enterprise operations can be done with synthetic data.”

  • Startup accelerates progress toward light-speed computing

    Our ability to cram ever-smaller transistors onto a chip has enabled today’s age of ubiquitous computing. But that approach is finally running into limits, with some experts declaring an end to Moore’s Law and a related principle known as Dennard scaling.

    Those developments couldn’t be coming at a worse time. Demand for computing power has skyrocketed in recent years thanks in large part to the rise of artificial intelligence, and it shows no signs of slowing down.

    Now Lightmatter, a company founded by three MIT alumni, is continuing the remarkable progress of computing by rethinking the lifeblood of the chip. Instead of relying solely on electricity, the company also uses light for data processing and transport. The company’s first two products, a chip specializing in artificial intelligence operations and an interconnect that facilitates data transfer between chips, use both photons and electrons to drive more efficient operations.

    “The two problems we are solving are ‘How do chips talk?’ and ‘How do you do these [AI] calculations?’” Lightmatter co-founder and CEO Nicholas Harris PhD ’17 says. “With our first two products, Envise and Passage, we’re addressing both of those questions.”

    In a nod to the size of the problem and the demand for AI, Lightmatter raised just north of $300 million in 2023 at a valuation of $1.2 billion. Now the company is demonstrating its technology with some of the largest technology companies in the world in hopes of reducing the massive energy demand of data centers and AI models.

    “We’re going to enable platforms on top of our interconnect technology that are made up of hundreds of thousands of next-generation compute units,” Harris says. “That simply wouldn’t be possible without the technology that we’re building.”

    From idea to $100K

    Prior to MIT, Harris worked at the semiconductor company Micron Technology, where he studied the fundamental devices behind integrated chips. The experience made him see how the traditional approach for improving computer performance — cramming more transistors onto each chip — was hitting its limits.

    “I saw how the roadmap for computing was slowing, and I wanted to figure out how I could continue it,” Harris says. “What approaches can augment computers? Quantum computing and photonics were two of those pathways.”

    Harris came to MIT to work on photonic quantum computing for his PhD under Dirk Englund, an associate professor in the Department of Electrical Engineering and Computer Science. As part of that work, he built silicon-based integrated photonic chips that could send and process information using light instead of electricity.

    The work led to dozens of patents and more than 80 research papers in prestigious journals like Nature. But another technology also caught Harris’s attention at MIT.

    “I remember walking down the hall and seeing students just piling out of these auditorium-sized classrooms, watching relayed live videos of lectures to see professors teach deep learning,” Harris recalls, referring to the artificial intelligence technique. “Everybody on campus knew that deep learning was going to be a huge deal, so I started learning more about it, and we realized that the systems I was building for photonic quantum computing could actually be leveraged to do deep learning.”

    Harris had planned to become a professor after his PhD, but he realized he could attract more funding and innovate more quickly through a startup, so he teamed up with Darius Bunandar PhD ’18, who was also studying in Englund’s lab, and Thomas Graham MBA ’18. The co-founders successfully launched into the startup world by winning the 2017 MIT $100K Entrepreneurship Competition.

    Seeing the light

    Lightmatter’s Envise chip takes the part of computing that electrons do well, like memory, and combines it with what light does well, like performing the massive matrix multiplications of deep-learning models.

    “With photonics, you can perform multiple calculations at the same time because the data is coming in on different colors of light,” Harris explains. “In one color, you could have a photo of a dog. In another color, you could have a photo of a cat. In another color, maybe a tree, and you could have all three of those operations going through the same optical computing unit, this matrix accelerator, at the same time. That drives up operations per area, and it reuses the hardware that’s there, driving up energy efficiency.”
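
    As a loose numerical analogy, and not a simulation of the photonic hardware, the wavelength trick amounts to pushing several inputs through one fixed weight matrix at once:

    ```python
    # Loose numerical analogy only: several inputs ("colors") share one matrix accelerator.
    # This is ordinary batched linear algebra, not a model of the photonic hardware itself.
    import numpy as np

    weights = np.random.randn(512, 512)      # the fixed matrix programmed into the accelerator

    # Three inputs arriving "on different wavelengths": e.g., features for a dog, a cat, a tree.
    inputs = np.random.randn(3, 512)

    # One pass through the shared hardware produces all three results concurrently.
    outputs = inputs @ weights.T
    print(outputs.shape)                     # (3, 512)
    ```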

    Passage uses the latency and bandwidth advantages of light to link processors in a manner similar to how fiber optic cables use light to send data over long distances. It also enables chips as big as entire wafers to act as a single processor. Sending information between chips is central to running the massive server farms that power cloud computing and run AI systems like ChatGPT.

    Both products are designed to bring energy efficiencies to computing, which Harris says are needed to keep up with rising demand without bringing huge increases in power consumption.

    “By 2040, some predict that around 80 percent of all energy usage on the planet will be devoted to data centers and computing, and AI is going to be a huge fraction of that,” Harris says. “When you look at computing deployments for training these large AI models, they’re headed toward using hundreds of megawatts. Their power usage is on the scale of cities.”

    Lightmatter is currently working with chipmakers and cloud service providers for mass deployment. Harris notes that because the company’s equipment runs on silicon, it can be produced by existing semiconductor fabrication facilities without massive changes in process.

    The ambitious plans are designed to open up a new path forward for computing that would have huge implications for the environment and economy.

    “We’re going to continue looking at all of the pieces of computers to figure out where light can accelerate them, make them more energy efficient, and faster, and we’re going to continue to replace those parts,” Harris says. “Right now, we’re focused on interconnect with Passage and on compute with Envise. But over time, we’re going to build out the next generation of computers, and it’s all going to be centered around light.”