More stories

  • Advancing technology for aquaculture

    According to the National Oceanic and Atmospheric Administration, aquaculture in the United States represents a $1.5 billion industry annually. Like land-based farming, shellfish aquaculture requires healthy seed production in order to maintain a sustainable industry. Aquaculture hatchery production of shellfish larvae — seeds — requires close monitoring to track mortality rates and assess health from the earliest stages of life. 

    Careful observation is necessary to inform production scheduling, determine effects of naturally occurring harmful bacteria, and ensure sustainable seed production. This is an essential step for shellfish hatcheries but is currently a time-consuming manual process prone to human error. 

    With funding from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), MIT Sea Grant is working with Associate Professor Otto Cordero of the MIT Department of Civil and Environmental Engineering, Professor Taskin Padir and Research Scientist Mark Zolotas at the Northeastern University Institute for Experiential Robotics, and others at the Aquaculture Research Corporation (ARC) and the Cape Cod Commercial Fishermen’s Alliance to advance technology for the aquaculture industry. Located on Cape Cod, ARC is a leading shellfish hatchery, farm, and wholesaler that plays a vital role in providing high-quality shellfish seed to local and regional growers.

    Two MIT students have joined the effort this semester, working with Robert Vincent, MIT Sea Grant’s assistant director of advisory services, through the Undergraduate Research Opportunities Program (UROP). 

    First-year student Unyime Usua and sophomore Santiago Borrego are using microscopy images of shellfish seed from ARC to train machine learning algorithms that will help automate the identification and counting process. The resulting user-friendly image recognition tool aims to aid aquaculturists in differentiating and counting healthy, unhealthy, and dead shellfish larvae, improving accuracy and reducing time and effort.
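
    A workflow like the one described above can be prototyped with standard transfer-learning tools. The sketch below is purely illustrative rather than the team’s actual code: the folder layout, the three class names (healthy, unhealthy, dead), and the training settings are all assumptions.

    ```python
    # Illustrative sketch only, not the MIT Sea Grant code: fine-tune a small
    # pretrained image classifier to label microscopy crops of shellfish larvae
    # as healthy, unhealthy, or dead. Assumes crops are stored in class-named
    # folders such as larvae/train/healthy/ (a hypothetical layout).
    import torch
    from torch import nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    transform = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("larvae/train", transform=transform)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # Start from an ImageNet-pretrained backbone; swap in a 3-way output layer.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 3)

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

    # Counting each class in a new image set then reduces to running the model
    # over detected larvae and tallying the predicted labels.
    ```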

    Vincent explains that AI is a powerful tool for environmental science, enabling researchers, industry, and resource managers to address long-standing pinch points in accurate data collection, analysis, and prediction, and to streamline processes. “Funding support from programs like J-WAFS enables us to tackle these problems head-on,” he says. 

    ARC faces challenges with manually quantifying larvae classes, an important step in their seed production process. “When larvae are in their growing stages they are constantly being sized and counted,” explains Cheryl James, ARC larval/juvenile production manager. “This process is critical to encourage optimal growth and strengthen the population.” 

    Developing an automated identification and counting system will help to improve this step in the production process with time and cost benefits. “This is not an easy task,” says Vincent, “but with the guidance of Dr. Zolotas at the Northeastern University Institute for Experiential Robotics and the work of the UROP students, we have made solid progress.” 

    The UROP program benefits both researchers and students. Involving MIT UROP students in developing these types of systems gives them insight into AI applications they might not have considered, along with opportunities to explore, learn, and apply themselves while contributing to solving real challenges.

    Borrego saw this project as an opportunity to apply what he’d learned in class 6.390 (Introduction to Machine Learning) to a real-world issue. “I was starting to form an idea of how computers can see images and extract information from them,” he says. “I wanted to keep exploring that.”

    Usua decided to pursue the project because of the direct industry impacts it could have. “I’m pretty interested in seeing how we can utilize machine learning to make people’s lives easier. We are using AI to help biologists make this counting and identification process easier.” While Usua wasn’t familiar with aquaculture before starting this project, she explains, “Just hearing about the hatcheries that Dr. Vincent was telling us about, it was unfortunate that not a lot of people know what’s going on and the problems that they’re facing.”

    On Cape Cod alone, aquaculture is an $18 million per year industry. But the Massachusetts Division of Marine Fisheries estimates that hatcheries are only able to meet 70–80 percent of seed demand annually, which impacts local growers and economies. Through this project, the partners aim to develop technology that will increase seed production, advance industry capabilities, and help understand and improve the hatchery microbiome.

    Borrego explains the initial challenge of having limited data to work with. “Starting out, we had to go through and label all of the data, but going through that process helped me learn a lot.” In true MIT fashion, he shares his takeaway from the project: “Try to get the best out of what you’re given with the data you have to work with. You’re going to have to adapt and change your strategies depending on what you have.”

    Usua describes her experience going through the research process, communicating in a team, and deciding what approaches to take. “Research is a difficult and long process, but there is a lot to gain from it because it teaches you to look for things on your own and find your own solutions to problems.”

    In addition to increasing seed production and reducing the human labor required in the hatchery process, the collaborators expect this project to contribute to cost savings and technology integration to support one of the most underserved industries in the United States. 

    Borrego and Usua both plan to continue their work for a second semester with MIT Sea Grant. Borrego is interested in learning more about how technology can be used to protect the environment and wildlife. Usua says she hopes to explore more projects related to aquaculture. “It seems like there’s an infinite amount of ways to tackle these issues.”

  • Using deep learning to image the Earth’s planetary boundary layer

    Although the troposphere is often thought of as the closest layer of the atmosphere to the Earth’s surface, the planetary boundary layer (PBL) — the lowest layer of the troposphere — is actually the part that most significantly influences weather near the surface. In the 2018 planetary science decadal survey, the PBL was identified as an important scientific issue whose better characterization could enhance storm forecasting and improve climate projections.

    “The PBL is where the surface interacts with the atmosphere, including exchanges of moisture and heat that help lead to severe weather and a changing climate,” says Adam Milstein, a technical staff member in Lincoln Laboratory’s Applied Space Systems Group. “The PBL is also where humans live, and the turbulent movement of aerosols throughout the PBL is important for air quality that influences human health.” 

    Although vital for studying weather and climate, important features of the PBL, such as its height, are difficult to resolve with current technology. In the past four years, Lincoln Laboratory staff have been studying the PBL, focusing on two different tasks: using machine learning to make 3D-scanned profiles of the atmosphere, and resolving the vertical structure of the atmosphere more clearly in order to better predict droughts.  

    This PBL-focused research effort builds on more than a decade of related work on fast, operational neural network algorithms developed by Lincoln Laboratory for NASA missions. These missions include the Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of Smallsats (TROPICS) mission as well as Aqua, a satellite that collects data about Earth’s water cycle and observes variables such as ocean temperature, precipitation, and water vapor in the atmosphere. These algorithms retrieve temperature and humidity from the satellite instrument data and have been shown to significantly improve the accuracy and usable global coverage of the observations over previous approaches. For TROPICS, the algorithms help retrieve data that are used to characterize a storm’s rapidly evolving structures in near-real time, and for Aqua, they have helped improve forecasting models, drought monitoring, and fire prediction. 

    These operational algorithms for TROPICS and Aqua are based on classic “shallow” neural networks to maximize speed and simplicity, creating a one-dimensional vertical profile for each spectral measurement collected by the instrument over each location. While this approach has improved observations of the atmosphere down to the surface overall, including the PBL, laboratory staff determined that newer “deep” learning techniques that treat the atmosphere over a region of interest as a three-dimensional image are needed to improve PBL details further.
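
    The distinction between the two approaches can be made concrete with a toy example. The snippet below is only a schematic contrast, not Lincoln Laboratory’s retrieval code, and every dimension in it is invented: a shallow network maps each spectral measurement to a single vertical profile, while a deep 3D convolutional network operates on the atmosphere over a whole region at once.

    ```python
    # Schematic contrast only (not the operational retrieval algorithms); all
    # sizes are made up for illustration.
    import torch
    from torch import nn

    n_channels = 12   # hypothetical spectral channels per satellite footprint
    n_levels = 40     # hypothetical retrieved vertical levels

    # "Shallow" approach: one vertical profile per footprint, no spatial context.
    shallow = nn.Sequential(
        nn.Linear(n_channels, 64),
        nn.ReLU(),
        nn.Linear(64, n_levels),
    )
    profile = shallow(torch.randn(1, n_channels))          # shape: (1, 40)

    # "Deep" approach: treat the atmosphere over a region as a 3D image
    # (level x latitude x longitude) and refine it with 3D convolutions, so
    # neighboring columns and levels can sharpen details such as the PBL.
    deep = nn.Sequential(
        nn.Conv3d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.Conv3d(16, 1, kernel_size=3, padding=1),
    )
    first_guess = torch.randn(1, 1, n_levels, 32, 32)      # e.g., stacked shallow retrievals
    refined = deep(first_guess)                            # same (1, 1, 40, 32, 32) volume
    ```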

    “We hypothesized that deep learning and artificial intelligence (AI) techniques could improve on current approaches by incorporating a better statistical representation of 3D temperature and humidity imagery of the atmosphere into the solutions,” Milstein says. “But it took a while to figure out how to create the best dataset — a mix of real and simulated data — that we needed to prepare to train these techniques.”

    The team collaborated with Joseph Santanello of the NASA Goddard Space Flight Center and William Blackwell, also of the Applied Space Systems Group, in a recent NASA-funded effort showing that these retrieval algorithms can improve PBL detail, including more accurate determination of the PBL height than the previous state of the art. 

    While improved knowledge of the PBL is broadly useful for increasing understanding of climate and weather, one key application is prediction of droughts. According to a Global Drought Snapshot report released last year, droughts are a pressing planetary issue that the global community needs to address. Lack of humidity near the surface, specifically at the level of the PBL, is the leading indicator of drought. While previous studies using remote-sensing techniques have examined the humidity of soil to determine drought risk, studying the atmosphere can help predict when droughts will happen.  

    In an effort funded by Lincoln Laboratory’s Climate Change Initiative, Milstein, along with laboratory staff member Michael Pieper, is working with scientists at NASA’s Jet Propulsion Laboratory (JPL) to use neural network techniques to improve drought prediction over the continental United States. While the work builds on existing operational work JPL has done incorporating (in part) the laboratory’s operational “shallow” neural network approach for Aqua, the team believes that this work and the PBL-focused deep learning research can be combined to further improve the accuracy of drought prediction. 

    “Lincoln Laboratory has been working with NASA for more than a decade on neural network algorithms for estimating temperature and humidity in the atmosphere from space-borne infrared and microwave instruments, including those on the Aqua spacecraft,” Milstein says. “Over that time, we have learned a lot about this problem by working with the science community, including learning about what scientific challenges remain. Our long experience working on this type of remote sensing with NASA scientists, as well as our experience with using neural network techniques, gave us a unique perspective.”

    According to Milstein, the next step for this project is to compare the deep learning results to datasets from the National Oceanic and Atmospheric Administration, NASA, and the Department of Energy collected directly in the PBL using radiosondes, a type of instrument flown on a weather balloon. “These direct measurements can be considered a kind of ‘ground truth’ to quantify the accuracy of the techniques we have developed,” Milstein says.

    This improved neural network approach holds promise to demonstrate drought prediction that can exceed the capabilities of existing indicators, Milstein says, and to be a tool that scientists can rely on for decades to come.

  • Growing our donated organ supply

    For those in need of one, an organ transplant is a matter of life and death. 

    Every year, the medical procedure gives thousands of people with advanced or end-stage diseases extended life. This “second chance” is heavily dependent on the availability, compatibility, and proximity of a precious resource that can’t be simply bought, grown, or manufactured — at least not yet.

    Instead, organs must be given — cut from one body and implanted into another. And because living organ donation is only viable in certain cases, many organs are only available for donation after the donor’s death.

    Unsurprisingly, the logistical and ethical complexity of distributing a limited number of transplant organs to a growing wait list of patients has received much attention. There’s an important part of the process that has received less focus, however, and which may hold significant untapped potential: organ procurement itself.

    “If you have a donated organ, who should you give it to? This question has been extensively studied in operations research, economics, and even applied computer science,” says Hammaad Adam, a graduate student in the Social and Engineering Systems (SES) doctoral program at the MIT Institute for Data, Systems, and Society (IDSS). “But there’s been a lot less research on where that organ comes from in the first place.”

    In the United States, nonprofits called organ procurement organizations, or OPOs, are responsible for finding and evaluating potential donors, interacting with grieving families and hospital administrations, and recovering and delivering organs — all while following the federal laws that serve as both their mandate and guardrails. Recent studies estimate that obstacles and inefficiencies lead to thousands of organs going uncollected every year, even as the demand for transplants continues to grow.

    “There’s been little transparent data on organ procurement,” argues Adam. Working with MIT computer science professors Marzyeh Ghassemi and Ashia Wilson, and in collaboration with stakeholders in organ procurement, Adam led a project to create a dataset called ORCHID: Organ Retrieval and Collection of Health Information for Donation. ORCHID contains a decade of clinical, financial, and administrative data from six OPOs.

    “Our goal is for the ORCHID database to have an impact in how organ procurement is understood, internally and externally,” says Ghassemi.

    Efficiency and equity 

    It was looking to make an impact that drew Adam to SES and MIT. With a background in applied math and experience in strategy consulting, solving problems with technical components sits right in his wheelhouse.

    “I really missed challenging technical problems from a statistics and machine learning standpoint,” he says of his time in consulting. “So I went back and got a master’s in data science, and over the course of my master’s got involved in a bunch of academic research projects in a few different fields, including biology, management science, and public policy. What I enjoyed most were some of the more social science-focused projects that had immediate impact.”

    As a grad student in SES, Adam’s research focuses on using statistical tools to uncover health-care inequities, and developing machine learning approaches to address them. “Part of my dissertation research focuses on building tools that can improve equity in clinical trials and other randomized experiments,” he explains.

    One recent example of Adam’s work: developing a novel method to stop clinical trials early if the treatment has an unintended harmful effect for a minority group of participants. “I’ve also been thinking about ways to increase minority representation in clinical trials through improved patient recruitment,” he adds.

    Racial inequities in health care extend into organ transplantation, where a majority of wait-listed patients are not white — far out of proportion to these groups’ share of the overall population. Organ donations from many of these communities are also lower, due to obstacles that need to be better understood if they are to be overcome. 

    “My work in organ transplantation began on the allocation side,” explains Adam. “In work under review, we examined the role of race in the acceptance of heart, liver, and lung transplant offers by physicians on behalf of their patients. We found that Black race of the patient was associated with significantly lower odds of organ offer acceptance — in other words, transplant doctors seemed more likely to turn down organs offered to Black patients. This trend may have multiple explanations, but it is nevertheless concerning.”

    Adam’s research has also found that donor-candidate race match was associated with significantly higher odds of offer acceptance, an association that Adam says “highlights the importance of organ donation from racial minority communities, and has motivated our work on equitable organ procurement.”

    Working with Ghassemi through the IDSS Initiative on Combatting Systemic Racism, Adam was introduced to OPO stakeholders looking to collaborate. “It’s this opportunity to impact not only health-care efficiency, but also health-care equity, that really got me interested in this research,” says Adam.

    MIT Initiative on Combatting Systemic Racism – Healthcare. Video: IDSS

    Making an impact

    Creating a database like ORCHID means solving problems in multiple domains, from the technical to the political. Some efforts never clear the first hurdle: obtaining the data at all. Thankfully, several OPOs were already seeking collaborations and looking to improve their performance.

    “We have been lucky to have a strong partnership with the OPOs, and we hope to work together to find important insights to improve efficiency and equity,” says Ghassemi.

    The value of a database like ORCHID is in its potential for generating new insights, especially through quantitative analysis with statistics and computing tools like machine learning. The potential value in ORCHID was recognized with an MIT Prize for Open Data, an MIT Libraries award highlighting the importance and impact of research data that is openly shared.

    “It’s nice that the work got some recognition,” says Adam of the prize. “And it was cool to see some of the other great open data work that’s happening at MIT. I think there’s real impact in releasing publicly available data in an important and understudied domain.”

    All the same, Adam knows that building the database is only the first step.

    “I’m very interested in understanding the bottlenecks in the organ procurement process,” he explains. “As part of my thesis research, I’m exploring this by modeling OPO decision-making using causal inference and structural econometrics.”

    Using insights from this research, Adam also aims to evaluate policy changes that can improve both equity and efficiency in organ procurement. “And we’re hoping to recruit more OPOs, and increase the amount of data we’re releasing,” he says. “The dream state is every OPO joins our collaboration and provides updated data every year.”

    Adam is excited to see how other researchers might use the data to address inefficiencies in organ procurement. “Every organ donor saves between three and four lives,” he says. “So every research project that comes out of this dataset could make a real impact.”

  • AI generates high-quality images 30 times faster in a single step

    In our current age of artificial intelligence, computers can generate their own “art” by way of diffusion models, iteratively adding structure to a noisy initial state until a clear image or video emerges. Diffusion models have suddenly grabbed a seat at everyone’s table: Enter a few words and experience instantaneous, dopamine-spiking dreamscapes at the intersection of reality and fantasy. Behind the scenes, however, each image is the product of a complex, time-intensive process requiring numerous iterations for the algorithm to perfect it.

    MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have introduced a new framework that simplifies the multi-step process of traditional diffusion models into a single step, addressing previous limitations. This is done through a type of teacher-student model: teaching a new computer model to mimic the behavior of more complicated, original models that generate images. The approach, known as distribution matching distillation (DMD), retains the quality of the generated images and allows for much faster generation. 

    “Our work is a novel method that accelerates current diffusion models such as Stable Diffusion and DALL-E 3 by 30 times,” says Tianwei Yin, an MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and the lead researcher on the DMD framework. “This advancement not only significantly reduces computational time but also retains, if not surpasses, the quality of the generated visual content. Theoretically, the approach marries the principles of generative adversarial networks (GANs) with those of diffusion models, achieving visual content generation in a single step — a stark contrast to the hundred steps of iterative refinement required by current diffusion models. It could potentially be a new generative modeling method that excels in speed and quality.”

    This single-step diffusion model could enhance design tools, enabling quicker content creation and potentially supporting advancements in drug discovery and 3D modeling, where promptness and efficacy are key.

    Distribution dreams

    DMD has two components. First, it uses a regression loss, which anchors the mapping to ensure a coarse organization of the space of images and makes training more stable. Second, it uses a distribution matching loss, which ensures that the probability of generating a given image with the student model corresponds to how frequently it occurs in the real world. To do this, it leverages two diffusion models that act as guides, helping the system understand the difference between real and generated images and making it possible to train the speedy one-step generator.

    The system achieves faster generation by training a new network to minimize the distribution divergence between its generated images and those from the training dataset used by traditional diffusion models. “Our key insight is to approximate gradients that guide the improvement of the new model using two diffusion models,” says Yin. “In this way, we distill the knowledge of the original, more complex model into the simpler, faster one, while bypassing the notorious instability and mode collapse issues in GANs.” 
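
    In rough pseudocode, the two training signals described above might look like the sketch below. This is a heavily simplified paraphrase rather than the authors’ implementation: the model names are placeholders, the noising step and loss weighting are reduced to their simplest form, and the concurrent training of the “fake” score model on generator outputs is omitted.

    ```python
    # Heavily simplified sketch of the two DMD training signals; not the paper's code.
    import torch
    import torch.nn.functional as F

    def dmd_losses(one_step_generator, teacher_score, fake_score, noise, paired_targets):
        x_gen = one_step_generator(noise)

        # 1) Regression loss: anchor the generator to precomputed teacher outputs
        #    for the same noise, giving the image space a coarse organization.
        reg_loss = F.mse_loss(x_gen, paired_targets)

        # 2) Distribution matching loss: perturb the generated images, then compare
        #    the denoising directions of the "real" (teacher) score model and a
        #    "fake" score model fit to generator outputs. Their difference
        #    approximates the gradient pulling the generated distribution toward
        #    the real one.
        t = torch.randint(1, 1000, (x_gen.shape[0],))
        sigma = (t.float() / 1000).view(-1, 1, 1, 1)
        x_noisy = x_gen + sigma * torch.randn_like(x_gen)    # simplified forward noising
        with torch.no_grad():
            grad = fake_score(x_noisy, t) - teacher_score(x_noisy, t)
        # Surrogate term whose gradient with respect to x_gen equals `grad`.
        dm_loss = (grad * x_gen).sum()

        return reg_loss + dm_loss
    ```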

    Yin and colleagues used pre-trained networks for the new student model, simplifying the process. By copying and fine-tuning parameters from the original models, the team achieved fast training convergence of the new model, which is capable of producing high-quality images with the same architectural foundation. “This enables combining with other system optimizations based on the original architecture to further accelerate the creation process,” adds Yin. 

    When put to the test against the usual methods, using a wide range of benchmarks, DMD showed consistent performance. On the popular benchmark of generating images based on specific classes on ImageNet, DMD is the first one-step diffusion technique that churns out pictures pretty much on par with those from the original, more complex models, rocking a super-close Fréchet inception distance (FID) score of just 0.3, which is impressive, since FID is all about judging the quality and diversity of generated images. Furthermore, DMD excels in industrial-scale text-to-image generation and achieves state-of-the-art one-step generation performance. There’s still a slight quality gap when tackling trickier text-to-image applications, suggesting there’s a bit of room for improvement down the line. 

    Additionally, the performance of the DMD-generated images is intrinsically linked to the capabilities of the teacher model used during the distillation process. In the current form, which uses Stable Diffusion v1.5 as the teacher model, the student inherits limitations such as rendering detailed depictions of text and small faces, suggesting that DMD-generated images could be further enhanced by more advanced teacher models. 

    “Decreasing the number of iterations has been the Holy Grail in diffusion models since their inception,” says Fredo Durand, MIT professor of electrical engineering and computer science, CSAIL principal investigator, and a lead author on the paper. “We are very excited to finally enable single-step image generation, which will dramatically reduce compute costs and accelerate the process.” 

    “Finally, a paper that successfully combines the versatility and high visual quality of diffusion models with the real-time performance of GANs,” says Alexei Efros, a professor of electrical engineering and computer science at the University of California at Berkeley who was not involved in this study. “I expect this work to open up fantastic possibilities for high-quality real-time visual editing.” 

    Yin and Durand’s fellow authors are MIT electrical engineering and computer science professor and CSAIL principal investigator William T. Freeman, as well as Adobe research scientists Michaël Gharbi SM ’15, PhD ’18; Richard Zhang; Eli Shechtman; and Taesung Park. Their work was supported, in part, by U.S. National Science Foundation grants (including one for the Institute for Artificial Intelligence and Fundamental Interactions), the Singapore Defense Science and Technology Agency, and by funding from Gwangju Institute of Science and Technology and Amazon. Their work will be presented at the Conference on Computer Vision and Pattern Recognition in June.

  • Exploring the cellular neighborhood

    Cells rely on complex molecular machines composed of protein assemblies to perform essential functions such as energy production, gene expression, and protein synthesis. To better understand how these machines work, scientists capture snapshots of them by isolating proteins from cells and using various methods to determine their structures. However, isolating proteins from cells also removes them from the context of their native environment, including protein interaction partners and cellular location.

    Recently, cryogenic electron tomography (cryo-ET) has emerged as a way to observe proteins in their native environment by imaging frozen cells at different angles to obtain three-dimensional structural information. This approach is exciting because it allows researchers to directly observe how and where proteins associate with each other, revealing the cellular neighborhood of those interactions within the cell.

    With the technology available to image proteins in their native environment, MIT graduate student Barrett Powell wondered if he could take it one step further: What if molecular machines could be observed in action? In a paper published March 8 in Nature Methods, Powell describes the method he developed, called tomoDRGN, for modeling structural differences of proteins in cryo-ET data that arise from protein motions or proteins binding to different interaction partners. These variations are known as structural heterogeneity. 

    Although Powell had joined the lab of MIT associate professor of biology Joey Davis as an experimental scientist, he recognized the potential impact of computational approaches in understanding structural heterogeneity within a cell. Previously, the Davis Lab developed a related methodology named cryoDRGN to understand structural heterogeneity in purified samples. As Powell and Davis saw cryo-ET rising in prominence in the field, Powell took on the challenge of re-imagining this framework to work in cells.

    When solving structures with purified samples, each particle is imaged only once. By contrast, cryo-ET data is collected by imaging each particle more than 40 times from different angles. That meant tomoDRGN needed to be able to merge the information from more than 40 images, which was where the project hit a roadblock: the amount of data led to an information overload.

    To address this, Powell successfully rebuilt the cryoDRGN model to prioritize only the highest-quality data. When imaging the same particle multiple times, radiation damage occurs. The images acquired earlier, therefore, tend to be of higher quality because the particles are less damaged.

    “By excluding some of the lower-quality data, the results were actually better than using all of the data — and the computational performance was substantially faster,” Powell says.
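
    Conceptually, that prioritization can be as simple as keeping only the earliest images in each particle’s tilt series, since those have accumulated the least electron dose. The sketch below illustrates the idea only; it is not tomoDRGN code, and the data layout and cutoff are assumptions.

    ```python
    # Illustrative only (not tomoDRGN): keep the earliest, least-damaged tilt
    # images for each particle before training on them.
    import numpy as np

    def keep_highest_quality_tilts(tilt_series, max_tilts=8):
        """tilt_series: list of (acquisition_order, image) pairs for one particle."""
        # Earlier acquisitions have received less cumulative electron dose,
        # so sort by acquisition order and keep only the first few images.
        ordered = sorted(tilt_series, key=lambda pair: pair[0])
        return [image for _, image in ordered[:max_tilts]]

    # Example: a particle imaged 41 times; train on only its first 8 views.
    particle = [(int(i), np.random.rand(64, 64)) for i in np.random.permutation(41)]
    selected = keep_highest_quality_tilts(particle)
    print(len(selected))  # 8
    ```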

    Just as Powell was beginning work on testing his model, he had a stroke of luck: The authors of a groundbreaking new study that visualized, for the first time, ribosomes inside cells at near-atomic resolution, shared their raw data on the Electron Microscopy Public Image Archive (EMPIAR). This dataset was an exemplary test case for Powell, through which he demonstrated that tomoDRGN could uncover structural heterogeneity within cryo-ET data. 

    According to Powell, one exciting result is what tomoDRGN found surrounding a subset of ribosomes in the EMPIAR dataset. Some of the ribosomal particles were associated with a bacterial cell membrane and engaged in a process called cotranslational translocation. This occurs when a protein is being simultaneously synthesized and transported across a membrane. Researchers can use this result to make new hypotheses about how the ribosome functions with other protein machinery integral to transporting proteins outside of the cell, now guided by a structure of the complex in its native environment. 

    After seeing that tomoDRGN could resolve structural heterogeneity from a structurally diverse dataset, Powell was curious: How small of a population could tomoDRGN identify? For that test, he chose a protein named apoferritin, which is a commonly used benchmark for cryo-ET and is often treated as structurally homogeneous. Ferritin is a protein used for iron storage and is referred to as apoferritin when it lacks iron.

    Surprisingly, in addition to the expected particles, tomoDRGN revealed a previously unreported minor population of iron-bound ferritin particles, making up just 2 percent of the dataset. This result further demonstrated tomoDRGN’s ability to identify structural states that occur so infrequently that they would be averaged out of a 3D reconstruction. 

    Powell and other members of the Davis Lab are excited to see how tomoDRGN can be applied to further ribosomal studies and to other systems. Davis works on understanding how cells assemble, regulate, and degrade molecular machines, so the next steps include exploring ribosome biogenesis within cells in greater detail using this new tool.

    “What are the possible states that we may be losing during purification?” Davis asks. “Perhaps more excitingly, we can look at how they localize within the cell and what partners and protein complexes they may be interacting with.”

  • Using generative AI to improve software testing

    Generative AI is getting plenty of attention for its ability to create text and images. But those media represent only a fraction of the data that proliferate in our society today. Data are generated every time a patient goes through a medical system, a storm impacts a flight, or a person interacts with a software application.

    Using generative AI to create realistic synthetic data around those scenarios can help organizations more effectively treat patients, reroute planes, or improve software platforms — especially in scenarios where real-world data are limited or sensitive.

    For the last three years, the MIT spinout DataCebo has offered a generative software system called the Synthetic Data Vault to help organizations create synthetic data to do things like test software applications and train machine learning models.

    The Synthetic Data Vault, or SDV, has been downloaded more than 1 million times, with more than 10,000 data scientists using the open-source library for generating synthetic tabular data. The founders — Principal Research Scientist Kalyan Veeramachaneni and alumna Neha Patki ’15, SM ’16 — believe the company’s success is due to SDV’s ability to revolutionize software testing.

    SDV goes viral

    In 2016, Veeramachaneni’s group in the Data to AI Lab unveiled a suite of open-source generative AI tools to help organizations create synthetic data that matched the statistical properties of real data.

    Companies can use synthetic data instead of sensitive information in programs while still preserving the statistical relationships between datapoints. Companies can also use synthetic data to run new software through simulations to see how it performs before releasing it to the public.

    Veeramachaneni’s group came across the problem because it was working with companies that wanted to share their data for research.

    “MIT helps you see all these different use cases,” Patki explains. “You work with finance companies and health care companies, and all those projects are useful to formulate solutions across industries.”

    In 2020, the researchers founded DataCebo to build more SDV features for larger organizations. Since then, the use cases have been as impressive as they’ve been varied.

    With DataCebo’s new flight simulator, for instance, airlines can plan for rare weather events in a way that would be impossible using only historic data. In another application, SDV users synthesized medical records to predict health outcomes for patients with cystic fibrosis. A team from Norway recently used SDV to create synthetic student data to evaluate whether various admissions policies were meritocratic and free from bias.

    In 2021, the data science platform Kaggle hosted a competition for data scientists that used SDV to create synthetic data sets to avoid using proprietary data. Roughly 30,000 data scientists participated, building solutions and predicting outcomes based on the company’s realistic data.

    And as DataCebo has grown, it’s stayed true to its MIT roots: All of the company’s current employees are MIT alumni.

    Supercharging software testing

    Although its open-source tools are being used for a variety of use cases, the company is focused on growing its traction in software testing.

    “You need data to test these software applications,” Veeramachaneni says. “Traditionally, developers manually write scripts to create synthetic data. With generative models, created using SDV, you can learn from a sample of data collected and then sample a large volume of synthetic data (which has the same properties as real data), or create specific scenarios and edge cases, and use the data to test your application.”

    For example, if a bank wanted to test a program designed to reject transfers from accounts with no money in them, it would have to simulate many accounts simultaneously transacting. Doing that with data created manually would take a lot of time. With DataCebo’s generative models, customers can create any edge case they want to test.
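
    A minimal sketch of that workflow using the open-source SDV library is shown below. The table, column names, and the zero-balance edge case are invented for illustration, and API details can differ between SDV releases.

    ```python
    # Minimal, hypothetical example of learning from a real table and sampling
    # synthetic rows plus a targeted edge case; check the SDV docs for the exact
    # API of the release you are using.
    import pandas as pd
    from sdv.metadata import SingleTableMetadata
    from sdv.single_table import GaussianCopulaSynthesizer
    from sdv.sampling import Condition

    real_df = pd.DataFrame({
        "account_id": range(1000),
        "balance": [0.0 if i % 50 == 0 else 100.0 + i for i in range(1000)],
        "transfers_last_month": [i % 7 for i in range(1000)],
    })

    # Learn the statistical structure of the (possibly sensitive) real table.
    metadata = SingleTableMetadata()
    metadata.detect_from_dataframe(data=real_df)
    synthesizer = GaussianCopulaSynthesizer(metadata)
    synthesizer.fit(real_df)

    # Sample a large volume of synthetic data with similar statistical properties.
    synthetic_df = synthesizer.sample(num_rows=10_000)

    # Target a specific edge case for testing: many accounts with no money in them.
    empty_accounts = synthesizer.sample_from_conditions(
        conditions=[Condition(column_values={"balance": 0.0}, num_rows=500)]
    )
    ```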

    “It’s common for industries to have data that is sensitive in some capacity,” Patki says. “Often when you’re in a domain with sensitive data you’re dealing with regulations, and even if there aren’t legal regulations, it’s in companies’ best interest to be diligent about who gets access to what at which time. So, synthetic data is always better from a privacy perspective.”

    Scaling synthetic data

    Veeramachaneni believes DataCebo is advancing the field of what it calls synthetic enterprise data, or data generated from user behavior on large companies’ software applications.

    “Enterprise data of this kind is complex, and there is no universal availability of it, unlike language data,” Veeramachaneni says. “When folks use our publicly available software and report back whether it works on a certain pattern, we learn a lot of these unique patterns, and it allows us to improve our algorithms. From one perspective, we are building a corpus of these complex patterns, which for language and images is readily available.”

    DataCebo also recently released features to improve SDV’s usefulness, including the SDMetrics library, which assesses the “realism” of the generated data, and SDGym, a tool for comparing models’ performance.

    “It’s about ensuring organizations trust this new data,” Veeramachaneni says. “[Our tools offer] programmable synthetic data, which means we allow enterprises to insert their specific insight and intuition to build more transparent models.”

    As companies in every industry rush to adopt AI and other data science tools, DataCebo is ultimately helping them do so in a way that is more transparent and responsible.

    “In the next few years, synthetic data from generative models will transform all data work,” Veeramachaneni says. “We believe 90 percent of enterprise operations can be done with synthetic data.”

  • Dealing with the limitations of our noisy world

    Tamara Broderick first set foot on MIT’s campus when she was a high school student, as a participant in the inaugural Women’s Technology Program. The monthlong summer academic experience gives young women a hands-on introduction to engineering and computer science.

    What is the probability that she would return to MIT years later, this time as a faculty member?

    That’s a question Broderick could probably answer quantitatively using Bayesian inference, a statistical approach to probability that tries to quantify uncertainty by continuously updating one’s assumptions as new data are obtained.
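
    As a toy example of that updating process (unrelated to Broderick’s research), consider estimating a coin’s bias with a Beta prior that is revised each time new flips arrive; the posterior mean is the best guess, and the shrinking posterior variance quantifies how well it is known.

    ```python
    # Toy Bayesian updating: a Beta(2, 2) prior on a coin's bias, updated in
    # batches of made-up flips (1 = heads, 0 = tails).
    heads, tails = 2.0, 2.0

    for batch in ([1, 1, 0], [1, 0, 0, 0], [1, 1, 1, 0]):
        heads += sum(batch)
        tails += len(batch) - sum(batch)
        mean = heads / (heads + tails)
        # Posterior variance of a Beta distribution; it shrinks as data accumulate.
        var = (heads * tails) / ((heads + tails) ** 2 * (heads + tails + 1))
        print(f"posterior mean = {mean:.3f}, posterior variance = {var:.5f}")
    ```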

    In her lab at MIT, the newly tenured associate professor in the Department of Electrical Engineering and Computer Science (EECS) uses Bayesian inference to quantify uncertainty and measure the robustness of data analysis techniques.

    “I’ve always been really interested in understanding not just ‘What do we know from data analysis,’ but ‘How well do we know it?’” says Broderick, who is also a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society. “The reality is that we live in a noisy world, and we can’t always get exactly the data that we want. How do we learn from data but at the same time recognize that there are limitations and deal appropriately with them?”

    Broadly, her focus is on helping people understand the confines of the statistical tools available to them and, sometimes, working with them to craft better tools for a particular situation.

    For instance, her group recently collaborated with oceanographers to develop a machine-learning model that can make more accurate predictions about ocean currents. In another project, she and others worked with degenerative disease specialists on a tool that helps severely motor-impaired individuals utilize a computer’s graphical user interface by manipulating a single switch.

    A common thread woven through her work is an emphasis on collaboration.

    “Working in data analysis, you get to hang out in everybody’s backyard, so to speak. You really can’t get bored because you can always be learning about some other field and thinking about how we can apply machine learning there,” she says.

    Hanging out in many academic “backyards” is especially appealing to Broderick, who struggled even from a young age to narrow down her interests.

    A math mindset

    Growing up in a suburb of Cleveland, Ohio, Broderick had an interest in math for as long as she can remember. She recalls being fascinated by the idea of what would happen if you kept adding a number to itself, starting with 1+1=2 and then 2+2=4.

    “I was maybe 5 years old, so I didn’t know what ‘powers of two’ were or anything like that. I was just really into math,” she says.

    Her father recognized her interest in the subject and enrolled her in a Johns Hopkins program called the Center for Talented Youth, which gave Broderick the opportunity to take three-week summer classes on a range of subjects, from astronomy to number theory to computer science.

    Later, in high school, she conducted astrophysics research with a postdoc at Case Western Reserve University. In the summer of 2002, she spent four weeks at MIT as a member of the first class of the Women’s Technology Program.

    She especially enjoyed the freedom offered by the program, and its focus on using intuition and ingenuity to achieve high-level goals. For instance, the cohort was tasked with building a device with LEGOs that they could use to biopsy a grape suspended in Jell-O.

    The program showed her how much creativity is involved in engineering and computer science, and piqued her interest in pursuing an academic career.

    “But when I got into college at Princeton, I could not decide — math, physics, computer science — they all seemed super-cool. I wanted to do all of it,” she says.

    She settled on pursuing an undergraduate math degree but took all the physics and computer science courses she could cram into her schedule.

    Digging into data analysis

    After receiving a Marshall Scholarship, Broderick spent two years at Cambridge University in the United Kingdom, earning a master of advanced study in mathematics and a master of philosophy in physics.

    In the UK, she took a number of statistics and data analysis classes, including her first class on Bayesian data analysis in the field of machine learning.

    It was a transformative experience, she recalls.

    “During my time in the U.K., I realized that I really like solving real-world problems that matter to people, and Bayesian inference was being used in some of the most important problems out there,” she says.

    Back in the U.S., Broderick headed to the University of California at Berkeley, where she joined the lab of Professor Michael I. Jordan as a grad student. She earned a PhD in statistics with a focus on Bayesian data analysis. 

    She decided to pursue a career in academia and was drawn to MIT by the collaborative nature of the EECS department and by how passionate and friendly her would-be colleagues were.

    Her first impressions panned out, and Broderick says she has found a community at MIT that helps her be creative and explore hard, impactful problems with wide-ranging applications.

    “I’ve been lucky to work with a really amazing set of students and postdocs in my lab — brilliant and hard-working people whose hearts are in the right place,” she says.

    One of her team’s recent projects involves a collaboration with an economist who studies the use of microcredit, or the lending of small amounts of money at very low interest rates, in impoverished areas.

    The goal of microcredit programs is to raise people out of poverty. Economists run randomized controlled trials of villages in a region that receive or don’t receive microcredit. They want to generalize the study results, predicting the expected outcome if microcredit were applied to other villages outside of their study.

    But Broderick and her collaborators have found that results of some microcredit studies can be very brittle. Removing one or a few data points from the dataset can completely change the results. One issue is that researchers often use empirical averages, where a few very high or low data points can skew the results.

    Using machine learning, she and her collaborators developed a method that can determine how many data points must be dropped to change the substantive conclusion of the study. With their tool, a scientist can see how brittle the results are.

    “Sometimes dropping a very small fraction of data can change the major results of a data analysis, and then we might worry how far those conclusions generalize to new scenarios. Are there ways we can flag that for people? That is what we are getting at with this work,” she explains.
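
    The flavor of that question can be seen in a deliberately crude version of the check, sketched below on synthetic numbers. Broderick and her collaborators’ actual method relies on fast approximations rather than this greedy loop, and it handles more general analyses than a difference in means.

    ```python
    # Crude illustration with synthetic data: how many of the most influential
    # observations must be dropped before a positive "treatment effect"
    # (difference in means) is no longer positive?
    import numpy as np

    rng = np.random.default_rng(0)
    treatment = rng.normal(0.15, 1.0, size=500)   # synthetic outcomes with microcredit
    control = rng.normal(0.00, 1.0, size=500)     # synthetic outcomes without

    effect = treatment.mean() - control.mean()
    print(f"estimated effect: {effect:.3f}")

    # Greedy variant that only considers dropping the largest treatment outcomes,
    # which push the estimated effect upward the most.
    remaining = np.sort(treatment)                # ascending; largest values last
    dropped = 0
    while remaining.size > 0 and remaining.mean() - control.mean() > 0:
        remaining = remaining[:-1]
        dropped += 1

    total = treatment.size + control.size
    print(f"dropping {dropped} of {total} data points flips the conclusion")
    ```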

    At the same time, she is continuing to collaborate with researchers in a range of fields, such as genetics, to understand the pros and cons of different machine-learning techniques and other data analysis tools.

    Happy trails

    Exploration is what drives Broderick as a researcher, and it also fuels one of her passions outside the lab. She and her husband enjoy collecting patches they earn by hiking all the trails in a park or trail system.

    “I think my hobby really combines my interests of being outdoors and spreadsheets,” she says. “With these hiking patches, you have to explore everything and then you see areas you wouldn’t normally see. It is adventurous, in that way.”

    They’ve discovered some amazing hikes they would never have known about, but also embarked on more than a few “total disaster hikes,” she says. But each hike, whether a hidden gem or an overgrown mess, offers its own rewards.

    And just like in her research, curiosity, open-mindedness, and a passion for problem-solving have never led her astray.

  • Startup accelerates progress toward light-speed computing

    Our ability to cram ever-smaller transistors onto a chip has enabled today’s age of ubiquitous computing. But that approach is finally running into limits, with some experts declaring an end to Moore’s Law and a related principle known as Dennard scaling.

    Those developments couldn’t be coming at a worse time. Demand for computing power has skyrocketed in recent years thanks in large part to the rise of artificial intelligence, and it shows no signs of slowing down.

    Now Lightmatter, a company founded by three MIT alumni, is continuing the remarkable progress of computing by rethinking the lifeblood of the chip. Instead of relying solely on electricity, the company also uses light for data processing and transport. The company’s first two products, a chip specializing in artificial intelligence operations and an interconnect that facilitates data transfer between chips, use both photons and electrons to drive more efficient operations.

    “The two problems we are solving are ‘How do chips talk?’ and ‘How do you do these [AI] calculations?’” Lightmatter co-founder and CEO Nicholas Harris PhD ’17 says. “With our first two products, Envise and Passage, we’re addressing both of those questions.”

    In a nod to the size of the problem and the demand for AI, Lightmatter raised just north of $300 million in 2023 at a valuation of $1.2 billion. Now the company is demonstrating its technology with some of the largest technology companies in the world in hopes of reducing the massive energy demand of data centers and AI models.

    “We’re going to enable platforms on top of our interconnect technology that are made up of hundreds of thousands of next-generation compute units,” Harris says. “That simply wouldn’t be possible without the technology that we’re building.”

    From idea to $100K

    Prior to MIT, Harris worked at the semiconductor company Micron Technology, where he studied the fundamental devices behind integrated chips. The experience made him see how the traditional approach for improving computer performance — cramming more transistors onto each chip — was hitting its limits.

    “I saw how the roadmap for computing was slowing, and I wanted to figure out how I could continue it,” Harris says. “What approaches can augment computers? Quantum computing and photonics were two of those pathways.”

    Harris came to MIT to work on photonic quantum computing for his PhD under Dirk Englund, an associate professor in the Department of Electrical Engineering and Computer Science. As part of that work, he built silicon-based integrated photonic chips that could send and process information using light instead of electricity.

    The work led to dozens of patents and more than 80 research papers in prestigious journals like Nature. But another technology also caught Harris’s attention at MIT.

    “I remember walking down the hall and seeing students just piling out of these auditorium-sized classrooms, watching relayed live videos of lectures to see professors teach deep learning,” Harris recalls, referring to the artificial intelligence technique. “Everybody on campus knew that deep learning was going to be a huge deal, so I started learning more about it, and we realized that the systems I was building for photonic quantum computing could actually be leveraged to do deep learning.”

    Harris had planned to become a professor after his PhD, but he realized he could attract more funding and innovate more quickly through a startup, so he teamed up with Darius Bunandar PhD ’18, who was also studying in Englund’s lab, and Thomas Graham MBA ’18. The co-founders successfully launched into the startup world by winning the 2017 MIT $100K Entrepreneurship Competition.

    Seeing the light

    Lightmatter’s Envise chip takes the part of computing that electrons do well, like memory, and combines it with what light does well, like performing the massive matrix multiplications of deep-learning models.

    “With photonics, you can perform multiple calculations at the same time because the data is coming in on different colors of light,” Harris explains. “In one color, you could have a photo of a dog. In another color, you could have a photo of a cat. In another color, maybe a tree, and you could have all three of those operations going through the same optical computing unit, this matrix accelerator, at the same time. That drives up operations per area, and it reuses the hardware that’s there, driving up energy efficiency.”
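
    As a purely numerical analogy for the quote above (the photonic hardware performs this in parallel, in the analog domain), the snippet below pushes three different inputs, stand-ins for the “dog,” “cat,” and “tree” examples, through one shared weight matrix in a single batched multiplication. Shapes and values are arbitrary.

    ```python
    # Software analogy only: three inputs reuse the same "matrix accelerator"
    # in one batched matrix multiplication, much as data on different colors of
    # light share one optical computing unit.
    import numpy as np

    rng = np.random.default_rng(1)
    weights = rng.standard_normal((256, 784))    # one shared weight matrix

    dog, cat, tree = (rng.standard_normal(784) for _ in range(3))
    batch = np.stack([dog, cat, tree])           # each row rides its own "color"

    outputs = batch @ weights.T                  # one pass, three results: (3, 256)
    ```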

    Passage leverages light’s latency and bandwidth advantages to link processors in a manner similar to how fiber optic cables use light to send data over long distances. It also enables chips as big as entire wafers to act as a single processor. Sending information between chips is central to running the massive server farms that power cloud computing and run AI systems like ChatGPT.

    Both products are designed to bring energy efficiencies to computing, which Harris says are needed to keep up with rising demand without bringing huge increases in power consumption.

    “By 2040, some predict that around 80 percent of all energy usage on the planet will be devoted to data centers and computing, and AI is going to be a huge fraction of that,” Harris says. “When you look at computing deployments for training these large AI models, they’re headed toward using hundreds of megawatts. Their power usage is on the scale of cities.”

    Lightmatter is currently working with chipmakers and cloud service providers for mass deployment. Harris notes that because the company’s equipment runs on silicon, it can be produced by existing semiconductor fabrication facilities without massive changes in process.

    The ambitious plans are designed to open up a new path forward for computing that would have huge implications for the environment and economy.

    “We’re going to continue looking at all of the pieces of computers to figure out where light can accelerate them, make them more energy efficient, and faster, and we’re going to continue to replace those parts,” Harris says. “Right now, we’re focused on interconnect with Passage and on compute with Envise. But over time, we’re going to build out the next generation of computers, and it’s all going to be centered around light.”