More stories

  •

    Building robots to expand access to cell therapies

    Over the last two years, Multiply Labs has helped pharmaceutical companies produce biologic drugs with its robotic manufacturing platform. The robots can work around the clock, precisely formulating small batches of drugs to help companies run clinical trials more quickly.

    Now Multiply Labs, which was founded by Fred Parietti PhD ’16 and former MIT visiting PhD student Alice Melocchi, is hoping to bring the speed and precision of its robots to a new type of advanced treatment.

    In a recently announced project, Multiply Labs is developing a new robotic manufacturing platform to ease bottlenecks in the creation of cell therapies. These therapies have proven to be a powerful tool in the fight against cancer, but their production is incredibly labor intensive, contributing to their high cost. CAR-T cell therapy, for example, requires scientists to extract blood from a patient, isolate immune cells, genetically engineer those cells, grow the new cells, and inject them back into the patient. In many cases, each of those steps must be repeated for each patient.

    Multiply Labs is attempting to automate many processes that can currently only be done by highly trained scientists, reducing the potential for human error. The platform will also perform some of the most time-consuming tasks of cell therapy production in parallel. For instance, the company’s system will contain multiple bioreactors, which are used to grow the genetically modified cells that will be injected back into the patient. Some labs today only use one bioreactor in each clean room because of the specific environmental conditions that have to be met to optimize cell growth. By running multiple reactors simultaneously in a space about a quarter of the size of a basketball court, the company believes it can multiply the throughput of cell therapy production.

    Multiply Labs has partnered with global life sciences company Cytiva, which provides cell therapy equipment and services, as well as researchers at the University of California San Francisco to bring the platform to market.

    Multiply Labs’ efforts come at a time when demand for cell therapy treatment is expected to explode: There are currently more than 1,000 clinical trials underway to explore the treatment’s potential in a range of diseases. In the few areas where cell therapies are already approved, they have helped cancer patients when other treatment options had failed.

    “These [cell therapy] treatments are needed by millions of people, but only dozens of them can be administered by many centers,” Parietti says. “The real potential we see is enabling pharmaceutical companies to get these treatments approved and manufactured quicker so they can scale to hundreds of thousands — or millions — of patients.”

    A force multiplier

    Multiply Labs’ move into cell therapy is just the latest pivot for the company. The original idea for the startup came from Melocchi, who was a visiting PhD candidate in MIT’s chemical engineering department in 2013 and 2014. Melocchi had been creating drugs by hand in the MIT-Novartis Center for Continuous Manufacturing when she toured Parietti’s space at MIT. Parietti was building robotic limbs for factory workers and people with disabilities at the time, and his workspace was littered with robotic appendages and 3-D printers. Melocchi saw the machines as a way to make personalized drug capsules.

    Parietti developed the first robotic prototype in the kitchen of his Cambridge apartment, and the founders received early funding from the MIT Sandbox Innovation Fund Program.

    After going through the Y Combinator startup accelerator, the founders realized their biggest market would be pharmaceutical companies running clinical trials. Early trials often involve testing drugs of different potencies.

    “Every clinical trial is essentially personalized, because drug developers don’t know the right dosage,” Parietti says.

    Today Multiply Labs’ robotic clusters are being deployed on the production floors of leading pharmaceutical companies. The cloud-based platforms can produce 30,000 drug capsules a day and are modular, so companies can purchase as many systems as they need and run them together. Each system is contained in 15 square feet.

    “Our goal is to be the gold standard for the manufacturing of individualized drugs,” Parietti says. “We believe the future of medicine is going to be individualized drugs made on demand for single patients, and the only way to make those is with robots.”

    Multiply Labs robots handle each step of the drug formulation process.

    Roboticists enter cell therapy

    The move to cell therapy comes after Parietti’s small team of mostly MIT-trained roboticists and engineers spent the last two years learning about cell therapy production separately from its drug capsule work. Earlier this month, the company raised $20 million and is expecting to triple its team.

    Multiply Labs is already working with Cytiva to incorporate the company’s bioreactors into its platform.

    “[Multiply Labs’] automation has broad implications for the industry that include expanding patient access to existing treatments and accelerating the next generation of treatments,” says Cytiva’s Parker Donner, the company’s head of business development for cell and gene therapy.

    Multiply Labs aims to ship a demo to a cell therapy manufacturing facility at UCSF for clinical validation in the next nine months.

    “It really is a great adventure for someone like me, a physician-scientist, to interact with mechanical engineers and see how they think and solve problems,” says Jonathan Esensten, an assistant adjunct professor at UCSF whose research group is being sponsored by Multiply Labs for the project. “I think they have complementary ways of approaching problems compared to my team, and I think it’s going to lead to great things. I’m hopeful we’ll build technologies that push this field forward and bend the cost curve to allow us to do things better, faster, and cheaper. That’s what we need if these really exciting therapies are going to be made widely available.”

    Esensten, whose workspace is also an FDA-compliant cell therapy manufacturing facility, says his research group struggles to produce more than approximately six cell therapies per month.

    “The beauty of the Multiply Labs concept is that it’s modular,” Esensten says. “You could imagine a robot where there are no bottlenecks: You have as much capacity as you need at every step, no matter how long it takes. Of course, there are theoretical limits, but for a given footprint the robot will be able to manufacture many more products than we could do using manual processes in our clean rooms.”

    Parietti thinks Esensten’s lab is a great partner to prove robots can be a game changer for a nascent field with a lot of promise.

    “Cell therapies are amazing in terms of efficacy,” Parietti says. “But right now, they’re made by hand. Scientists are being used for manufacturing; it’s essentially artisanal. That’s not the way to scale. The way we think about it, the more successful we are, the more patients we help.”

  •

    New system cleans messy data tables automatically

    MIT researchers have created a new system that automatically cleans “dirty data” —  the typos, duplicates, missing values, misspellings, and inconsistencies dreaded by data analysts, data engineers, and data scientists. The system, called PClean, is the latest in a series of domain-specific probabilistic programming languages written by researchers at the Probabilistic Computing Project that aim to simplify and automate the development of AI applications (others include one for 3D perception via inverse graphics and another for modeling time series and databases).

    According to surveys conducted by Anaconda and Figure Eight, data cleaning can take a quarter of a data scientist’s time. Automating the task is challenging because different datasets require different types of cleaning, and common-sense judgment calls about objects in the world are often needed (e.g., which of several cities called “Beverly Hills” someone lives in). PClean provides generic common-sense models for these kinds of judgment calls that can be customized to specific databases and types of errors.

    PClean uses a knowledge-based approach to automate the data cleaning process: Users encode background knowledge about the database and what sorts of issues might appear. Take, for instance, the problem of cleaning state names in a database of apartment listings. What if someone said they lived in Beverly Hills but left the state column empty? Though there is a well-known Beverly Hills in California, there’s also one in Florida, Missouri, and Texas … and there’s a neighborhood of Baltimore known as Beverly Hills. How can you know which one the person lives in? This is where PClean’s expressive scripting language comes in. Users can give PClean background knowledge about the domain and about how data might be corrupted. PClean combines this knowledge via common-sense probabilistic reasoning to come up with the answer. For example, given additional knowledge about typical rents, PClean infers the correct Beverly Hills is in California because of the high cost of rent where the respondent lives.
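
    The kind of reasoning PClean automates can be pictured with a back-of-the-envelope Bayesian calculation. The Python sketch below is not PClean’s scripting language or its actual model; the candidate list, priors, and rent statistics are all invented. It only illustrates how background knowledge about typical rents can tip the balance toward one candidate “Beverly Hills” over the others:

```python
import math

# Hypothetical candidate interpretations of "Beverly Hills" with assumed
# priors (rough population-based weights) and assumed typical monthly rents.
candidates = {
    "Beverly Hills, CA": {"prior": 0.70, "mean_rent": 3800, "sd": 900},
    "Beverly Hills, FL": {"prior": 0.15, "mean_rent": 1100, "sd": 300},
    "Beverly Hills, MO": {"prior": 0.05, "mean_rent": 900,  "sd": 250},
    "Beverly Hills, TX": {"prior": 0.10, "mean_rent": 1200, "sd": 350},
}

observed_rent = 3600  # the rent listed in the (otherwise incomplete) record

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Posterior weight for each candidate: prior x likelihood of the observed rent.
unnormalized = {
    name: p["prior"] * normal_pdf(observed_rent, p["mean_rent"], p["sd"])
    for name, p in candidates.items()
}
total = sum(unnormalized.values())
posterior = {name: w / total for name, w in unnormalized.items()}

for name, prob in sorted(posterior.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {prob:.3f}")  # the California entry dominates for this rent
```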

    Alex Lew, the lead author of the paper and a PhD student in the Department of Electrical Engineering and Computer Science (EECS), says he’s most excited that PClean gives a way to enlist help from computers in the same way that people seek help from one another. “When I ask a friend for help with something, it’s often easier than asking a computer. That’s because in today’s dominant programming languages, I have to give step-by-step instructions, which can’t assume that the computer has any context about the world or task — or even just common-sense reasoning abilities. With a human, I get to assume all those things,” he says. “PClean is a step toward closing that gap. It lets me tell the computer what I know about a problem, encoding the same kind of background knowledge I’d explain to a person helping me clean my data. I can also give PClean hints, tips, and tricks I’ve already discovered for solving the task faster.”

    Co-authors are Monica Agrawal, a PhD student in EECS; David Sontag, an associate professor in EECS; and Vikash K. Mansinghka, a principal research scientist in the Department of Brain and Cognitive Sciences.

    What innovations allow this to work? 

    The idea that probabilistic cleaning based on declarative, generative knowledge could potentially deliver much greater accuracy than machine learning was previously suggested in a 2003 paper by Hanna Pasula and others from Stuart Russell’s lab at the University of California at Berkeley. “Ensuring data quality is a huge problem in the real world, and almost all existing solutions are ad-hoc, expensive, and error-prone,” says Russell, professor of computer science at UC Berkeley. “PClean is the first scalable, well-engineered, general-purpose solution based on generative data modeling, which has to be the right way to go. The results speak for themselves.” Co-author Agrawal adds that “existing data cleaning methods are more constrained in their expressiveness, which can be more user-friendly, but at the expense of being quite limiting. Further, we found that PClean can scale to very large datasets that have unrealistic runtimes under existing systems.”

    PClean builds on recent progress in probabilistic programming, including a new AI programming model built at MIT’s Probabilistic Computing Project that makes it much easier to apply realistic models of human knowledge to interpret data. PClean’s repairs are based on Bayesian reasoning, an approach that weighs alternative explanations of ambiguous data by applying probabilities based on prior knowledge to the data at hand. “The ability to make these kinds of uncertain decisions, where we want to tell the computer what kind of things it is likely to see, and have the computer automatically use that in order to figure out what is probably the right answer, is central to probabilistic programming,” says Lew.

    PClean is the first Bayesian data-cleaning system that can combine domain expertise with common-sense reasoning to automatically clean databases of millions of records. PClean achieves this scale via three innovations. First, PClean’s scripting language lets users encode what they know. This yields accurate models, even for complex databases. Second, PClean’s inference algorithm uses a two-phase approach, based on processing records one-at-a-time to make informed guesses about how to clean them, then revisiting its judgment calls to fix mistakes. This yields robust, accurate inference results. Third, PClean provides a custom compiler that generates fast inference code. This allows PClean to run on million-record databases with greater speed than multiple competing approaches. “PClean users can give PClean hints about how to reason more effectively about their database, and tune its performance — unlike previous probabilistic programming approaches to data cleaning, which relied primarily on generic inference algorithms that were often too slow or inaccurate,” says Mansinghka. 
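
    For intuition, here is a schematic and deliberately simplified version of that two-phase idea in plain Python. The toy `propose_fix` and `score` functions are assumptions made for the illustration; they are not PClean’s inference algorithm or API. The point is only the shape of the computation: make an informed guess for each record in one pass, then revisit earlier judgment calls once more global context is available.

```python
# Schematic two-phase cleaning loop (illustrative only, not PClean's implementation).

def propose_fix(record, context):
    """Guess a cleaned version of one record given what has been seen so far."""
    # Toy rule: fill a missing state with the most common state seen so far.
    fixed = dict(record)
    if not fixed.get("state") and context["state_counts"]:
        fixed["state"] = max(context["state_counts"], key=context["state_counts"].get)
    return fixed

def score(record, context):
    """Higher is better; here we simply reward records with no missing fields."""
    return sum(1 for value in record.values() if value)

def clean(records, passes=2):
    context = {"state_counts": {}}
    cleaned = []
    # Phase 1: one pass, record at a time, making informed guesses.
    for rec in records:
        fix = propose_fix(rec, context)
        cleaned.append(fix)
        if fix.get("state"):
            context["state_counts"][fix["state"]] = context["state_counts"].get(fix["state"], 0) + 1
    # Phase 2: revisit earlier decisions now that global context is richer.
    for _ in range(passes):
        for i, rec in enumerate(cleaned):
            revised = propose_fix(records[i], context)
            if score(revised, context) > score(rec, context):
                cleaned[i] = revised
    return cleaned

print(clean([{"city": "Boston", "state": "MA"}, {"city": "Cambridge", "state": ""}]))
```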

    As with all probabilistic programs, the tool requires far fewer lines of code than alternative state-of-the-art options: PClean programs need only about 50 lines of code to outperform benchmarks in terms of accuracy and runtime. For comparison, a simple snake cellphone game takes twice as many lines of code to run, and Minecraft comes in at well over 1 million lines of code.

    In their paper, just presented at the 2021 Society for Artificial Intelligence and Statistics conference, the authors show PClean’s ability to scale to datasets containing millions of records by using PClean to detect errors and impute missing values in the 2.2 million-row Medicare Physician Compare National dataset. Running for just seven-and-a-half hours, PClean found more than 8,000 errors. The authors then verified by hand (via searches on hospital websites and doctor LinkedIn pages) that for more than 96 percent of them, PClean’s proposed fix was correct. 

    Since PClean is based on Bayesian probability, it can also give calibrated estimates of its uncertainty. “It can maintain multiple hypotheses — give you graded judgments, not just yes/no answers. This builds trust and helps users override PClean when necessary. For example, you can look at a judgment where PClean was uncertain, and tell it the right answer. It can then update the rest of its judgments in light of your feedback,” says Mansinghka. “We think there’s a lot of potential value in that kind of interactive process that interleaves human judgment with machine judgment. We see PClean as an early example of a new kind of AI system that can be told more of what people know, report when it is uncertain, and reason and interact with people in more useful, human-like ways.”

    David Pfau, a senior research scientist at DeepMind, noted in a tweet that PClean meets a business need: “When you consider that the vast majority of business data out there is not images of dogs, but entries in relational databases and spreadsheets, it’s a wonder that things like this don’t yet have the success that deep learning has.”

    Benefits, risks, and regulation

    PClean makes it cheaper and easier to join messy, inconsistent databases into clean records, without the massive investments in human and software systems that data-centric companies currently rely on. This has potential social benefits — but also risks, among them that PClean may make it cheaper and easier to invade people’s privacy, and potentially even to de-anonymize them, by joining incomplete information from multiple public sources.

    “We ultimately need much stronger data, AI, and privacy regulation, to mitigate these kinds of harms,” says Mansinghka. Lew adds, “As compared to machine-learning approaches to data cleaning, PClean might allow for finer-grained regulatory control. For example, PClean can tell us not only that it merged two records as referring to the same person, but also why it did so — and I can come to my own judgment about whether I agree. I can even tell PClean only to consider certain reasons for merging two entries.” Unfortunately, the researchers say, privacy concerns persist no matter how fairly a dataset is cleaned.

    Mansinghka and Lew are excited to help people pursue socially beneficial applications. They have been approached by people who want to use PClean to improve the quality of data for journalism and humanitarian applications, such as anticorruption monitoring and consolidating donor records submitted to state boards of elections. Agrawal says she hopes PClean will free up data scientists’ time, “to focus on the problems they care about instead of data cleaning. Early feedback and enthusiasm around PClean suggest that this might be the case, which we’re excited to hear.”

  •

    A comprehensive map of the SARS-CoV-2 genome

    In early 2020, a few months after the Covid-19 pandemic began, scientists were able to sequence the full genome of SARS-CoV-2, the virus that causes the Covid-19 infection. While many of its genes were already known at that point, the full complement of protein-coding genes was unresolved.

    Now, after performing an extensive comparative genomics study, MIT researchers have generated what they describe as the most accurate and complete gene annotation of the SARS-CoV-2 genome. In their study, which appears today in Nature Communications, they confirmed several protein-coding genes and found that a few others that had been suggested as genes do not code for any proteins.

    “We were able to use this powerful comparative genomics approach for evolutionary signatures to discover the true functional protein-coding content of this enormously important genome,” says Manolis Kellis, who is the senior author of the study and a professor of computer science in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) as well as a member of the Broad Institute of MIT and Harvard.

    The research team also analyzed nearly 2,000 mutations that have arisen in different SARS-CoV-2 isolates since it began infecting humans, allowing them to rate how important those mutations may be in changing the virus’ ability to evade the immune system or become more infectious.

    Comparative genomics

    The SARS-CoV-2 genome consists of nearly 30,000 RNA bases. Scientists have identified several regions known to encode protein-coding genes, based on their similarity to protein-coding genes found in related viruses. A few other regions were suspected to encode proteins, but they had not been definitively classified as protein-coding genes.

    To nail down which parts of the SARS-CoV-2 genome actually contain genes, the researchers performed a type of study known as comparative genomics, in which they compare the genomes of similar viruses. The SARS-CoV-2 virus belongs to a subgenus of viruses called Sarbecovirus, most of which infect bats. The researchers performed their analysis on SARS-CoV-2, SARS-CoV (which caused the 2003 SARS outbreak), and 42 strains of bat sarbecoviruses.

    Kellis has previously developed computational techniques for doing this type of analysis, which his team has also used to compare the human genome with genomes of other mammals. The techniques are based on analyzing whether certain DNA or RNA bases are conserved between species, and comparing their patterns of evolution over time.
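
    As a highly simplified picture of what an evolutionary-signature analysis looks at, the toy Python sketch below scores how conserved each column of a small, made-up alignment is across related genomes. The sequences are invented, not real sarbecovirus data, and the actual pipeline uses far more sophisticated tests, including substitution patterns that distinguish protein-coding from non-coding regions.

```python
from collections import Counter

# Toy alignment of the "same" genomic region across four related viruses.
# These sequences are made up purely for illustration.
alignment = [
    "ATGGCTTTACGT",
    "ATGGCTTTACGT",
    "ATGACTTTACGA",
    "ATGGCTTAACGT",
]

def column_conservation(seqs):
    """Fraction of sequences sharing the most common base at each position."""
    scores = []
    for column in zip(*seqs):
        most_common_count = Counter(column).most_common(1)[0][1]
        scores.append(most_common_count / len(seqs))
    return scores

scores = column_conservation(alignment)
print([round(s, 2) for s in scores])
# Long runs of highly conserved positions (and, for protein-coding regions,
# characteristic patterns of change) are the kind of signal used to decide
# whether a region is under selection.
print("mean conservation:", sum(scores) / len(scores))
```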

    Using these techniques, the researchers confirmed six protein-coding genes in the SARS-CoV-2 genome in addition to the five that are well established in all coronaviruses. They also determined that the region encoding a gene called ORF3a contains an additional gene, which they named ORF3c. The gene has RNA bases that overlap with ORF3a but occur in a different reading frame. This gene-within-a-gene is rare in large genomes, but common in many viruses, whose genomes are under selective pressure to stay compact. The role of this new gene, as well as that of several other SARS-CoV-2 genes, is not yet known.

    The researchers also showed that five other regions that had been proposed as possible genes do not encode functional proteins, and they also ruled out the possibility that there are any more conserved protein-coding genes yet to be discovered.

    “We analyzed the entire genome and are very confident that there are no other conserved protein-coding genes,” says Irwin Jungreis, lead author of the study and a CSAIL research scientist. “Experimental studies are needed to figure out the functions of the uncharacterized genes, and by determining which ones are real, we allow other researchers to focus their attention on those genes rather than spend their time on something that doesn’t even get translated into protein.”

    The researchers also recognized that many previous papers used not only incorrect gene sets, but sometimes also conflicting gene names. To remedy the situation, they brought together the SARS-CoV-2 community and presented a set of recommendations for naming SARS-CoV-2 genes, in a separate paper published a few weeks ago in Virology.

    Fast evolution

    In the new study, the researchers also analyzed more than 1,800 mutations that have arisen in SARS-CoV-2 since it was first identified. For each gene, they compared how rapidly that particular gene has evolved in the past with how much it has evolved since the current pandemic began.
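
    In spirit, that comparison reduces to contrasting two rates per gene, as in the sketch below. The per-gene numbers are invented purely to show the kind of ratio involved; the study itself relies on rigorous statistical tests in a phylogenetic context, not this naive arithmetic.

```python
# Invented per-gene numbers purely to illustrate the comparison being made.
genes = {
    # gene: (historical substitution rate across sarbecoviruses,
    #        substitution rate observed during the pandemic), arbitrary units
    "ORF1ab": (0.0010, 0.0011),
    "S":      (0.0020, 0.0045),
    "N":      (0.0015, 0.0060),
}

for gene, (historical_rate, pandemic_rate) in genes.items():
    acceleration = pandemic_rate / historical_rate
    flag = "accelerated" if acceleration > 2 else "consistent with history"
    print(f"{gene}: x{acceleration:.1f} ({flag})")
```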

    They found that in most cases, genes that evolved rapidly for long periods of time before the current pandemic have continued to do so, and those that tended to evolve slowly have maintained that trend. However, the researchers also identified exceptions to these patterns, which may shed light on how the virus has evolved as it has adapted to its new human host, Kellis says.

    In one example, the researchers identified a region of the nucleocapsid protein, which surrounds the viral genetic material, that had many more mutations than expected from its historical evolution patterns. This protein region is also classified as a target of human B cells. Therefore, mutations in that region may help the virus evade the human immune system, Kellis says.

    “The most accelerated region in the entire genome of SARS-CoV-2 is sitting smack in the middle of this nucleocapsid protein,” he says. “We speculate that those variants that don’t mutate that region get recognized by the human immune system and eliminated, whereas those variants that randomly accumulate mutations in that region are in fact better able to evade the human immune system and remain in circulation.”

    The researchers also analyzed mutations that have arisen in variants of concern, such as the B.1.1.7 strain from England, the P.1 strain from Brazil, and the B.1.351 strain from South Africa. Many of the mutations that make those variants more dangerous are found in the spike protein, and help the virus spread faster and avoid the immune system. However, each of those variants carries other mutations as well.

    “Each of those variants has more than 20 other mutations, and it’s important to know which of those are likely to be doing something and which aren’t,” Jungreis says. “So, we used our comparative genomics evidence to get a first-pass guess at which of these are likely to be important based on which ones were in conserved positions.”

    This data could help other scientists focus their attention on the mutations that appear most likely to have significant effects on the virus’ infectivity, the researchers say. They have made the annotated gene set and their mutation classifications available in the University of California at Santa Cruz Genome Browser for other researchers who wish to use it.

    “We can now go and actually study the evolutionary context of these variants and understand how the current pandemic fits in that larger history,” Kellis says. “For strains that have many mutations, we can see which of these mutations are likely to be host-specific adaptations, and which mutations are perhaps nothing to write home about.”

    The research was funded by the National Human Genome Research Institute and the National Institutes of Health. Rachel Sealfon, a research scientist at the Flatiron Institute Center for Computational Biology, is also an author of the paper.

  •

    A robot that can help you untangle your hair

    With rapidly growing demands on health care systems, nurses typically spend 18 to 40 percent of their time performing direct patient care tasks, oftentimes for many patients and with little time to spare. Personal care robots that brush hair could provide substantial help and relief. 

    This may seem like a truly radical form of “self-care,” but crafty robots for things like shaving, hair-washing, and makeup are not new. In 2011, the tech giant Panasonic developed a robot that could wash, massage, and even blow-dry hair, explicitly designed to help support “safe and comfortable living of the elderly and people with limited mobility, while reducing the burden of caregivers.” 

    Hair-combing bots, however, proved to be less explored, leading scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Soft Math Lab at Harvard University to develop a robotic arm setup with a sensorized soft brush. The robot is equipped with a camera that helps it “see” and assess curliness, so it can plan a delicate and time-efficient brush-out.  

    Video: Robotic Hair Brushing

    The team’s control strategy is adaptive to the degree of tangling in the fiber bunch, and they put “RoboWig” to the test by brushing wigs ranging from straight to very curly hair.

    While the hardware setup of RoboWig looks futuristic and shiny, the underlying model of the hair fibers is what makes it tick. CSAIL postdoc Josie Hughes and her team opted to represent the entangled hair as sets of entwined double helices —  think classic DNA strands. This level of granularity provided key insights into mathematical models and control systems for manipulating bundles of soft fibers, with a wide range of applications in the textile industry, animal care, and other fibrous systems. 

    “By developing a model of tangled fibers, we understand from a model-based perspective how hairs must be detangled: starting from the bottom and slowly working the way up to prevent ‘jamming’ of the fibers,” says Hughes, the lead author on a paper about RoboWig. “This is something everyone who has brushed hair has learned from experience, but is now something we can demonstrate through a model, and use to inform a robot.”

    The task at hand is a tangled one. Every head of hair is different, and the intricate interplay between hairs when combing can easily lead to knots. What’s more, if the wrong brushing strategy is used, the process can be very painful and damaging to the hair.

    Previous research in the brushing domain has mostly been on the mechanical, dynamic, and visual properties of hair, as opposed to RoboWig’s refined focus on tangling and combing behavior. 

    To brush and manipulate the hair, the researchers added a soft-bristled sensorized brush to the robot arm, to allow forces during brushing to be measured. They combined this setup with something called a “closed-loop control system,” which takes feedback from an output and automatically performs an action without human intervention. This created “force feedback” from the brush — a control method that lets the user feel what the device is doing — so the length of the stroke could be optimized to take into account both the potential “pain” and the time taken to brush.

    Initial tests preserved the human head — for now — and instead were done on a number of wigs of various hair styles and types. The model provided insight into the behaviors of the combing, related to the number of entanglements, and how those could be efficiently and effectively brushed out by choosing appropriate brushing lengths. For example, for curlier hair, the pain cost would dominate, so shorter brush lengths were optimal. 
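
    The trade-off the controller navigates can be sketched as a simple cost model: longer strokes finish the job in fewer passes but pull harder on tangles. The Python below is a hypothetical illustration with made-up coefficients, not the paper’s actual cost function; it only shows why a higher “pain” weight, as for curlier hair, pushes the optimum toward shorter strokes.

```python
# Hypothetical stroke-length optimization; all coefficients are illustrative.

def total_cost(stroke_len_cm, pain_weight, time_weight=1.0, hair_len_cm=30.0):
    # Toy model: brushing force (a proxy for pain) rises steeply with stroke
    # length, while the number of strokes needed shrinks as strokes get longer.
    pain_per_stroke = stroke_len_cm ** 2
    strokes_needed = hair_len_cm / stroke_len_cm
    return pain_weight * pain_per_stroke * strokes_needed + time_weight * strokes_needed

def best_stroke(pain_weight):
    candidates = [length / 2 for length in range(2, 61)]  # 1 cm to 30 cm
    return min(candidates, key=lambda length: total_cost(length, pain_weight))

print("straight hair (low pain weight):", best_stroke(pain_weight=0.1), "cm strokes")
print("curly hair (high pain weight):  ", best_stroke(pain_weight=2.0), "cm strokes")
```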

    The team wants to eventually perform more realistic experiments on humans, to better understand the performance of the robot with respect to their experience of pain — a metric that is obviously highly subjective, as one person’s “two” could be another’s “eight.”

    “To allow robots to extend their task-solving abilities to more complex tasks such as hair brushing, we need not only novel safe hardware, but also an understanding of the complex behavior of the soft hair and tangled fibers,” says Hughes. “In addition to hair brushing, the insights provided by our approach could be applied to brushing of fibers for textiles, or animal fibers.” 

    Hughes wrote the paper alongside Harvard University School of Engineering and Applied Sciences PhD students Thomas Bolton Plumb-Reyes and Nicholas Charles; Professor L. Mahadevan of Harvard’s School of Engineering and Applied Sciences, Department of Physics, and Organismic and Evolutionary Biology; and MIT professor and CSAIL Director Daniela Rus. They presented the paper virtually at the IEEE Conference on Soft Robotics (RoboSoft) earlier this month. 

    The project was supported, in part, by the National Science Foundation’s Emerging Frontiers in Research and Innovation program between MIT CSAIL and the Soft Math Lab at Harvard.

  •

    Media Advisory — MIT researchers: AI policy needed to manage impacts, build more equitable systems

    On Thursday, May 6 and Friday, May 7, the AI Policy Forum — a global effort convened by researchers from MIT — will present their initial policy recommendations aimed at managing the effects of artificial intelligence and building AI systems that better reflect society’s values. Recognizing that there is unlikely to be any singular national AI policy, but rather public policies for the distinct ways in which we encounter AI in our lives, forum leaders will preview their preliminary findings and policy recommendations in three key areas: finance, mobility, and health care.

    The inaugural AI Policy Forum Symposium, a virtual event hosted by the MIT Schwarzman College of Computing, will bring together AI and public policy leaders, government officials from around the world, regulators, and advocates to investigate some of the pressing questions posed by AI in our economies and societies. The symposium’s program will feature remarks from public policymakers helping shape governments’ approaches to AI; state and federal regulators on the front lines of these issues; designers of self-driving cars and cancer-diagnosing algorithms; faculty examining the systems used in emerging finance companies and associated concerns; and researchers pushing the boundaries of AI.

    WHAT: AI Policy Forum (AIPF) Symposium

    WHO: MIT speakers:

    Martin A. Schmidt, MIT provost
    Daniel Huttenlocher, AIPF chair and dean of the MIT Schwarzman College of Computing
    Regina Barzilay, MIT School of Engineering Distinguished Professor of AI and Health; AI faculty lead of the Jameel Clinic at MIT
    Daniel Weitzner, founding director of the MIT Internet Policy Research Initiative; former U.S. deputy chief technology officer in the Office of Science and Technology Policy
    Luis Videgaray, senior lecturer in the MIT Sloan School of Management; former foreign minister and minister of finance of Mexico
    Aleksander Madry, professor of computer science in the MIT Department of Electrical Engineering and Computer Science
    R. David Edelman, director of public policy for the MIT Internet Policy Research Initiative; former special assistant to U.S. President Barack Obama for economic and technology policy
    Julie Shah, MIT associate professor of aeronautics and astronautics; associate dean of social and ethical responsibilities of computing in the MIT Schwarzman College of Computing
    Andrew Lo, professor of finance in the MIT Sloan School of Management

    Guest speakers and participants: 

    Julie Bishop, chancellor of the Australian National University; former minister of foreign affairs and member of the Parliament of Australia
    Andrew Wyckoff, director for science, technology and innovation at the Organization for Economic Cooperation and Development (OECD)
    Martha Minow, 300th Anniversary University Professor at Harvard Law School; former dean of the Harvard Law School
    Alejandro Poiré, dean of the School of Public Policy at Monterrey Tec; former secretary of the interior of Mexico
    Ngaire Woods, dean of the Blavatnik School of Government at the University of Oxford
    Darran Anderson, director of strategy and innovation at the Texas Department of Transportation
    Nat Beuse, vice president of security at Aurora; former head safety regulator for autonomous vehicles at the U.S. Department of Transportation
    Laura Major, chief technology officer of Motional
    Manuela Veloso, head of AI research at JP Morgan Chase
    Stephanie Lee, managing director of BlackRock Systematic Active Equities Emerging Markets

    WHEN: Thursday and Friday, May 6 and 7

    Media RSVP: Reporters interested in attending can register here. More information on the AI Policy Forum can be found here.

  •

    Nano flashlight enables new applications of light

    In work that could someday turn cell phones into sensors capable of detecting viruses and other minuscule objects, MIT researchers have built a powerful nanoscale flashlight on a chip.

    Their approach to designing the tiny light beam on a chip could also be used to create a variety of other nano flashlights with different beam characteristics for different applications. Think of a wide spotlight versus a beam of light focused on a single point.

    For many decades, scientists have used light to identify a material by observing how that light interacts with the material. They do so by essentially shining a beam of light on the material, then analyzing that light after it passes through the material. Because all materials interact with light differently, an analysis of the light that passes through the material provides a kind of “fingerprint” for that material. Imagine doing this for several colors — i.e., several wavelengths of light — and capturing the interaction of light with the material for each color. That would lead to a fingerprint that is even more detailed.
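
    A toy version of that “fingerprint” idea, with made-up transmission values at a handful of wavelengths and a simple nearest-neighbor match (real spectroscopy is, of course, far more involved):

```python
# Made-up transmission values (fraction of light passing through the sample)
# at a handful of wavelengths, used only to illustrate fingerprint matching.
reference_spectra = {
    "water":   [0.95, 0.90, 0.40, 0.30, 0.85],
    "ethanol": [0.92, 0.70, 0.65, 0.50, 0.80],
    "glucose": [0.88, 0.85, 0.75, 0.20, 0.60],
}

measured = [0.93, 0.72, 0.63, 0.52, 0.78]  # light transmitted by an unknown sample

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

best_match = min(reference_spectra,
                 key=lambda name: distance(reference_spectra[name], measured))
print("closest fingerprint:", best_match)  # "ethanol" for these toy numbers
```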

    Most instruments for doing this, known as spectrometers, are relatively large. Making them much smaller would have a number of advantages. For example, they could be portable and have additional applications (imagine a futuristic cell phone loaded with a self-contained sensor for a specific gas). However, while researchers have made great strides toward miniaturizing the sensor for detecting and analyzing the light that has passed through a given material, a miniaturized and appropriately shaped light beam—or flashlight—remains a challenge. Today that light beam is most often provided by macroscale equipment like a laser system that is not built into the chip itself as the sensors are.

    Complete sensor

    Enter the MIT work. In two recent papers in Nature Scientific Reports, researchers describe not only their approach for designing on-chip flashlights with a variety of beam characteristics, but also report building and successfully testing a prototype. Importantly, they created the device using existing fabrication technologies familiar to the microelectronics industry, so they are confident that the approach could be deployed at mass scale with the lower cost that implies.

    Overall, this could enable industry to create a complete sensor on a chip with both light source and detector. As a result, the work represents a significant advance in the use of silicon photonics for the manipulation of light waves on microchips for sensor applications.

    “Silicon photonics has so much potential to improve and miniaturize the existing bench-scale biosensing schemes. We just need smarter design strategies to tap its full potential. This work shows one such approach,” says PhD candidate Robin Singh SM ’18, who is lead author of both papers.

    “This work is significant, and represents a new paradigm of photonic device design, enabling enhancements in the manipulation of optical beams,” says Dawn Tan, an associate professor at the Singapore University of Technology and Design who was not involved in the research.

    The senior coauthors on the first paper are Anuradha “Anu” Murthy Agarwal, a principal research scientist in MIT’s Materials Research Laboratory, Microphotonics Center, and Initiative for Knowledge and Innovation in Manufacturing; and Brian W. Anthony, a principal research scientist in MIT’s Department of Mechanical Engineering. Singh’s coauthors on the second paper are Agarwal; Anthony; Yuqi Nie, now at Princeton University; and Mingye Gao, a graduate student in MIT’s Department of Electrical Engineering and Computer Science.

    How they did it

    Singh and colleagues created their overall design using multiple computer modeling tools. These included conventional approaches based on the physics involved in the propagation and manipulation of light, and more cutting-edge machine-learning techniques in which the computer is taught to predict potential solutions using huge amounts of data. “If we show the computer many examples of nano flashlights, it can learn how to make better flashlights,” says Anthony. Ultimately, “we can then tell the computer the pattern of light that we want, and it will tell us what the design of the flashlight needs to be.”
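
    The article does not spell out the team’s models, so the sketch below is purely illustrative: it treats a “design” as two nanostructure dimensions, learns a cheap surrogate of a toy beam-quality simulator from sampled data, and then searches the surrogate for a promising design. The simulator, features, and numbers are all assumptions; this is the general shape of data-driven inverse design, not the team’s method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "simulator": beam quality as an invented function of two design knobs,
# standing in for a slow physics solver. Entirely illustrative.
def simulate_beam_quality(width, spacing):
    return -(width - 0.35) ** 2 - (spacing - 0.60) ** 2 + 1.0

# 1) Collect training examples from the expensive simulator.
designs = rng.uniform(0.1, 1.0, size=(200, 2))
quality = np.array([simulate_beam_quality(w, s) for w, s in designs])

# 2) Fit a cheap surrogate (quadratic features + least squares).
def features(d):
    w, s = d[:, 0], d[:, 1]
    return np.column_stack([np.ones_like(w), w, s, w * w, s * s, w * s])

coef, *_ = np.linalg.lstsq(features(designs), quality, rcond=None)

# 3) Search the surrogate for a promising design instead of re-running the solver.
candidates = rng.uniform(0.1, 1.0, size=(5000, 2))
best = candidates[np.argmax(features(candidates) @ coef)]
print("surrogate's suggested (width, spacing):", np.round(best, 2))
```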

    All of these modeling tools have advantages and disadvantages; together they resulted in a final, optimal design that can be adapted to create flashlights with different kinds of light beams.

    The researchers went on to use that design to create a specific flashlight with a collimated beam, or one in which the rays of light are perfectly parallel to each other. Collimated beams are key to some types of sensors. The overall flashlight that the researchers made involved some 500 rectangular nanoscale structures of different dimensions that the team’s modeling predicted would enable a collimated beam. Nanostructures of different dimensions would lead to different kinds of beams that in turn are key to other applications.

    The tiny flashlight with a collimated beam worked. Not only that, it provided a beam that was five times more powerful than is possible with conventional structures. That’s partly because “being able to control the light better means that less is scattered and lost,” says Agarwal.

    Singh describes the excitement he felt upon creating that first flashlight. “It was great to see through a microscope what I had designed on a computer. Then we tested it, and it worked!”

    This research was supported, in part, by the MIT Skoltech Initiative.

    Additional MIT facilities and departments that made this work possible are the Department of Materials Science and Engineering, the Materials Research Laboratory, the Institute for Medical Engineering and Science, and MIT.nano.

  •

    Climate solutions depend on technology, policy, and businesses working together

    “The challenge for humanity now is how to decarbonize the global economy by 2050. To do that, we need a supercharged decade of energy innovation,” said Ernest J. Moniz, the Cecil and Ida Green Professor of Physics and Engineering Systems Emeritus, founding director of the MIT Energy Initiative, and a former U.S. secretary of energy, as he opened the MIT Forefront virtual event on April 21. “But we also need practical visionaries, in every economic sector, to develop new business models that allow them to remain profitable while achieving zero-carbon emissions.”

    The event, “Addressing Climate and Sustainability through Technology, Policy, and Business Models,” was the third in the MIT Forefront series, which invites top minds from the worlds of science, industry, and policy to propose bold new answers to urgent global problems. Moniz moderated the event, and more than 12,000 people tuned in online.

    MIT and other universities play an important role in preparing the world’s best minds to take on big climate challenges and develop the technology needed to advance sustainability efforts, a point illustrated in the main session with a video about Via Separations, a company supported by MIT’s The Engine. Co-founded by Shreya Dave ’09, SM ’12, PhD ’16, Via Separations customizes filtration technology to reduce waste and save money across multiple industries. “By next year, we are going to be eliminating carbon dioxide emissions from our customers’ facilities,” Dave said.

    Via Separations is one of many innovative companies born of MIT’s energy and climate initiatives — the work of which, as the panel went on to discuss, is critical to achieving net-zero emissions and deploying successful environmental sustainability efforts. As Moniz put it, the company embodies “the spirit of science and technology in action for the good of humankind” and exemplifies how universities and businesses, as well as technology and policy, must work together to make the best environmental choices.

    How businesses confront climate change

    Innovation in sustainable practices can run into substantial challenges when applied to business models, particularly on the policy side, the panelists noted. But they shared some key ways that their respective organizations have employed current technologies, along with the challenges they face in reaching their sustainability goals. Despite the businesses’ different products and services, a common thread emerged: each needs new technologies to achieve its sustainability goals.

    Although 2050 is the long-term goal for net-zero emissions put forth by the Paris Agreement, the businesses represented by the panelists are thinking about the shorter term. “IBM has committed to net-zero emissions by 2030 ― without carbon offsets,” said Arvind Krishna, chairman and chief executive officer of IBM. “We believe that some carbon taxes would be a good policy tool. But policy alone is insufficient. We need advanced technological tools to reach our goal.” 

    Jeff Wilke SM ’93, who retired as Amazon’s chief executive officer of Worldwide Consumer in February 2021, outlined a number of initiatives that the online retail giant is undertaking to curb emissions. Transportation is one of their biggest hurdles to reaching zero emissions, leading to a significant investment in Class 8 electric trucks. “Another objective is to remove the need for plane shipments by getting inventory closer to urban areas, and that has been happening steadily over the years,” he said.

    Jim Fitterling, chair and chief executive officer of Dow, explained that Dow has reduced its carbon emissions by 15 percent in the past decade and is poised to reduce them further in the next. Future goals include working toward electrifying ethylene production. “If we can electrify that, it will allow us to make major strides toward carbon-dioxide reduction,” he said. “But we need more reliable and stable power to get to that point.”

    Collaboration is key to advancing climate solutions

    Maria T. Zuber, MIT’s vice president for research, who was recently appointed by U.S. President Joe Biden as co-chair of the President’s Council of Advisors on Science and Technology, stressed that MIT innovators and industry leaders must work together to implement climate solutions. 

    “Innovation is a team sport,” said Zuber, who is also the E. A. Griswold Professor of Geophysics. “Even if MIT researchers make a huge discovery, deploying it requires cooperation on a policy level and often industry support. Policymakers need to solve problems and seize opportunities in ways that are popular. It’s not just solving technical problems ― there is a human behavior component.”

    But businesses, Zuber said, can play a huge role in advancing innovation. “If a company becomes convinced of the potential of a new technology, they can be the best advocates with policymakers,” she said.

    The question of “sustainability vs. shareholders” 

    During the Q&A session, an audience member pointed out that environmentalists are often distrustful of companies’ sustainability policies when their focus is on shareholders and profit.

    “Companies have to show that they’re part of the solution,” Fitterling said. “Investors will be afraid of high costs up front, so, say, completely electrifying a plant overnight is off the table. You have to make a plan to get there, and then incentivize that plan through policy. Carbon taxes are one way, but miss the market leverage.”

    Krishna also pushed back on the idea that companies have to choose between sustainability and profit. “It’s a false dichotomy,” he said. “If companies were only interested in short-term profits, they wouldn’t last for long.”

    “A belief I’ve heard from some environmental groups is that ‘anything a company does is greenwashing,’ and that they’ll abandon those efforts if the economy tanks,” Zuber said, referring to a practice wherein organizations spend more time marketing themselves as environmentally sustainable than on maximizing their sustainability efforts. “The economy tanked in 2020, though, and we saw companies double down on their sustainability plans. They see that it’s good for business.”

    The role of universities and businesses in sustainability innovation

    “Amazon and all corporations are adapting to the effects of climate change, like extreme weather patterns, and will need to adapt more — but I’m not ready to throw in the towel for decarbonization,” Wilke said. “Either way, companies will have to invest in decarbonization. There is no way we are going to make the progress we have to make without it.” 

    Another component is the role of artificial intelligence (AI) and quantum computing. Krishna noted multiple ways that AI and quantum computing will play a role at IBM, including finding the most environmentally sustainable and cost-efficient ways to advance carbon separation from exhaust gases and extend lithium battery life in electric cars.

    AI, quantum computing, and alternative energy sources such as fusion, all of which have the potential to help achieve net-zero emissions, are key areas that students, researchers, and faculty members are pursuing at MIT.

    “Universities like MIT need to go as fast as we can as far as we can with the science and technology we have now,” Zuber said. “In parallel, we need to invest in and deploy a suite of new tools in science and technology breakthroughs that we need to reach the 2050 goal of decarbonizing. Finally, we need to continue to train the next generation of students and researchers who are solving these issues and deploy them to these companies to figure it out.”

  •

    With a zap of light, system switches objects’ colors and patterns

    When was the last time you repainted your car? Redesigned your coffee mug collection? Gave your shoes a colorful facelift?

    You likely answered: never, never, and never. You might consider these arduous tasks not worth the effort. But a new color-shifting “programmable matter” system could change that with a zap of light.

    MIT researchers have developed a way to rapidly update imagery on object surfaces. The system, dubbed “ChromoUpdate,” pairs an ultraviolet (UV) light projector with items coated in light-activated dye. The projected light alters the reflective properties of the dye, creating colorful new images in just a few minutes. The advance could accelerate product development, enabling product designers to churn through prototypes without getting bogged down with painting or printing.

    An ultraviolet (UV) light projector is used on a cell-phone case coated in light-activated dye. The projected light alters the reflective properties of the dye, creating images in just a few minutes.

    ChromoUpdate “takes advantage of fast programming cycles — things that wouldn’t have been possible before,” says Michael Wessely, the study’s lead author and a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory.

    The research will be presented at the ACM Conference on Human Factors in Computing Systems this month. Wessely’s co-authors include his advisor, Professor Stefanie Mueller, as well as postdoc Yuhua Jin, recent graduate Cattalyya Nuengsigkapian ’19, MNG ’20, visiting master’s student Aleksei Kashapov, postdoc Isabel Qamar, and Professor Dzmitry Tsetserukou of the Skolkovo Institute of Science and Technology.

    ChromoUpdate builds on the researchers’ previous programmable matter system, called PhotoChromeleon. That method was “the first to show that we can have high-resolution, multicolor textures that we can just reprogram over and over again,” says Wessely. PhotoChromeleon used a lacquer-like ink comprising cyan, magenta, and yellow dyes. The user covered an object with a layer of the ink, which could then be reprogrammed using light. First, UV light from an LED was shone on the ink, fully saturating the dyes. Next, the dyes were selectively desaturated with a visible light projector, bringing each pixel to its desired color and leaving behind the final image. PhotoChromeleon was innovative, but it was sluggish. It took about 20 minutes to update an image. “We can accelerate the process,” says Wessely.

    They achieved that with ChromoUpdate, by fine-tuning the UV saturation process. Rather than using an LED, which uniformly blasts the entire surface, ChromoUpdate uses a UV projector that can vary light levels across the surface. So, the operator has pixel-level control over saturation levels. “We can saturate the material locally in the exact pattern we want,” says Wessely. That saves time — someone designing a car’s exterior might simply want to add racing stripes to an otherwise completed design. ChromoUpdate lets them do just that, without erasing and reprojecting the entire exterior.
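
    The gain from pixel-level UV control can be pictured as a mask over the texture: only pixels whose target color changed need to go through the saturate-and-desaturate cycle again. The NumPy sketch below illustrates that idea with invented timing numbers; it is not the actual ChromoUpdate pipeline.

```python
import numpy as np

H, W = 480, 640
old_texture = np.zeros((H, W, 3), dtype=np.uint8)   # previous design
new_texture = old_texture.copy()
new_texture[200:240, :, 0] = 255                     # add a red racing stripe

# Pixels that actually need re-saturation and re-projection.
changed = np.any(new_texture != old_texture, axis=-1)

SECONDS_PER_MEGAPIXEL = 60.0   # invented cost of one saturate/desaturate cycle
full_update = H * W / 1e6 * SECONDS_PER_MEGAPIXEL
partial_update = changed.sum() / 1e6 * SECONDS_PER_MEGAPIXEL

print(f"reprogram everything:      ~{full_update:.0f} s")
print(f"reprogram only the stripe: ~{partial_update:.0f} s")
```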

    This selective saturation procedure allows designers to create a black-and-white preview of a design in seconds, or a full-color prototype in minutes. That means they could try out dozens of designs in a single work session, a previously unattainable feat. “You can actually have a physical prototype to see if your design really works,” says Wessely. “You can see how it looks when sunlight shines on it or when shadows are cast. It’s not enough just to do this on a computer.”

    That speed also means ChromoUpdate could be used for providing real-time notifications without relying on screens. “One example is your coffee mug,” says Wessely. “You put your mug in our projector system and program it to show your daily schedule. And it updates itself directly when a new meeting comes in for that day, or it shows you the weather forecast.”

    Wessely hopes to keep improving the technology. At present, the light-activated ink is specialized for smooth, rigid surfaces like mugs, phone cases, or cars. But the researchers are working toward flexible, programmable textiles. “We’re looking at methods to dye fabrics and potentially use light-emitting fibers,” says Wessely. “So, we could have clothing — t-shirts and shoes and all that stuff — that can reprogram itself.”

    The researchers have partnered with a group of textile makers in Paris to see how ChromoUpdate can be incorporated into the design process.

    This research was funded, in part, by Ford.