More stories

  • A unique collaboration with US Special Operations Command

    When General Richard D. Clarke, commander of the U.S. Special Operations Command (USSOCOM), visited MIT in fall 2019, he had artificial intelligence on the mind. As the commander of a military organization tasked with advancing U.S. policy objectives as well as predicting and mitigating future security threats, he knew that the acceleration and proliferation of artificial intelligence technologies worldwide would change the landscape on which USSOCOM would have to act.

    Clarke met with Anantha P. Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, and after touring multiple labs both agreed that MIT — as a hub for AI innovation — would be an ideal institution to help USSOCOM rise to the challenge. Thus, a new collaboration between the MIT School of Engineering, MIT Professional Education, and USSOCOM was born: a six-week AI and machine learning crash course designed for special operations personnel.

    “There has been tremendous growth in the fields of computing and artificial intelligence over the past few years,” says Chandrakasan. “It was an honor to craft this course in collaboration with U.S. Special Operations Command and MIT Professional Education, and to convene experts from across the spectrum of engineering and science disciplines, to present the full power of artificial intelligence to course participants.”

    In speaking to course participants, Clarke underscored his view that the nature of threats, and how U.S. Special Operations defends against them, will be fundamentally affected by AI. “This includes, perhaps most profoundly, potential game-changing impacts to how we can see the environment, make decisions, execute mission command, and operate in information-space and cyberspace.”

    Due to the ubiquitous applications of AI and machine learning, the course was taught by MIT faculty as well as military and industry representatives from across many disciplines, including electrical and mechanical engineering, computer science, brain and cognitive science, aeronautics and astronautics, and economics.

    “We assembled a lineup of people who we believe are some of the top leaders in the field,” says Sertac Karaman, faculty co-organizer of the USSOCOM course and associate professor in the Department of Aeronautics and Astronautics at MIT. “All of them are able to come in and contribute a unique perspective. This was just meant to be an introduction … but there was still a lot to cover.”

    The potential applications of AI, spanning civilian and military uses, are diverse, and include advances in areas like restorative and regenerative medical care, cyber resiliency, natural language processing, computer vision, and autonomous robotics.

    A fireside chat with MIT President L. Rafael Reif and Eric Schmidt, co-founder of Schmidt Futures and former chair and CEO of Google, who is also an MIT innovation fellow, painted a particularly vivid picture of the way that AI will inform future conflicts.

    “It’s quite obvious that the cyber wars of the future will be largely AI-driven,” Schmidt told course participants. “In other words, they’ll be very vicious and they’ll be over in about 1 millisecond.”

    However, the capabilities of AI represented only one aspect of the course. The faculty also emphasized the ethical, social, and logistical issues inherent in the implementation of AI.

    “People don’t know, actually, [that] some existing technology is quite fragile. It can make mistakes,” says Karaman. “And in the Department of Defense domain, that could be extremely damaging to their mission.”

    AI is vulnerable to intentional tampering and attacks, as well as to mistakes caused by programming and data oversights. For instance, images can be intentionally distorted in ways that are imperceptible to humans but will mislead AI. In another example, a programmer could “train” AI to navigate traffic under ideal conditions, only to have the program malfunction in an area where traffic signs have been vandalized.
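
    The kind of imperceptible image distortion described above can be made concrete with a short sketch. Below is a minimal, illustrative implementation of the fast gradient sign method (FGSM), one standard way of crafting such perturbations; the model, inputs, and epsilon value are hypothetical placeholders, not anything used in the course.

    ```python
    import torch
    import torch.nn.functional as F

    def fgsm_perturb(model, image, label, epsilon=0.01):
        """Fast gradient sign method: nudge each pixel slightly in the
        direction that most increases the classifier's loss."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        # A +/- epsilon change per pixel is imperceptible to a human,
        # yet can flip the model's prediction entirely.
        adversarial = image + epsilon * image.grad.sign()
        return adversarial.clamp(0.0, 1.0).detach()
    ```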

    Asu Ozdaglar, the MathWorks Professor of Electrical Engineering and Computer Science, head of the Department of Electrical Engineering and Computer Science, and deputy dean of academics in the MIT Schwarzman College of Computing, told course participants that researchers must find ways to incorporate context and semantic information into AI models prior to “training,” so that they “don’t run into these issues which are very counterintuitive from our perspective … as humans.”

    In addition to providing an orientation to this concept of “robustness” (how prone a technology is, or is not, to error), the course included some best-practice guidance for wielding AI in ways that are ethical, responsible, and strive to limit and eliminate bias.

    Julie Shah, faculty co-organizer of the USSOCOM course, associate dean of social and ethical responsibilities of computing, and associate professor in the Department of Aeronautics and Astronautics at MIT, lectured on this topic and emphasized the importance of considering the future ramifications of AI before and during the development of both the use plan and the technology itself.

    “We talk about how difficult [it is to predict] the unintended uses and consequences,” she told course participants. “But much like we put all of this engineering work into understanding the machine learning models and their development, we need to build new habits of mind and action that involve a range of disciplines and stakeholders, to envision those futures in advance.”

    In addition to moral and safety issues, the logistics of advancing AI in the military are complex and involve a lot of moving parts; the AI technology itself is only one part of this picture. For instance, the actualization of a fleet of military vehicles operated by a handful of personnel would require novel strategic research, partnerships with manufacturers to build new kinds of vehicles, and additional personnel training. Further, AI technology is often developed in the private or academic sectors, and the military doesn’t automatically have access to those innovations.

    Clarke told course participants that USSOCOM had been a “pathfinder within the Department of Defense in the early application of some of this data-driven technology” and that connections with organizations like MIT “are indispensable elements in our preparation to maintain advantage and to ensure that our special operations forces are ready for the future and a new era.”

    Schmidt agreed with Clarke, adding that a functional hiring pipeline from academia and the tech industry into the military, as well as the highest and best utilization of available technology and personnel, is essential to maintaining U.S. global competitiveness.

    The USSOCOM course was part of the ongoing expansion of AI research and education at MIT, which has accelerated over the last five years. Computer science courses at MIT are typically oversubscribed and attract students from many different disciplines.

    In addition to the USSOCOM course, AI efforts at MIT span many areas, including:

    The MIT Schwarzman College of Computing, which seeks to advance computing, diversify AI applications, and address social and ethical aspects of AI.
    The MIT-IBM Watson AI Lab, which focuses on AI applications in health, climate, and cybersecurity.
    The MIT Jameel Clinic for Machine Learning in Health, which investigates applications of AI to health care, including early disease diagnosis.
    The MIT-Takeda Program, which seeks to apply AI capabilities to drug development and other human health challenges.
    The MIT Quest for Intelligence, which applies human intelligence research to the development of next-generation AI technologies.

    “More than a third of MIT’s faculty are working on AI-related research,” Chandrakasan told course participants.

    MIT faculty instructors, USSOCOM instructors, and special guests for the course included:

    Daron Acemoglu, MIT Institute Professor;
    Regina Barzilay, School of Engineering Distinguished Professor for AI and Health at MIT and AI faculty lead at Jameel Clinic;
    Ash Carter, director of the Belfer Center for Science and International Affairs at Harvard Kennedy School, and the 25th U.S. secretary of defense;
    Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science;
    General Richard Clarke, commander of USSOCOM;
    Colonel Drew Cukor, chief of the Algorithmic Warfare Cross-Function Team in the ISR Operations Directorate, Warfighter Support, Office of the Undersecretary of Defense for Intelligence;
    Stephanie Culberson, chief of international affairs in the Department of Defense Joint Artificial Intelligence Center;
    Dario Gil, senior vice president and director of IBM Research and chair of the MIT-IBM Watson AI Lab;
    Tucker “Cinco” Hamilton, U.S. Air Force colonel and director of the USAF/MIT AI Accelerator;
    Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren (1894) Professor;
    David Joyner, executive director of online education and of the Online Master of Science in Computer Science Program in Georgia Tech’s College of Computing;
    Sertac Karaman, associate professor of aeronautics and astronautics at MIT;
    Thom Kenney, USSOCOM chief data officer and the director of SOF Artificial Intelligence;
    Sangbae Kim, professor of mechanical engineering at MIT;
    Aleksander Madry, professor of computer science at MIT;
    Asu Ozdaglar, the MathWorks Professor of Electrical Engineering and Computer Science at MIT;
    L. Rafael Reif, MIT president;
    Eric Schmidt, visiting MIT Innovation Fellow, former CEO and chair of Google, and co-founder of Schmidt Futures;
    Julie Shah, associate professor of aeronautics and astronautics at MIT;
    David Spirk, U.S. Department of Defense chief data officer;
    Joshua Tenenbaum, professor of computational cognitive science at MIT;
    Antonio Torralba, the Delta Electronics Professor of Electrical Engineering and Computer Science at MIT; and
    Daniel Weitzner, founding director of the MIT Internet Policy Research Initiative and principal research scientist at the MIT Computer Science and Artificial Intelligence Laboratory.

    Originally envisioned as an on-campus program, the USSOCOM course was moved online due to the Covid-19 pandemic. This change made it possible to accommodate a significantly higher number of attendees, and roughly 300 USSOCOM members participated in the course. Though it was conducted remotely, the course remained highly interactive with roughly 40 participant questions per week fielded by MIT faculty and other presenters in chat and Q&A sessions. Participants who completed the course also received a certificate of completion.

    The success of the course is a promising sign that more offerings of this type could become available at MIT, according to Bhaskar Pant, executive director of MIT Professional Education, which offers continuing education courses to professionals worldwide. “This program has become a blueprint for MIT faculty to brief senior executives on the impact of AI and other technologies that will transform organizations and industries in significant ways,” he says.

  • Uncovering the mysteries of milk

    Sarah Nyquist got her first introduction to biology during high school, when she took an online MIT course taught by genomics pioneer Eric Lander. Initially unsure what to expect, she quickly discovered biology to be her favorite subject. She began experimenting with anything she could find, starting with an old PCR machine and some dining hall vegetables.

    Nyquist entered college as a biology major but soon gravitated toward the more hands-on style of the coursework in her computer science classes. Even as a computer science major and a two-time summer intern at Google, biology was never far from Nyquist’s mind. Her favorite class was taught by a computational biology professor: “It got me so excited to use computer science as a tool to interrogate biological questions,” she recalls.

    During her last two years as an undergraduate at Rice University, Nyquist also worked in a lab at Baylor College of Medicine, eventually co-authoring a paper with Eric Lander himself.

    Nyquist is now a PhD candidate studying computational and systems biology. Her work is co-advised by professors Alex Shalek and Bonnie Berger and uses machine learning to understand single-cell genomic data. Since this technology can be applied to nearly any living material, Nyquist was left to choose her focus.

    After shifting between potential thesis ideas, Nyquist finally settled on studying lactation, an important and overlooked topic in human development. She and postdoc Brittany Goods are currently part of the MIT Milk Study, the first longitudinal study to profile the cells in human breast milk using single-cell genomic data. “A lot of people don’t realize there’s actually live cells in breast milk. Our research is to see what the different cell types are and what they might be doing,” Nyquist says.

    While she started out at MIT studying infectious diseases, Nyquist now enjoys investigating basic science questions about the reproductive health of people assigned female at birth. “Working on my dissertation has opened my eyes to this really important area of research. As a woman, I’ve always noticed a lot is unknown about female reproductive health,” she says. “The idea that I can contribute to that knowledge is really exciting to me.”

    The complexities of milk

    For her thesis, Nyquist and her team have sourced breast milk from over a dozen donors. The samples are collected from immediately postpartum to around 40 weeks later, providing insight into how breast milk changes over time. “We took record of the many changing environmental factors, such as if the child had started day care, if the mother had started menstruating, or if the mother had started hormonal birth control,” says Nyquist. “Any of these co-factors could explain the compositional changes we witnessed.”

    Nyquist also hypothesized that discoveries about breast milk could serve as a proxy for studying breast tissue. Since breast tissue is in use during lactation, researchers have historically struggled to collect tissue samples from lactating donors. “A lot is unknown about the cellular composition of human breast tissue during lactation, even though it produces an important early source of nutrition,” she adds.

    Overall, the team has found a lot of heterogeneity between donors, suggesting breast milk is more complicated than expected. They have observed that the cells in milk consist primarily of a type of structural cell that increases in quantity over time. Her team hypothesized that this transformation could be due to the high turnover of breast epithelial tissue during breastfeeding. While the reasons are still unclear, their data add to the field’s previous understanding.

    Other aspects of their findings have validated some early discoveries about important immune cells in breast milk. “We found a type of macrophage in human breast milk that other researchers have identified before in mouse breast tissue,” says Nyquist. “We were really excited that our results confirmed similar things they were seeing.”

    Applying her research to Covid-19

    In addition to studying cells in breast milk, Nyquist has applied her skills to studying organ cells that can be infected by Covid-19. The study began early in the pandemic, when Nyquist and her lab mates realized they could explore their lab’s collective cellular data in a new way. “We began looking to see if there were any cells that expressed genes that can be hijacked for cellular entry by the Covid-19 virus,” she says. “Sure enough, we found there are cells in nasal, lung, and gut tissues that are more susceptible to mediating viral entry.”

    Their results were published and communicated to the public at a rapid speed. To Nyquist, this was evidence of how collaboration and computational tools are essential to producing next-generation biological research. “I had never been on a project this fast-moving before — we were able to produce figures in just two weeks. I think it was encouraging to the public to see that scientists are working on this so quickly,” she says.

    Outside of her own research, Nyquist enjoys mentoring and teaching other scientists. One of her favorite experiences was teaching coding at HSSP, a multiweekend program for middle and high schoolers, run by MIT students. The experience encouraged her to think of ways to make coding approachable to students of any background. “It can be challenging to figure out whether to message it as easy or hard, because either can scare people away. I try to get people excited enough to where they can learn the basics and build confidence to dive in further,” she says.

    After graduation, Nyquist hopes to continue her love for mentoring by pursuing a career as a professor. She plans on deepening her research into uterine health, potentially by studying how different infectious diseases affect female reproductive tissues. Her goal is to provide greater insight about biological processes that have long been considered taboo.

    “It’s crazy to me that we have so much more to learn about important topics like periods, breastfeeding, or menopause,” says Nyquist. “For example, we don’t understand how some medications impact people differently during pregnancy. Some doctors tell pregnant people to go off their antidepressants, because they worry it might affect their baby. In reality, there’s so much we don’t actually know.”   

    “When I tell people that this is my career direction, they often say that it’s hard to get funding for female reproductive health research, since it only affects 50 percent of the population,” she says.

    “I think I can convince them to change their minds.”

  • The new wave of robotic automation

    Ask Peter Howard SM ’84, CEO of Realtime Robotics, what he thinks is the biggest bottleneck facing the robotics industry, and he’ll tell you without hesitation that it’s return on investment. “Robotics automation is capable of handling almost any single task that a human can do, but the ROI is not compelling due to the high cost of deployment and the inability to achieve commensurate throughput,” he says.

    But Realtime Robotics has developed a combination of proprietary software and hardware that reduces system deployment time by 70 percent or more, reduces deployment costs by 30 percent or more, and reduces the programming component of building a robotic system in the industrial robot space by upwards of 90 percent. In other words, Realtime Robotics is making robot adoption well worth the investment.

    On some level, people are always planning — even the most spontaneous among us. We plan the day: breakfast, work, meeting, lunch, pick up the dry cleaning, etc. On a more intuitive level, that trip from your desk to the coffee machine and back requires many micro-decisions that get you from point A to point B without bumping into anything or anyone. In fact, we don’t stop making decisions that allow us to successfully navigate our physical environment until we fall asleep.

    In the field of robotics, the computational process of moving a robot from one place to another in the optimal manner without collisions is called motion planning. For 30 years, it has been a thorn in the side of the industry, because successful motion planning is really about instilling robots with the capabilities (intelligence) to make their own decisions to achieve their goals. To be successful, it has to be done in real-time to accommodate variables that pop up in real-life situations. Furthermore, if a robot is going to work with other robots or people, its movements need to be coordinated with its teammates.

    But traditional motion planning relies on rigid software that only allows robots to follow absolute motion plans based on a strict decision tree. It’s a painstaking process that can take days, weeks, even months of point-by-point programming that must take into consideration all possible options to recommend the best, collision-free path for the robot. The fact is, it’s always been too slow to be effective for robot and autonomous vehicle applications in dynamic environments like a factory floor shared by robots and people alike.

    That was the case until Realtime Robotics stepped up and solved the problem with autonomous robot motion planning and multi-robot deconfliction. The company has developed a platform, including a proprietary processor, tailor-made to produce autonomous, collision-free motion plans for multiple robots.

    Built on the research of co-founder George Konidaris, a former postdoc at MIT in the Department of Electrical Engineering and Computer Science, the core technology is embodied in an industrial PC called the Realtime Controller. It precomputes a field of thousands, even millions, of potential motions that the robot is likely to need, and then hardware-accelerates the search over those motions at runtime.

    “We can look at all the potential options, see moment-to-moment, millisecond-to-millisecond, which ones are available, and then find the optimal path through the workspace to get the job done,” says Howard.
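
    The core trick can be sketched in a few lines: do the expensive geometry offline, then reduce the runtime query to a graph search over the motions that are currently collision-free. This is an illustrative approximation of the general technique, not Realtime Robotics’ proprietary implementation; the roadmap structure, voxel sets, and function names are invented for the example.

    ```python
    import heapq

    def plan(roadmap, edge_swept_voxels, occupied_voxels, start, goal):
        """Search a precomputed motion roadmap at runtime.

        roadmap: dict node -> list of (neighbor, cost), built offline
        edge_swept_voxels: dict (node, neighbor) -> set of voxel ids the
            motion sweeps through, also precomputed offline
        occupied_voxels: set of voxel ids currently blocked, per sensors
        """
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for neighbor, step_cost in roadmap[node]:
                # The expensive geometry was done offline; the runtime
                # collision check is just a set intersection.
                if edge_swept_voxels[(node, neighbor)] & occupied_voxels:
                    continue  # this motion would collide right now
                heapq.heappush(frontier,
                               (cost + step_cost, neighbor, path + [neighbor]))
        return None  # no collision-free route exists at this instant
    ```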

    They’ve baked in AI for multi-robot optimization to find the most efficient configuration of the work cell — everything from the positioning of the robot to the sequencing of tasks, and which tasks are going to be done by which robot. “In the space of running this AI for just a few hours, you’re able to achieve a throughput rate that is unimaginably better than what a human programmer is capable of doing,” explains Howard. “Our platform allows new AI-based system makers to stay focused on what they’re good at, while we take care of the difficult underside of the robotics problem.”

    The Realtime Robotics platform also incorporates powerful spatial and object perception pipelines that are used for collision avoidance and workpiece perception, providing unprecedented flexibility while keeping human coworkers safe. “We’re putting on the market the first system that is capable of interacting intimately with people and keeping them safe in the presence of industrial robots,” says Howard.

    In May 2017, Realtime Robotics set up shop at MassRobotics, a Boston-area robotics collective. Three months later, they had completed their first seed round of funding and landed their first contract with Amazon Robotics. A year later, they staged their first killer demo for an audience that included two of the top six robot makers.

    Howard says their strong ties to MIT played no small role in helping garner attention. “MIT ILP [Industrial Liaison Program] and the Startup Exchange have a very strong relationship at MassRobotics and throughout the Boston robotics ecosystem — they were continuously bringing world leaders in the robotics industry through the facility.”

    With Howard guiding the decision-making processes for Realtime Robotics, the go-to-market strategy is to reach end users by collaborating with leading industrial manufacturers as non-exclusive partners. Most recently, they’ve teamed up with Siemens Digital Industries software division to help original equipment manufacturers (OEMs) reduce the time to deploy and adapt to changes during simulation and on the shop floor.

    As for use cases, Howard points to Realtime Robotics’ recent work with Toyota. After working through the first three phases of a multi-phase project, he says they are now entering the exciting process of going out on the factory floor with the automotive manufacturer. To date, they are controlling a multi-robot cell with four robots on the production line. But it won’t be long before this expands to more applications and facilities across North America.

    And it’s not just the factory floor where Realtime Robotics expects to have an impact. Autonomous vehicles (AVs) will benefit tremendously from risk-aware motion planning. Realtime Robotics’ dedicated technology, known as Lightning, can run through hundreds of potential forecasts per sensor cycle. Using the possibilities, and their probabilities, identified by the company’s sensors and AI-powered perception stack, AV stack partners can calculate the best immediate motion plan to ensure safety in anticipation of those possibilities.

    Realtime Robotics currently has global automation OEM leaders promoting their products and top 10 automakers doing the first product rollouts while incorporating the game-changing technology in their own standard tools and workflows. “With breakthrough new capabilities for optimization and safety being added to our platform, as well as tanking up a little bit on the fundraising side, the next six months are going to be a very exciting time,” says Howard.

  • Creating “digital twins” at scale

    Picture this: A delivery drone suffers some minor wing damage on its flight. Should it land immediately, carry on as usual, or reroute to a new destination? A digital twin, a computer model of the drone that has been flying the same route and now experiences the same damage in its virtual world, can help make the call.

    Digital twins are an important part of engineering, medicine, and urban planning, but in most of these cases each twin is a bespoke, custom implementation that only works with a specific application. Michael Kapteyn SM ’18, PhD ’21, has now developed a model that can enable the deployment of digital twins at scale — creating twins for a whole fleet of drones, for instance.

    A mathematical representation called a probabilistic graphical model can be the foundation for predictive digital twins, according to a new study by Kapteyn and his colleagues in the journal Nature Computational Science. The researchers tested out the idea on an unpiloted aerial vehicle (UAV) in a scenario like the one described above.

    “The custom implementations that have been demonstrated so far typically require a significant amount of resources, which is a barrier to real-world deployment,” explains Kapteyn, who recently received his doctorate in computational science and engineering from the MIT Department of Aeronautics and Astronautics.

    “This is exacerbated by the fact that digital twins are most useful in situations where you are managing many similar assets,” he adds. “When developing our model, we always kept in mind the goal of creating digital twins for an entire fleet of aircraft, or an entire farm of wind turbines, or a population of human cardiac patients.”

    “Their work pushes the boundaries of digital twins’ custom implementations that require considerable deployment resources and a high level of expertise,” says Omer San, an assistant professor of mechanical and aerospace engineering at Oklahoma State University who was not involved in the research.

    Kapteyn’s co-authors on the paper include his PhD advisor Karen Willcox SM ’96, PhD ’00, MIT visiting professor and director of the Oden Institute for Computational Engineering and Sciences at the University of Texas at Austin, and former MIT engineering and management master’s student Jacob Pretorius ’03, now chief technology officer of The Jessara Group.

    Evolving twins

    Digital twins have a long history in aerospace engineering; one of the earliest uses was by NASA, in devising strategies to bring the crippled Apollo 13 moon mission home safely in 1970. Researchers in the medical field have been using digital twins for applications like cardiology, to consider treatments such as valve replacement before a surgery.

    However, expanding the use of digital twins to guide the flight of hundreds of satellites, or recommend precision therapies for thousands of heart patients, requires a different approach than the one-off, highly specific digital twins that are usually created, the researchers write.

    To resolve this, Kapteyn and colleagues sought out a unifying mathematical representation of the relationship between a digital twin and its associated physical asset that was not specific to a particular application or use. The researchers’ model mathematically defines a pair of physical and digital dynamic systems, coupled together via two-way data streams as they evolve over time. In the case of the UAV, for example, the parameters of the digital twin are first calibrated with data collected from the physical UAV so that its twin is an accurate reflection from the start.

    As the overall state of the UAV changes over time (through processes such as mechanical wear and tear and flight time logged, among others), these changes are observed by the digital twin and used to update its own state so that it matches the physical UAV. This updated digital twin can then predict how the UAV will change in the future, using this information to optimally direct the physical asset going forward.

    The graphical model allows each digital twin “to be based on the same underlying computational model, but each physical asset must maintain a unique ‘digital state’ that defines a unique configuration of this model,” Kapteyn explains. This makes it easier to create digital twins for a large collection of similar physical assets.
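
    A minimal sketch of that shared-model idea, assuming a toy set of discrete damage states: every asset reuses one computational model, while each twin maintains its own probabilistic digital state, updated Bayesian-style from sensor data. The class names, states, and numbers below are hypothetical stand-ins for the paper’s probabilistic graphical model.

    ```python
    import numpy as np

    class FleetModel:
        """One underlying computational model shared by every asset."""
        STATES = ["healthy", "light_damage", "heavy_damage"]

        def predict_strain(self, state_idx):
            # Expected wing-strain reading under each damage state
            # (placeholder numbers, for illustration only).
            return [1.00, 1.15, 1.40][state_idx]

    class DigitalTwin:
        """Per-asset digital state: a belief over the shared model's states."""

        def __init__(self, model):
            self.model = model
            self.belief = np.array([0.98, 0.01, 0.01])  # calibrated prior

        def assimilate(self, strain_reading, noise_std=0.05):
            # Bayesian update: weight each state by how well it explains
            # the new sensor reading, then renormalize.
            likelihood = np.array([
                np.exp(-0.5 * ((strain_reading - self.model.predict_strain(i))
                               / noise_std) ** 2)
                for i in range(len(self.model.STATES))
            ])
            self.belief = self.belief * likelihood
            self.belief /= self.belief.sum()

        def recommend(self):
            # Use the updated belief to direct the physical asset.
            if self.belief[2] > 0.5:
                return "land immediately"
            if self.belief[1] > 0.5:
                return "reroute with gentler maneuvers"
            return "continue as planned"

    # One shared model, many twins; one digital state per drone:
    model = FleetModel()
    fleet = [DigitalTwin(model) for _ in range(3)]
    fleet[0].assimilate(1.17)  # reading from drone 0 suggests light damage
    print(fleet[0].recommend())
    ```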

    UAV test case

    To test their model, the team used a 12-foot wingspan UAV designed and built together with Aurora Flight Sciences and outfitted with sensor “stickers” from The Jessara Group that were used to collect strain, acceleration, and other relevant data from the UAV.

    The UAV was the test bed for everything from calibration experiments to a simulated “light damage” event. Its digital twin was able to analyze sensor data to extract damage information, predict how the structural health of the UAV would change in the future, and recommend changes in its maneuvering to accommodate those changes.

    The UAV case shows how similar digital-twin modeling could be useful in other situations where environmental wear and tear plays a significant role in operation, such as a wind turbine, a bridge, or a nuclear reactor, the researchers note in their paper.

    “I think this idea of maintaining a persistent set of computational models that are constantly being updated and evolved alongside a physical asset over its entire life cycle is really the essence of digital twins,” says Kapteyn, “and is what we have tried to capture in our model.”

    The probabilistic graphical model approach helps to “seamlessly span different phases of the asset life cycle,” he notes. “In our particular case, this manifests as the graphical model seamlessly extending from the calibration phase into our operational, in-flight phase, where we actually start to use the digital twin for decision-making.”

    The research could help make the use of digital twins more widespread, since “even with existing limitations, digital twins are providing valuable decision support in many different application areas,” Willcox said in a recent interview.

    “Ultimately, we would like to see the technology used in every engineering system,” she added. “At that point, we can start thinking not just about how a digital twin might change the way we operate the system, but also how we design it in the first place.”

    This work was partially supported by the Air Force Office of Scientific Research, the SUTD-MIT International Design Center, and the U.S. Department of Energy.

  • Speeding up clinical trials by making drug production local

    The Boston area has long been home to innovation that leads to impactful new drugs. But manufacturing those drugs for clinical trials often involves international partners and supply chains. The vulnerabilities of that system have become all too apparent during the Covid-19 pandemic.

    Now Snapdragon Chemistry, co-founded by MIT Professor and Associate Provost Tim Jamison, is helping pharmaceutical companies manufacture drugs locally to shorten the time it takes for new drugs to get to patients.

    Snapdragon essentially starts as a chemistry lab, running experiments on behalf of pharmaceutical customers to create molecules of interest. From there it seeks to automate production processes, often lessening the number of steps it takes to create those molecules. Sometimes the new process will require a technology — such as a specialized chemical reactor — the client doesn’t have, so Snapdragon builds the equipment for the client and teaches them to incorporate it into their processes.

    Some of those reactors are being used for the commercial production of approved drugs, although most are designed to help pharmaceutical and biotech companies get through clinical trials more quickly.

    “At the clinical stage, you just want to go as fast as possible to find out whether you have a useful therapeutic or not,” Snapdragon CEO Matt Bio says. “We’re really trying to stay focused on the technology for delivering drugs fast to the clinic.”

    Snapdragon has worked with over 100 companies, ranging from small biotechs to large multinationals like Amgen, for whom it has helped develop potential cancer treatments. The company has also worked with research agencies to push the frontiers of automated material production, including in a project with the Biomedical Advanced Research and Development Authority (BARDA) to develop ribonucleotide triphosphates, which are the building blocks to mRNA-based Covid-19 vaccines.

    In March, Snapdragon announced plans to build a 51,000-square-foot facility in Waltham, Massachusetts, that will enable it to produce more drugs in-house, removing yet another step in getting new drugs into the clinic.

    “It’s about supplying the client with the fastest route possible to the molecule they need to test in the clinic,” Bio says.

    By focusing on the processes and technology for synthesizing chemicals, the company believes it has potential to transform the economics of drug manufacturing at every scale.

    “We can make [drugs] potentially a lot cheaper, and where that’s really interesting is [around questions like] how do you make a tuberculosis drug that’s, say, half a cent?” Bio says. “That’s a lot harder than making these complex drugs. But you need to save every penny if you’re going to roll out to parts of sub-Saharan Africa. Those are new opportunities we get to engage in.”

    An idea, and a pivot

    Jamison began thinking about starting a company when he noticed other scientists were interested in his research around continuous flow photochemistry, which uses light to spark chemical reactions and can offer huge cost and scale advantages over traditional chemistry processing done in batches.

    “Generally, chemistry has been done since its origins in what we call batch mode,” says Jamison, who was also a principal investigator at the Novartis-MIT Center for Continuous Manufacturing and has published a number of papers around continuous flow chemistry processes. “It’s like cooking. We make a set quantity, that’s a batch. But if you’re going to be a food manufacturer, for example, you’d want something that’s continuous to meet the throughput, like an assembly line.”

    In 2012, Jamison began mapping out what a company would look like with eventual co-founder Aaron Beeler, an associate professor of medicinal chemistry at Boston University.  After two years of developing, vetting, and “pressure testing” their business model by seeking guidance from colleagues in their networks and MIT’s Venture Mentoring Service, the founders set out to start a company that would manufacture specialty and fine chemicals, focusing on those that would be well-suited to continuous flow synthesis. Snapdragon officially formed in October 2014 as Firefly Therapeutics.

    Jamison likes to say the company pivoted on day one. Within a week of incorporating, the founders had secured two contracts — not to sell chemicals, but to help pharmaceutical companies develop continuous manufacturing processes.

    Bio joined in 2015, at a time when the company — by then renamed Snapdragon — had secured consulting and services contracts. Snapdragon’s customer base was growing so rapidly that the company moved four times in its first four years, going from needing one lab bench to dozens.

    Snapdragon’s work helping companies improve chemistry processes is still its most common service offering. Most of those improvements come from an understanding of what the latest reactor and automation technology can offer.

    “If you walked around our labs, you’d see a lot of automation and robotics that are doing things that people used to do less efficiently,” Bio says. “Instead of our scientists being in the lab setting up a reaction, breaking down a reaction, they can just think about the chemistry and then use some of the robotic tools to get the answers they want faster.”

    “One area where Snapdragon is really innovating is in lab [operating systems], which are a way of networking literally every single instrument in the company and gathering real-time information about processes,” Jamison says.

    Fulfilling an industry’s potential

    Snapdragon’s Waltham expansion will bring the company full circle, back to the co-founders’ original idea of producing specialty chemicals in-house.

    Bio says the expansion will be particularly beneficial for developing treatments for diseases with smaller patient populations and smaller material requirements. He notes that in some mRNA-based treatments, for example, a kilogram of material can treat millions of people.

    The company also recently received a grant from DARPA to try turning plentiful commodities in the U.S., like natural gas and crop waste, into the starting materials for high-value pharmaceuticals.

    Moving forward, Jamison thinks Snapdragon’s machine-based production processes will only accelerate the company’s ability to innovate.

    “Chemistry of the future could be very different from what we’re doing right now, but we don’t have enough data yet,” Jamison says. “One of the longer-term visions for Snapdragon is creating automated systems capable of generating lots of data, and then using those data as training sets for machine learning algorithms toward any number of applications, from how to make something to predicting properties of materials. That unlocks a lot of exciting possibilities.”

  • Training robots to manipulate soft and deformable objects

    Robots can solve a Rubik’s cube and navigate the rugged terrain of Mars, but they struggle with simple tasks like rolling out a piece of dough or handling a pair of chopsticks. Even with mountains of data, clear instructions, and extensive training, they have a difficult time with tasks easily picked up by a child.

    A new simulation environment, PlasticineLab, is designed to make robot learning more intuitive. By building knowledge of the physical world into the simulator, the researchers hope to make it easier to train robots to manipulate real-world objects and materials that often bend and deform without returning to their original shape. Developed by researchers at MIT, the MIT-IBM Watson AI Lab, and University of California at San Diego, the simulator was launched at the International Conference on Learning Representations in May.

    In PlasticineLab, the robot agent learns how to complete a range of given tasks by manipulating various soft objects in simulation. In RollingPin, the goal is to flatten a piece of dough by pressing on it or rolling over it with a pin; in Rope, to wind a rope around a pillar; and in Chopsticks, to pick up a rope and move it to a target location.

    The researchers trained their agent to complete these and other tasks faster than agents trained under reinforcement-learning algorithms, they say, by embedding physical knowledge of the world into the simulator, which allowed them to leverage gradient descent-based optimization techniques to find the best solution.  

    “Programming a basic knowledge of physics into the simulator makes the learning process more efficient,” says the study’s lead author, Zhiao Huang, a former MIT-IBM Watson AI Lab intern who is now a PhD student at the University of California at San Diego. “This gives the robot a more intuitive sense of the real world, which is full of living things and deformable objects.”

    “It can take thousands of iterations for a robot to master a task through the trial-and-error technique of reinforcement learning, which is commonly used to train robots in simulation,” says the work’s senior author, Chuang Gan, a researcher at IBM. “We show it can be done much faster by baking in some knowledge of physics, which allows the robot to use gradient-based planning algorithms to learn.”

    Basic physics equations are baked into PlasticineLab through a graphics programming language called Taichi. Both Taichi and an earlier simulator that PlasticineLab is built on, ChainQueen, were developed by study co-author Yuanming Hu SM ’19, PhD ’21. Through the use of gradient-based planning algorithms, the agent in PlasticineLab is able to continuously compare its goal against the movements it has made to that point, leading to faster course-corrections.

    “We can find the optimal solution through back propagation, the same technique used to train neural networks,” says study co-author Tao Du, a PhD student at MIT. “Back propagation gives the agent the feedback it needs to update its actions to reach its goal more quickly.”
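
    In code, that planning loop is short. The sketch below assumes a differentiable simulate function of the kind PlasticineLab provides and a goal_shape target; both names, and the action parameterization, are illustrative placeholders rather than the actual PlasticineLab API.

    ```python
    import torch

    def plan_actions(simulate, initial_state, goal_shape, horizon=50, steps=200):
        """Gradient-based planning through a differentiable simulator.

        simulate(state, actions) must be differentiable: it rolls the
        soft-body physics forward and returns the final deformed shape.
        """
        # One 3-D action (e.g., rolling-pin velocity) per timestep.
        actions = torch.zeros(horizon, 3, requires_grad=True)
        optimizer = torch.optim.Adam([actions], lr=0.01)
        for _ in range(steps):
            optimizer.zero_grad()
            final_shape = simulate(initial_state, actions)
            # Task loss: how far the dough is from the target shape.
            loss = ((final_shape - goal_shape) ** 2).mean()
            loss.backward()   # backpropagate through the physics itself
            optimizer.step()  # course-correct the entire action sequence
        return actions.detach()
    ```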

    The work is part of an ongoing effort to endow robots with more common sense so that they one day might be capable of cooking, cleaning, folding the laundry, and performing other mundane tasks in the real world.

    Other authors of PlasticineLab are Siyuan Zhou of Peking University, Hao Su of UCSD, and MIT Professor Joshua Tenenbaum.

  • Using computational tools for molecule discovery

    Discovering a drug, material, or anything new requires finding and understanding molecules. It’s a time- and labor-intensive process; a chemist’s expertise can help it along, but it can only go so quickly and be so efficient, and there’s no guarantee of success. Connor Coley is looking to change that dynamic. The Henri Slezynger (1957) Career Development Assistant Professor in the MIT Department of Chemical Engineering is developing computational tools that would be able to predict molecular behavior and learn from successes and mistakes.

    It’s an intuitive approach and one that still has obstacles, but Coley says that this autonomous platform holds enormous potential for remaking the discovery process. A reservoir of untapped and never-before-imagined molecules would be opened up. Suggestions could be made from the outset, offering a running start and shortening the overall timeline from idea to result. And human capital would no longer be a restriction: scientists would be freed up from monitoring every step and could instead tackle bigger questions that they weren’t able to before. “This would let us boost our productivity and scale out the discovery process much more efficiently,” he says.

    Playing detective

    Molecules present a couple of challenges. They take time to figure out and there are a lot of them. Coley cites estimates that there are 10²⁰ to 10⁶⁰ molecules that are small and biologically relevant, but fewer than 10⁹ have been synthesized and tested. To close that gap and accelerate the process, his group has been working on computational techniques that learn to correlate molecular structures with their functions.

    One of the tools is guided optimization, which would evaluate a molecule across a number of dimensions and determine which will have the best properties for a given task. The aim is to have the model make better predictions as it runs through a technique called active learning, and Coley says that it might reduce the number of experiments it takes for a hypothetical new drug to go from initial stages to clinical trials “by an order of magnitude.”
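
    As a rough sketch of what such a guided-optimization loop can look like, the code below pairs a surrogate property model with a greedy acquisition rule and retrains as assay results arrive. The featurization, model choice, and function names are illustrative assumptions, not Coley’s actual pipeline.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def active_learning_screen(candidates, features, run_assay,
                               seed_size=10, rounds=5, batch=5):
        """Guided optimization over a molecule library.

        candidates: list of molecules; features: (n, d) numpy array of
        molecular descriptors; run_assay: callable returning a measured
        property value for one molecule.
        """
        rng = np.random.default_rng(0)
        scores = {}
        tested = list(rng.choice(len(candidates), seed_size, replace=False))
        for i in tested:
            scores[i] = run_assay(candidates[i])
        model = RandomForestRegressor(n_estimators=100)
        for _ in range(rounds):
            model.fit(features[tested], [scores[i] for i in tested])
            untested = [i for i in range(len(candidates)) if i not in scores]
            preds = model.predict(features[untested])
            # Greedy acquisition: spend the next experiments on the
            # molecules the surrogate model predicts will score highest.
            picks = [untested[j] for j in np.argsort(preds)[-batch:]]
            for i in picks:
                scores[i] = run_assay(candidates[i])
            tested.extend(picks)
        return max(scores, key=scores.get)  # index of best molecule found
    ```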

    There are still inherent limitations. The guided optimization relies on models that are currently available, and molecules, unlike images, aren’t numerical or static. Their shapes change based on factors like environment and temperature. Coley is looking to take those elements into account, so the tool can learn patterns, and the result would be “a more nuanced understanding of what it means to have a molecular structure and how best to capture that as an input to these machine learning models.”

    One bottleneck, as he calls it, is having good test cases to benchmark performance. As an example, two molecules that are mirror images can still behave differently in different environments, one of those being the human body, but many datasets don’t show that. Developing new algorithms and models requires having specific tasks and goals, and he’s working on creating synthetic benchmarks that would be controlled but would still reflect real applications.

    More than selecting molecules, Coley is also working on tools that would generate new structures. The typical method is for a scientist to design property models and make a query. What comes out is a prediction of molecular function, but only for the molecule that was requested. Coley says that new approaches make it possible to ask the model to come up with new ideas and structures that would have a good set of properties, even though it hasn’t been specifically queried. In essence, it “inverts” the process.

    The potential is enormous, but the models are still data-inefficient. It could take more than 100,000 guesses before a “good” molecule is found, which is too many, says Coley, adding that the desire is to be able to discover molecules in a closed-loop fashion. An essential aspect of achieving that goal is to constrain generation to abide by the rules of synthetic chemistry; otherwise, it could take months to test what the model proposes. In the new approach, it would be able to “quality check” and propose both molecules and pathways. He also wants to get to the point where models will understand the variability in and uncertainty of real-world situations. Together, these capabilities would reduce the reliance on human intuition, giving chemists a head start and the time to take on higher level tasks.

    The upside of mistakes

    One limitation with improving any data-driven model is that it hinges on available literature. Coley would like to open that up through a collaborative effort he co-leads, the Open Reaction Database. It would be community-driven and focused on synthetic chemistry, and would encourage researchers to share experiments that haven’t worked and wouldn’t normally be published. That’s not the usual request, and it would entail a mindset shift in the chemistry field, but Coley says that there’s value in looking at what weren’t “successes.” “It adds richness to the data we have,” he says.

    That’s the overarching theme to his work. The computational model would build on the last 100 years of chemistry and end up being a platform that keeps learning. The big-picture goal is to fully automate the process of research. Models and robotics could pick the solutions and mixtures and perform the heating, stirring, and purifying, and whatever product was made could be fed back in and be the start for the next experiment. “That could be hugely enabling in terms of our ability to efficiently make, test, and discover new chemical matter,” Coley says.

    And the end result is that restrictions on discovery would come down to the availability of platforms, not the availability of time, a question of capital rather than human resources. The missing piece is designing a computational approach that can identify new structures and have a better chance of success from the outset. In actuality, it’s not just about automation, which goes through steps in a prescribed manner. What Coley wants is the extra component of being able to generate ideas, test hypotheses, respond to surprises, and adjust accordingly. “My goal is to achieve that full level of autonomy,” he says.

  • Unleashing capacity at Heineken México with systems thinking from MIT

    It’s no secret that a manufacturer’s ability to maintain and ideally increase production capability is the basis for long-run competitive success. But discovering a way to significantly increase production without buying a single piece of new equipment — that may strike you as a bit more surprising. 

    Global beer manufacturer Heineken is the second-largest brewer in the world. Founded in 1864, the company owns over 160 breweries in more than 70 countries and sells more than 8.5 million barrels of its beer brands in the United States alone. In addition to its sustained earnings, the company has demonstrated significant social and environmental responsibility, making it a globally admired brand. Now, thanks to a pair of MIT Sloan Executive Education alumni, the firm has applied data-driven developments and AI augmentation to its operations, helping it solve a considerable production bottleneck that unleashed hidden capacity in the form of millions of cases of beer at its plant in México.

    Little’s Law, big payoffs

    Federico Crespo, CEO of fast-growing industrial tech company Valiot.io, and Miguel Aguilera, supply chain digital transformation and innovation manager at Heineken México, first met at the MIT Sloan Executive Education program Implementing Industry 4.0: Leading Change in Manufacturing and Operations. During this short course led by John Carrier, senior lecturer in the System Dynamics Group at MIT Sloan, Crespo and Aguilera acquired the tools they needed to spark a significant improvement process at Mexico’s largest brewery.

    Ultimately, they would use Valiot’s AI-powered technology to optimize the scheduling process in the presence of unpredictable events, drastically increasing the brewery throughput and improving worker experience. But it all started with a proper diagnosis of the problem using Little’s Law.

    Often referred to as the First Law of Operations, Little’s Law is named for John D.C. Little, a professor post tenure at MIT Sloan and an MIT Institute Professor Emeritus. Little proved that the three most important properties of any system — throughput, lead time, and work-in-process — must obey the following simple relationship:

    Work-in-Process = Throughput × Lead Time

    Little’s Law is particularly useful for detecting and quantifying the presence of bottlenecks and lost throughput in any system. And it is one of the key frameworks taught in Carrier’s Implementing Industry 4.0 course.
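
    A hypothetical worked example shows how the law quantifies a bottleneck (the numbers below are invented for illustration, not Heineken México’s actual figures):

    ```python
    # Little's Law: WIP = Throughput x Lead Time, so Lead Time = WIP / Throughput.
    wip_cases = 12_000           # cases waiting in bright beer tanks (hypothetical)
    throughput_cases_hr = 2_000  # cases per hour leaving the bottling lines

    lead_time_hr = wip_cases / throughput_cases_hr
    print(f"Implied lead time: {lead_time_hr:.1f} hours")  # 6.0 hours

    # If this stage only needs ~2 hours of actual processing, the other
    # 4 hours are queueing: a quantified, measurable bottleneck.
    ```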

    Crespo and Aguilera applied Little’s Law and worked backward through the entire production process, examining cycle times to assess wait times and identify the biggest bottlenecks in the brewery.

    Specifically, they discovered a significant bottleneck at the filtration stage. As beer moved from maturation and filtration to bright beer tanks (BBT), it was often held up waiting to be routed to the bottling and canning lines, due to various upsets and interruptions throughout the facility as well as real-time demand-based production updates.

    This would typically initiate a manual, time-intensive rescheduling process. Operators had to track down handwritten production logs to figure out the current state of the bottling lines and inventory the supply by manually entering the information into a set of spreadsheets stored on a local computer. Each time a line was down, a couple hours were lost.

    With the deficiency identified, the facility quickly took action to solve it.

    Bottlenecks introduce habits, which evolve into culture

    Once bottlenecks have been identified, the next logical step is to remove them. However, this can be particularly challenging, as persistent bottlenecks change the way people work within the system, becoming part of worker identity and the reward system.

    “Culture can act to reject any technological advance, no matter how beneficial this technology may be to the overall system,” says Carrier. “But culture can also provide a powerful mechanism for change and serve as a problem-solving device.”

    The best approach to introducing a new technology, advises Carrier, is to find early projects that reduce human struggle, which inevitably leads to overall improvements in productivity, reliability, and safety.

    Heineken México’s digital transformation

    Working with Crespo and his team at Valiot.io, and with the full support of Sergio Rodriguez, vice president of manufacturing at Heineken México, Aguilera and the Monterrey brewery team began connecting the enterprise resource planning system and in-floor sensors to digitize the brewing process. Valiot’s data monitors ensured data quality throughout the application. Fed by real-time data, machine learning was applied to the filtration and BBT processes to optimize the daily production schedule. As a result, BBT and filtration time were reduced in each cycle. Brewing capacity also increased significantly per month. The return on the investment was clear within the first month of implementation.

    The migration to digital has enabled Heineken México to have a real-time visualization of the bottling lines and filtering conditions in each batch. With AI constantly monitoring and learning from ongoing production, the technology automatically optimizes efficiency every step of the way. And, using the real-time visualization tools, human operators in the factory can now make adjustments on the fly without slowing down or stopping production. On top of that, the operators can do their jobs from home effectively, which has had significant benefits given the Covid-19 pandemic.

    The key practical aspects

    The Valiot team was required to be present on the floor with the operators to decode what they were doing, and the algorithm had to be constantly tested against performance. According to Sergio Rodriguez Garza, vice president of supply chain for Heineken México, success was ultimately based on the fact that Valiot’s approach was impacting the profit and loss, not simply counting the number of use cases implemented.

    “The people who make the algorithms do not always know where the value in the facility is,” says Garza. “For this reason, it is important to create a bridge between the areas in charge of digitization and the areas in charge of the process. This process is not yet systematic; each plant has a different bottleneck, and each needs its own diagnosis. However, the process of diagnosis is systematic, and each plant manager is responsible for his/her own plant’s diagnosis of the bottleneck.”

    “A unique diagnosis is the key,” adds Carrier, “and a quality diagnosis is based on a fundamental understanding of systems thinking.”