More stories


    This 3D printer doesn’t gloss over the details

    Shape, color, and gloss.
    Those are an object’s three most salient visual features. Currently, 3D printers can reproduce shape and color reasonably well. Gloss, however, remains a challenge. That’s because 3D printing hardware isn’t designed to deal with the different viscosities of the varnishes that lend surfaces a glossy or matte look.
    MIT researcher Michael Foshey and his colleagues may have a solution. They’ve developed a combined hardware and software printing system that uses off-the-shelf varnishes to finish objects with realistic, spatially varying gloss patterns. Foshey calls the advance “a chapter in the book of how to do high-fidelity appearance reproduction using a 3D printer.”
    He envisions a range of applications for the technology. It might be used to faithfully reproduce fine art, allowing near-flawless replicas to be distributed to museums without access to originals. It might also help create more realistic-looking prosthetics. Foshey hopes the advance represents a step toward visually perfect 3D printing, “where you could almost not tell the difference between the object and the reproduction.”
    Foshey, a mechanical engineer in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), will present the paper at next month’s SIGGRAPH Asia conference, along with lead author Michal Piovarči of the University of Lugano in Switzerland. Co-authors include MIT’s Wojciech Matusik, Vahid Babaei of the Max Planck Institute, Szymon Rusinkiewicz of Princeton University, and Piotr Didyk of the University of Lugano.
    Glossiness is simply a measure of how much light is reflected from a surface. A high gloss surface is reflective, like a mirror. A low gloss, or matte, surface is unreflective, like concrete. Varnishes that lend a glossy finish tend to be less viscous and to dry into a smooth surface. Varnishes that lend a matte finish are more viscous — closer to honey than water. They contain large polymers that, when dried, protrude randomly from the surface and absorb light. “You have a bunch of these particles popping out of the surface, which gives you that roughness,” says Foshey. 
    But those polymers pose a dilemma for 3D printers, whose skinny fluid channels and nozzles aren’t built for honey. “They’re very small, and they can get clogged easily,” says Foshey.
    The state-of-the-art way to reproduce a surface with spatially varying gloss is labor-intensive: The object is initially printed with high gloss and with support structures covering the spots where a matte finish is ultimately desired. Then the support material is removed to lend roughness to the final surface. “There’s no way of instructing the printer to produce a matte finish in one area, or a glossy finish in another,” says Foshey. So, his team devised one.
    They designed a printer with large nozzles and the ability to deposit varnish droplets of varying sizes. The varnish is stored in the printer’s pressurized reservoir, and a needle valve opens and closes to release varnish droplets onto the printing surface. A variety of droplet sizes is achieved by controlling factors like the reservoir pressure and the speed of the needle valve’s movements. The more varnish released, the larger the droplet deposited. The same goes for the speed of the droplet’s release. “The faster it goes, the more it spreads out once it impacts the surface,” says Foshey. “So we essentially vary all these parameters to get the droplet size we want.”
    The printer achieves spatially varying gloss through halftoning. In this technique, discrete varnish droplets are arranged in patterns that, when viewed from a distance, appear like a continuous surface. “Our eyes actually do the mixing itself,” says Foshey. The printer uses just three off-the-shelf varnishes — one glossy, one matte, and one in between. By incorporating these varnishes into its preprogrammed halftoning pattern, the printer can yield continuous, spatially varying shades of glossiness across the printing surface.
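The halftoning idea can be illustrated with a standard error-diffusion dither. The sketch below is not the MIT team's code; it simply quantizes a target gloss map to three discrete "varnish" levels (glossy, in between, matte) while diffusing the quantization error so that, viewed from a distance, the average gloss matches the target.

```python
# Illustrative sketch (not the authors' code): Floyd-Steinberg error
# diffusion that quantizes a target gloss map to three varnish levels.
def halftone(target, levels=(0.0, 0.5, 1.0)):
    """Quantize a 2D gloss map (values in [0, 1]) to discrete varnishes."""
    rows, cols = len(target), len(target[0])
    work = [row[:] for row in target]          # mutable working copy
    out = [[0.0] * cols for _ in range(rows)]
    for y in range(rows):
        for x in range(cols):
            old = work[y][x]
            new = min(levels, key=lambda v: abs(v - old))  # nearest varnish
            out[y][x] = new
            err = old - new                    # diffuse quantization error
            if x + 1 < cols:
                work[y][x + 1] += err * 7 / 16
            if y + 1 < rows:
                if x > 0:
                    work[y + 1][x - 1] += err * 3 / 16
                work[y + 1][x] += err * 5 / 16
                if x + 1 < cols:
                    work[y + 1][x + 1] += err * 1 / 16
    return out
```

On a uniform quarter-gloss target, the output contains only the three discrete levels, yet their spatial average stays close to the requested gloss, which is the effect the eye "mixes" at a distance.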
    Along with the hardware, Foshey’s team produced a software pipeline to control the printer’s output. First, the user indicates their desired gloss pattern on the surface to be printed. Next, the printer runs a calibration, trying various halftoning patterns of the three supplied varnishes. Based on the reflectance of those calibration patterns, the printer determines the proper halftoning pattern to use on the final print job to achieve the best possible reproduction. The researchers demonstrated their results on a variety of “2.5D” objects — mostly flat printouts with textures that varied by half a centimeter in height. “They were impressive,” says Foshey. “They definitely have more of a feel of what you’re actually trying to reproduce.”
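The calibration step can be thought of as building a lookup from measured reflectance back to a printable pattern. This is a hypothetical sketch, not the team's pipeline: given measured reflectances for a handful of candidate patterns, it picks the pattern whose measurement best matches each target gloss value.

```python
# Hypothetical sketch of the calibration step described above: print
# candidate halftone patterns, measure each one's reflectance, then pick
# the pattern whose measurement is closest to a requested gloss value.
def build_lookup(measured):
    """measured: {pattern_name: reflectance in [0, 1]} from calibration."""
    def choose(target_gloss):
        return min(measured, key=lambda p: abs(measured[p] - target_gloss))
    return choose
```

With three calibrated patterns, a request for high gloss resolves to the pattern that reflected the most light during calibration.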
    The team plans to continue developing the hardware for use on fully-3D objects. Didyk says “the system is designed in such a way that the future integration with commercial 3D printers is possible.”
    This work was supported by the National Science Foundation and the European Research Council.


    Shrinking massive neural networks used to model language

    You don’t need a sledgehammer to crack a nut.
    Jonathan Frankle is researching artificial intelligence — not noshing pistachios — but the same philosophy applies to his “lottery ticket hypothesis.” It posits that, hidden within massive neural networks, leaner subnetworks can complete the same task more efficiently. The trick is finding those “lucky” subnetworks, dubbed winning lottery tickets.
    In a new paper, Frankle and colleagues discovered such subnetworks lurking within BERT, a state-of-the-art neural network approach to natural language processing (NLP). As a branch of artificial intelligence, NLP aims to decipher and analyze human language, with applications like predictive text generation or online chatbots. In computational terms, BERT is bulky, typically demanding supercomputing power unavailable to most users. Access to BERT’s winning lottery ticket could level the playing field, potentially allowing more users to develop effective NLP tools on a smartphone — no sledgehammer needed.
    “We’re hitting the point where we’re going to have to make these models leaner and more efficient,” says Frankle, adding that this advance could one day “reduce barriers to entry” for NLP.
    Frankle, a PhD student in Michael Carbin’s group at the MIT Computer Science and Artificial Intelligence Laboratory, co-authored the study, which will be presented next month at the Conference on Neural Information Processing Systems. Tianlong Chen of the University of Texas at Austin is the lead author of the paper, whose collaborators include Zhangyang Wang, also of the University of Texas at Austin, as well as Shiyu Chang, Sijia Liu, and Yang Zhang, all of the MIT-IBM Watson AI Lab.
    You’ve probably interacted with a BERT network today. It’s one of the technologies that underlies Google’s search engine, and it has sparked excitement among researchers since Google released BERT in 2018. BERT is a method of creating neural networks — algorithms that use layered nodes, or “neurons,” to learn to perform a task through training on numerous examples. BERT is trained by repeatedly attempting to fill in words left out of a passage of writing, and its power lies in the gargantuan size of this initial training dataset. Users can then fine-tune BERT’s neural network to a particular task, like building a customer-service chatbot. But wrangling BERT takes a ton of processing power.
    “A standard BERT model these days — the garden variety — has 340 million parameters,” says Frankle, adding that the number can reach 1 billion. Fine-tuning such a massive network can require a supercomputer. “This is just obscenely expensive. This is way beyond the computing capability of you or me.”
    Chen agrees. Despite BERT’s burst in popularity, such models “suffer from enormous network size,” he says. Luckily, “the lottery ticket hypothesis seems to be a solution.”
    To cut computing costs, Chen and colleagues sought to pinpoint a smaller model concealed within BERT. They experimented by iteratively pruning parameters from the full BERT network, then comparing the new subnetwork’s performance to that of the original BERT model. They ran this comparison for a range of NLP tasks, from answering questions to filling in the blank word in a sentence.
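Iterative magnitude pruning, the core mechanic behind lottery-ticket experiments, is simple to sketch on a toy weight matrix. This is not the authors' code, and no BERT model is loaded here; it just shows how repeatedly zeroing the smallest surviving weights carves out a sparse subnetwork.

```python
import numpy as np

# Toy sketch of iterative magnitude pruning (the mechanic behind
# lottery-ticket experiments); not the authors' code, and BERT itself
# is not involved. Each round removes the smallest-magnitude fraction
# of the weights that survived previous rounds.
def iterative_prune(weights, prune_frac=0.2, rounds=3):
    """Return (pruned weights, boolean mask of surviving weights)."""
    mask = np.ones_like(weights, dtype=bool)
    for _ in range(rounds):
        surviving = np.abs(weights[mask])
        # Magnitude below which the smallest prune_frac of survivors fall.
        cutoff = np.quantile(surviving, prune_frac)
        mask &= np.abs(weights) > cutoff
    return weights * mask, mask
```

Three rounds at 20 percent each leave roughly half the weights, mirroring the general shape (though not the scale) of the 40 to 90 percent reductions the researchers report.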
    The researchers found successful subnetworks that were 40 to 90 percent slimmer than the initial BERT model, depending on the task. Plus, they were able to identify those winning lottery tickets before running any task-specific fine-tuning — a finding that could further minimize computing costs for NLP. In some cases, a subnetwork picked for one task could be repurposed for another, though Frankle notes this transferability wasn’t universal. Still, Frankle is more than happy with the group’s results.
    “I was kind of shocked this even worked,” he says. “It’s not something that I took for granted. I was expecting a much messier result than we got.”
    This discovery of a winning ticket in a BERT model is “convincing,” according to Ari Morcos, a scientist at Facebook AI Research. “These models are becoming increasingly widespread,” says Morcos. “So it’s important to understand whether the lottery ticket hypothesis holds.” He adds that the finding could allow BERT-like models to run using far less computing power, “which could be very impactful given that these extremely large models are currently very costly to run.”
    Frankle agrees. He hopes this work can make BERT more accessible, because it bucks the trend of ever-growing NLP models. “I don’t know how much bigger we can go using these supercomputer-style computations,” he says. “We’re going to have to reduce the barrier to entry.” Identifying a lean, lottery-winning subnetwork does just that — allowing developers who lack the computing muscle of Google or Facebook to still perform cutting-edge NLP. “The hope is that this will lower the cost, that this will make it more accessible to everyone … to the little guys who just have a laptop,” says Frankle. “To me that’s really exciting.”
    This research was funded, in part, by IBM.


    Computer-aided creativity in robot design

    So, you need a robot that climbs stairs. What shape should that robot be? Should it have two legs, like a person? Or six, like an ant?
    Choosing the right shape will be vital for your robot’s ability to traverse a particular terrain. And it’s impossible to build and test every potential form. But now an MIT-developed system makes it possible to simulate them and determine which design works best.
    You start by telling the system, called RoboGrammar, which robot parts are lying around your shop — wheels, joints, etc. You also tell it what terrain your robot will need to navigate. And RoboGrammar does the rest, generating an optimized structure and control program for your robot.
    The advance could inject a dose of computer-aided creativity into the field. “Robot design is still a very manual process,” says Allan Zhao, the paper’s lead author and a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). He describes RoboGrammar as “a way to come up with new, more inventive robot designs that could potentially be more effective.”
    Zhao is the lead author of the paper, which he will present at this month’s SIGGRAPH Asia conference. Co-authors include PhD student Jie Xu, postdoc Mina Konaković-Luković, postdoc Josephine Hughes, PhD student Andrew Spielberg, and professors Daniela Rus and Wojciech Matusik, all of MIT.
    Ground rules
    Robots are built for a near-endless variety of tasks, yet “they all tend to be very similar in their overall shape and design,” says Zhao. For example, “when you think of building a robot that needs to cross various terrains, you immediately jump to a quadruped,” he adds, referring to a four-legged animal like a dog. “We were wondering if that’s really the optimal design.”
    Zhao’s team speculated that more innovative design could improve functionality. So they built a computer model for the task — a system that wasn’t unduly influenced by prior convention. And while inventiveness was the goal, Zhao did have to set some ground rules.
    The universe of possible robot forms is “primarily composed of nonsensical designs,” Zhao writes in the paper. “If you can just connect the parts in arbitrary ways, you end up with a jumble,” he says. To avoid that, his team developed a “graph grammar” — a set of constraints on the arrangement of a robot’s components. For example, adjoining leg segments should be connected with a joint, not with another leg segment. Such rules ensure each computer-generated design works, at least at a rudimentary level.
    Zhao says the rules of his graph grammar were inspired not by other robots but by animals — arthropods in particular. These invertebrates include insects, spiders, and lobsters. As a group, arthropods are an evolutionary success story, accounting for more than 80 percent of known animal species. “They’re characterized by having a central body with a variable number of segments. Some segments may have legs attached,” says Zhao. “And we noticed that that’s enough to describe not only arthropods but more familiar forms as well,” including quadrupeds. Zhao adopted the arthropod-inspired rules thanks in part to this flexibility, though he did add some mechanical flourishes. For example, he allowed the computer to conjure wheels instead of legs.
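A graph grammar of this flavor can be sketched as a small set of rewrite rules. The miniature grammar below is hypothetical, far simpler than RoboGrammar's actual rule set, but it shows the key property: every derivation yields a buildable, arthropod-like structure (a segmented body whose limbs always attach through joints, and whose limbs may end in legs or wheels).

```python
import random

# Hypothetical miniature "graph grammar" in the spirit described above;
# RoboGrammar's actual rule set is far richer. Each rule rewrites a
# symbol into components, so any derivation produces a segmented body
# whose limbs are always attached through joints.
RULES = {
    "robot":   [["segment"], ["segment", "robot"]],    # one or more segments
    "segment": [["body"], ["body", "limb", "limb"]],   # limbs come in pairs
    "limb":    [["joint", "leg"], ["joint", "wheel"]], # joints are required
}

def derive(symbol="robot", rng=random):
    """Expand a symbol until only concrete parts remain."""
    if symbol not in RULES:
        return [symbol]            # terminal part: body, joint, leg, or wheel
    parts = []
    for child in rng.choice(RULES[symbol]):
        parts.extend(derive(child, rng))
    return parts
```

By construction, every design the grammar emits has exactly one joint per leg or wheel and at least one body segment, so nonsensical jumbles (a leg welded directly to a leg, say) can never appear.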


    A phalanx of robots
    Using Zhao’s graph grammar, RoboGrammar operates in three sequential steps: defining the problem, drawing up possible robotic solutions, then selecting the optimal ones. Problem definition largely falls to the human user, who inputs the set of available robotic components, like motors, legs, and connecting segments. “That’s key to making sure the final robots can actually be built in the real world,” says Zhao. The user also specifies the variety of terrain to be traversed, which can include combinations of elements like steps, flat areas, or slippery surfaces.
    With these inputs, RoboGrammar then uses the rules of the graph grammar to design hundreds of thousands of potential robot structures. Some look vaguely like a racecar. Others look like a spider, or a person doing a push-up. “It was pretty inspiring for us to see the variety of designs,” says Zhao. “It definitely shows the expressiveness of the grammar.” But while the grammar can crank out quantity, its designs aren’t always of optimal quality.
    Choosing the best robot design requires controlling each robot’s movements and evaluating its function. “Up until now, these robots are just structures,” says Zhao. The controller is the set of instructions that brings those structures to life, governing the movement sequence of the robot’s various motors. The team developed a controller for each robot with an algorithm called Model Predictive Control, which prioritizes rapid forward movement.
    “The shape and the controller of the robot are deeply intertwined,” says Zhao, “which is why we have to optimize a controller for every given robot individually.” Once each simulated robot is free to move about, the researchers seek high-performing robots with a “graph heuristic search.” This neural network algorithm iteratively samples and evaluates sets of robots, and it learns which designs tend to work better for a given task. “The heuristic function improves over time,” says Zhao, “and the search converges to the optimal robot.”
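The outer search loop can be caricatured as a bandit-style sampler whose value estimates play the role of the learned heuristic. This is a schematic stand-in, not MIT's implementation: it biases sampling toward design families that have scored well so far, so the heuristic sharpens as more robots are evaluated.

```python
import random

# Schematic stand-in for the "graph heuristic search" loop described
# above (not MIT's code): keep a running value estimate per design
# family and bias sampling toward families that have scored well, so
# the heuristic improves as more candidate robots are evaluated.
def heuristic_search(families, score, iterations=300, eps=0.2, rng=random):
    value = {f: 0.0 for f in families}   # learned heuristic estimates
    count = {f: 0 for f in families}
    for _ in range(iterations):
        if rng.random() < eps:           # explore a random family
            f = rng.choice(families)
        else:                            # exploit the current best estimate
            f = max(families, key=value.get)
        s = score(f)                     # stand-in for an MPC rollout score
        count[f] += 1
        value[f] += (s - value[f]) / count[f]   # incremental running mean
    return max(families, key=value.get)
```

Given a noisy scoring function that favors one family, the search reliably converges on it, much as RoboGrammar's search converged on quadrupeds.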
    This all happens before the human designer ever picks up a screw.
    “This work is a crowning achievement in the 25-year quest to automatically design the morphology and control of robots,” says Hod Lipson, a mechanical engineer and computer scientist at Columbia University, who was not involved in the project. “The idea of using shape-grammars has been around for a while, but nowhere has this idea been executed as beautifully as in this work. Once we can get machines to design, make and program robots automatically, all bets are off.”
    Zhao intends the system as a spark for human creativity. He describes RoboGrammar as a “tool for robot designers to expand the space of robot structures they draw upon.” To show its feasibility, his team plans to build and test some of RoboGrammar’s optimal robots in the real world. Zhao adds that the system could be adapted to pursue robotic goals beyond terrain traversing. And he says RoboGrammar could help populate virtual worlds. “Let’s say in a video game you wanted to generate lots of kinds of robots, without an artist having to create each one,” says Zhao. “RoboGrammar would work for that almost immediately.”
    One surprising outcome of the project? “Most designs did end up being four-legged in the end,” says Zhao. Perhaps manual robot designers were right to gravitate toward quadrupeds all along. “Maybe there really is something to it.”


    How humans use objects in novel ways to solve problems

    Human beings are naturally creative tool users. When we need to drive in a nail but don’t have a hammer, we easily realize that we can use a heavy, flat object like a rock in its place. When our table is shaky, we quickly find that we can put a stack of paper under the table leg to stabilize it. But while these actions seem so natural to us, they are believed to be a hallmark of great intelligence — only a few other species use objects in novel ways to solve their problems, and none can do so as flexibly as people. What provides us with these powerful capabilities for using objects in this way?
    In a new paper published in the Proceedings of the National Academy of Sciences describing work conducted at MIT’s Center for Brains, Minds and Machines, researchers Kelsey Allen, Kevin Smith, and Joshua Tenenbaum study the cognitive components that underlie this sort of improvised tool use. They designed a novel task, the Virtual Tools game, that taps into tool-use abilities: People must select one object from a set of “tools” that they can place in a two-dimensional, computerized scene to accomplish a goal, such as getting a ball into a certain container. Solving the puzzles in this game requires reasoning about a number of physical principles, including launching, blocking, or supporting objects.
    The team hypothesized that there are three capabilities that people rely on to solve these puzzles: a prior belief that guides people’s actions toward those that will make a difference in the scene, the ability to imagine the effect of their actions, and a mechanism to quickly update their beliefs about what actions are likely to provide a solution. They built a model that instantiated these principles, called the “Sample, Simulate, Update,” or “SSUP,” model, and had it play the same game as people. They found that SSUP solved each puzzle at similar rates and in similar ways as people did. On the other hand, a popular deep learning model that could play Atari games well but did not have the same object and physical structures was unable to generalize its knowledge to puzzles it was not directly trained on.
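The three capabilities map naturally onto a sample-simulate-update loop. The toy below is not the authors' SSUP model; it compresses the idea into a one-dimensional stand-in task (pick an action whose simulated outcome lands near a target), with a prior belief over promising actions, an internal simulator consulted before acting, and a belief update toward actions that simulated well.

```python
import random

# Toy "Sample, Simulate, Update" loop in the spirit of the SSUP model
# described above (not the authors' code). The task is a stand-in:
# choose a 1D action whose simulated outcome lands near a target.
def ssup(simulate, target=0.7, iters=30, rng=random):
    mean, spread = 0.5, 0.3            # prior belief over useful actions
    best_action, best_err = mean, float("inf")
    for _ in range(iters):
        action = rng.gauss(mean, spread)   # Sample from the current belief
        outcome = simulate(action)         # Simulate before acting
        err = abs(outcome - target)
        if err < best_err:
            best_action, best_err = action, err
        # Update: shift the belief toward the best action found so far
        mean += 0.3 * (best_action - mean)
        spread = max(0.05, spread * 0.95)  # narrow the search as it learns
    return best_action
```

Because the belief narrows around promising actions, the loop homes in on a good action within a few dozen samples, echoing the handful-of-trials learning the model is meant to capture.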


    Rapid trial-and-error learning with simulation supports flexible tool use and physical reasoning. Video by Kris Brewer.

    This research provides a new framework for studying and formalizing the cognition that supports human tool use. The team hopes to extend this framework to study not just tool use, but also how people create innovative new tools for new problems, and how humans transmit this information to build from simple physical tools to complex objects like computers or airplanes that are now part of our daily lives.
    Kelsey Allen, a PhD student in the Computational Cognitive Science Lab at MIT, is excited about how the Virtual Tools game might support other cognitive scientists interested in tool use: “There is just so much more to explore in this domain. We have already started collaborating with researchers across multiple different institutions on projects ranging from studying what it means for games to be fun, to studying how embodiment affects disembodied physical reasoning. I hope that others in the cognitive science community will use the game as a tool to better understand how physical models interact with decision-making and planning.”
    Joshua Tenenbaum, professor of computational cognitive science at MIT, sees this work as a step toward understanding not only an important aspect of human cognition and culture, but also how to build more human-like forms of intelligence in machines. “Artificial Intelligence researchers have been very excited about the potential for reinforcement learning (RL) algorithms to learn from trial-and-error experience, as humans do, but the real trial-and-error learning that humans benefit from unfolds over just a handful of trials — not millions or billions of experiences, as in today’s RL systems,” Tenenbaum says. “The Virtual Tools game allows us to study this very rapid and much more natural form of trial-and-error learning in humans, and the fact that the SSUP model is able to capture the fast learning dynamics we see in humans suggests it may also point the way towards new AI approaches to RL that can learn from their successes, their failures, and their near misses as quickly and as flexibly as people do.”


    Six MIT faculty elected 2020 AAAS Fellows

    Six MIT faculty members have been elected as fellows of the American Association for the Advancement of Science (AAAS).
    The new fellows are among a group of 489 AAAS members elected by their peers in recognition of their scientifically or socially distinguished efforts to advance science.
    A virtual induction ceremony for the new fellows will be held on Feb. 13, 2021. 
    Nazli Choucri is a professor of political science, a senior faculty member at the Center of International Studies (CIS), and a faculty affiliate at the Institute for Data, Science, and Society (IDSS). She works in the areas of international relations, conflict and violence, and the international political economy, with a focus on cyberspace and the global environment. Her current research is on cyberpolitics in international relations, focusing on integrating cyberspace into the fabric of international relations.
    Catherine Drennan is a professor in the departments of Biology and Chemistry. Her research group seeks to understand how nature harnesses and redirects the reactivity of enzyme metallocenters in order to perform challenging reactions. By combining X-ray crystallography with other biophysical methods, the researchers’ goal is to “visualize” molecular processes by obtaining snapshots of enzymes in action.
    Peter Fisher is a professor in the Department of Physics and currently serves as department head. He carries out research in particle physics in the areas of dark matter detection and the development of new kinds of particle detectors. He is also interested in compact energy supplies and wireless energy transmission.
    Neil Gershenfeld is the director of MIT’s Center for Bits and Atoms, which works to break down boundaries between the digital and physical worlds, from pioneering quantum computing to digital fabrication to the “internet of things.” He is the founder of a global network of over 1,000 fab labs, chairs the Fab Foundation, and leads the Fab Academy.
    Ju Li is the Battelle Energy Alliance Professor of Nuclear Science and Engineering and a professor of materials science and engineering. He studies how atoms and electrons behave and interact, to inform the design of new materials from the atomic level on up. His research areas include overcoming timescale challenges in atomistic simulations, energy storage and conversion, and materials in extreme environments and far from equilibrium.
    Daniela Rus is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. Her research interests include robotics, mobile computing, and data science. Rus is a Class of 2002 MacArthur Fellow; a fellow of ACM, AAAI, and IEEE; and a member of the National Academy of Engineering and the American Academy of Arts and Sciences.
    This year’s fellows will be formally announced in the AAAS News and Notes section of Science on Nov. 27.


    An antidote to “fast fashion”

    In today’s world of fast fashion, retailers sell only a fraction of their inventory, and consumers keep their clothes for about half as long as they did 15 years ago. As a result, the clothing industry has become associated with swelling greenhouse gas emissions and wasteful practices.
    The startup Armoire is addressing these issues with a clothing rental service designed to increase the utilization of clothes and save customers time. The service is based on machine-learning algorithms that use feedback from users to make better predictions about what they’ll wear.
    Customers pay a flat monthly price to get access to a range of high-end styles. Each time they log into Armoire, they get a personalized list of items to choose from. When they don’t want the clothing anymore, they return it to be used by someone else.
    “Our whole goal is to help clothes achieve end of life with a customer rather than at the back of your closet or ending up in a landfill,” Armoire co-founder and CEO Ambika Singh MBA ’16 says. “The metric we look at is the utilization of our clothes, and, amazingly, 95 percent of the things we own have been rented — unlike a normal retailer who might sell 35 percent of what they bring in at the beginning of the season.”
    The company says its service is tailored toward busy women who don’t have time to browse cluttered clothing aisles or endless webpages for new outfits.
    According to Singh, Armoire has grown 300 to 500 percent a year since its founding in 2016. The company now has thousands of customers across the U.S.
    “A typical customer response after a while is they feel really happy when they look at their closet instead of overwhelmed,” Singh says. “It’s fun to have this asset-light way of living.”
    Leaning on MIT’s community
    Singh came to MIT in 2014 with plans to start a company. She had previously spent seven years working in the tech industry, first with Microsoft then as an early hire at two startups.
    She says the first thing that struck her about MIT was the integration between its business and engineering schools. The second was how supportive MIT’s community of professors and students was. She quickly took advantage of both attributes.
    Singh spoke at length with professors about the potential for machine-learning algorithms to provide personalized recommendations and leaned on classmates for early idea validation and testing.
    In fact, when Singh started Armoire, classmates used it as a case study for marketing and analytics research projects. Others became early customers. Singh jokes that by the time she graduated, half of her Sloan class had touched Armoire in some way.
    Singh also worked with various entrepreneurial organizations at MIT, receiving support from the MIT Sandbox Innovation Fund and participating in the Martin Trust Center for MIT Entrepreneurship’s delta v summer accelerator.
    Singh remembers showing up on the first day of delta v with huge racks of clothes and seeing the small desks each team was given as workspace. Fortunately, someone found a nearby conference room with a closet.
    During delta v, Singh and her team bought inventory, got the clothes shipped to the Trust Center, packaged the items, and finally delivered them around campus or to the post office by scooter.
    In the fall of 2016, Singh was joined by Armoire co-founder Zachary Owen PhD ’18, who helped build the company’s recommendation systems but is no longer with Armoire.
    Armoire’s core algorithm is something called a collaborative filter, which makes predictions about user preferences based on data collected on many other users. Such filters work on the assumption that if two people have similar tastes around one item, they share preferences on others. Armoire’s algorithms also make use of dozens of labels the company manually enters for each item around things like color, fit, and seasonality.
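A minimal user-based collaborative filter can make the mechanism concrete. This is an illustrative sketch only (Armoire's production system is not public): each user's scores are mean-centered so that a user who rates everything low doesn't look similar to one who rates everything high, and a missing score is predicted from similar users' scores, weighted by taste similarity.

```python
import math

# Minimal user-based collaborative filter in the spirit described above
# (illustrative only; Armoire's production system is not public).
# ratings maps each user to {item: score}.
def _centered(r):
    m = sum(r.values()) / len(r)
    return {item: v - m for item, v in r.items()}

def _cosine(a, b):
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[i] * b[i] for i in shared)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict(ratings, user, item):
    """Predict `user`'s score for `item` from similar users' scores."""
    mine = _centered(ratings[user])
    my_mean = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other, theirs in ratings.items():
        if other == user or item not in theirs:
            continue
        w = _cosine(mine, _centered(theirs))   # taste similarity weight
        other_mean = sum(theirs.values()) / len(theirs)
        num += w * (theirs[item] - other_mean)
        den += abs(w)
    return (my_mean + num / den) if den else my_mean
```

A user who agrees with one renter and disagrees with another inherits the agreeing renter's opinion of an item she hasn't tried yet, which is exactly the "two people with similar tastes share preferences" assumption described above.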
    At the heart of Armoire is the idea that a clothing rental company can gather more data about customer preferences than a company that sells clothing to customers once. That data can then be used to deliver better service.
    A new model for fashion
    Armoire offers customers three tiers of service depending on how many clothes they want to keep at one time. Customers can keep their clothes as long as they like. The company curates selections from thousands of top designers and independent labels, with styles for being comfortable at home, attending formal business events, working out, and more.
    The Covid-19 pandemic has slowed the company’s growth trajectory, but Singh says it’s also given Armoire’s leadership team a chance to refocus on their existing customers.
    “The good thing about the Covid-19 disruptions is they’ve given us a chance to take a step back and focus on the product,” Singh says. “We’ve focused on our existing base, which is good because with subscription it’s always about adding more value to the customers you have.”
    Singh is also proud of the culture Armoire has fostered. All of Armoire’s warehouse workers are women or nonbinary, an uncommon breakdown in warehouses. Singh credits Armoire’s leadership team with creating a welcoming work environment, noting there’s been very little turnover in Armoire’s warehouses.
    “Some of [our workers] are single moms, and they come with a different set of challenges,” Singh says. “Most warehouses don’t allow people to carry their phone because they’re worried about employees slacking off. If you’re a single mom, that makes the job impractical because you can’t be walking around without your phone and then find out something happened to your kid.”
    Ultimately, Singh credits many companies with trying to innovate in the fashion industry, citing companies helping to clean up clothing production and increase recycling.
    For Armoire, though, meaningful impact will continue to come from helping customers cut down on waste.
    “We don’t get 95 percent of our inventory rented because I’m so good at picking out clothes,” Singh says. “We do it because we took all the data our customers gave us and built a model that helped us understand what we should be buying. It shows the capital efficiency of the business, it shows we make good on our sustainability desire, and when I look forward, it’s about what kind of innovations we can achieve that help us better serve our customers and the world.”


    Lincoln Laboratory establishes Biotechnology and Human Systems Division

    MIT Lincoln Laboratory has established a new research and development division, the Biotechnology and Human Systems Division. The division will address emerging threats to both national security and humanity. Research and development will encompass advanced technologies and systems for improving chemical and biological defense, human health and performance, and global resilience to climate change, conflict, and disasters.
    “We strongly believe that research and development in biology, biomedical systems, biological defense, and human systems is a critically important part of national and global security. The new division will focus on improving human conditions on many fronts,” says Eric Evans, Lincoln Laboratory director.
    The new division unifies four research groups: Humanitarian Assistance and Disaster Relief (HADR) Systems, Counter-Weapons of Mass Destruction Systems, Biological and Chemical Technologies, and Human Health and Performance Systems.
    “We are in a historic moment in the country, and it is a historic moment for Lincoln Laboratory to create a new division. The nation and laboratory are faced with several growing security threats, and there is a pressing need to focus our research and development efforts to address these challenges,” says Edward Wack, who is head of the division.
    The laboratory began its initial work in biotechnology in 1995, through several programs that leveraged expertise in sensors and signal processing for chemical and biological defense systems. Work has since grown to include prototyping systems for protecting high-value facilities and transportation systems, architecting integrated early-warning biodefense systems for the U.S. Department of Defense (DoD), and applying artificial intelligence and synthetic biology technologies to accelerate the development of new drugs. In recent years, synthetic biology programs have expanded to include complex metabolic engineering for the production of novel materials and therapeutic molecules. 
    “The ability to leverage the laboratory’s deep technical expertise to solve today’s challenges has long laid the foundation for the new division,” says Christina Rudzinski, who is an assistant head of the division and formerly led the Counter-Weapons of Mass Destruction Systems Group.
    In recent years, the laboratory has also been growing its work for improving the health and performance of service members, veterans, and civilians. Laboratory researchers have applied decades of expertise in human language technology to understand disorders and injuries of the brain. Other programs have used physiological signals captured with wearable devices to detect heat strain, injury, and infection. The laboratory’s AI and robotics expertise has been leveraged to create prototypes of semi-autonomous medical interventions to help medics save lives on the battlefield and in disaster environments.
    The laboratory’s work in disaster response technology spans the past decade. Its rich history developing sensors and decision-support software translated well to the area of emergency response, leading to the development in 2010 of an emergency communications platform now in use worldwide, and the deployment of its advanced laser detection and ranging imaging system to quickly assess earthquake damage in Haiti. In 2015, the HADR Systems Group was established to build on this work.
    Today, the group develops novel sensors, communication tools, and decision-support systems to aid national and global responses to disasters and humanitarian crises. Last year, the group launched its climate change initiative to develop new programs to monitor, predict, and address current and future climate change impacts.
    Through these initiatives, the laboratory has come to view its work not only in the context of national security, but also global security.
    “Pandemics and climate change can cause instability, and that instability can breed conflict,” says Wack. “It benefits the United States to have a stable world. To the degree that we can, mitigating future pandemics and reducing the impacts of climate change would improve global stability and national security.”
    In anticipation of the growing importance of these global security issues, the laboratory has been significantly increasing program development, strategic hiring, and investment in biotechnology and human systems research over the past few years. Now, that strategic planning and investment in biotechnology research has come to fruition.
    One of the division’s initial goals is to continue to build relationships with MIT partners, including the Department of Biological Engineering, the Institute for Medical Engineering and Science, and the McGovern Institute for Brain Research, as well as Harvard University and local hospitals such as Massachusetts General Hospital. These collaborators have helped bring the laboratory’s sensor technology and algorithms to clinical applications for Covid-19 diagnostics, lung and liver disorders, bone injury, and spinal surgical tools. “We can have a bigger impact by drawing on some of the great expertise on campus and in our Boston medical ecosystem,” says Wack. 
    Another goal is to lead the nation in research surrounding the intersection of AI and biology. This research includes developing advanced AI algorithms for analyzing multimodal biological data, prototyping intelligent autonomous systems, and making AI-enabled biotechnology that is ethical and transparent.
    “Because of our extensive experience supporting the DoD, the laboratory is in a unique position to translate this cutting-edge research, including that from the commercial sector, into a government and national security context,” says Bill Streilein, principal staff in the Biotechnology and Human Systems Division. “This means not only addressing typical AI application issues of data collection and curation, model selection and training, and human-machine teaming, but also issues related to traceability, explainability, and fairness.”
    Leadership also sees this new division as an opportunity to continue to shape an innovative, diverse, and inclusive culture at the laboratory. They will be emphasizing the importance of an interdisciplinary approach to solving the complex research challenges the division faces. 
    “We want help from the rest of the laboratory,” says Jeffrey Palmer, an assistant head of the division who previously led the Human Health and Performance Systems Group. “I think there are many ways that we can help other divisions in their missions, and we absolutely need them for success in ours. These challenges are too big to face without applying the combined capabilities of the entire laboratory.”
    The Biotechnology and Human Systems Division joins Lincoln Laboratory’s eight other divisions: Advanced Technology; Air, Missile, and Maritime Defense Technology; Communication Systems; Cyber Security and Information Sciences; Engineering; Homeland Protection and Air Traffic Control; ISR and Tactical Systems; and Space Systems and Technology. Lincoln Laboratory is a federally funded research and development center.


    3Q: Christine Walley on the evolving perception of robots in the US

    Christine J. Walley, professor of anthropology at MIT and member of the MIT Task Force on the Work of the Future, explores how robots have often been a symbol for anxiety about artificial intelligence and automation. Walley provides a unique perspective in the recent research brief “Robots as Symbols and Anxiety Over Work Loss.” She highlights the historical context of technology and job displacement and gives examples of how other countries approach policies regarding robots, skills, and learning. Here, Walley provides an overview of the brief.

    Q: How are robots seen as a symbol when we think about the changing nature of work in the United States?

    A: In the media, there has been a great deal of concern about robots taking people’s jobs, but, as became clear during conversations with robotics experts for MIT’s Task Force on the Work of the Future, the concerns have outstripped what the technologies are at this point actually capable of. For an anthropologist, however, the point is not that people’s concerns are “irrational,” but that robots have become symbolic encapsulations of much broader anxieties about the changing nature of work in the United States. These anxieties are well-founded. In order to put the technology questions into perspective, however, we have to confront more explicitly the dynamics that are creating more precarious forms of employment, particularly for those on the lower end of the economic spectrum, who are most vulnerable to displacement by AI and automation.

    Q: What can history and anthropology teach us about job displacement and technology and how this affects current anxiety about AI and automation today?

    A: First, we have to remember that technologies are inherently social. How and why they get created or used depends, of course, on what people or corporations want to do with them and what legal, cultural, and institutional frameworks allow or encourage. From the point of view of the companies, they can be used either to complement what workers do in order to increase productivity or be used to displace workers as a cost-cutting measure. There is a need for policies that encourage the former.

    My own research uses both history and ethnography to study former industrial communities in the United States. In the late 19th century, mechanization was used in many industries to displace skilled workers, who were more likely to be unionized and have higher wages. Our recent era has had a strong emphasis on shareholder value and what management scholar David Weil calls “the fissured workplace” — settings in which previously in-house work gets externalized through subcontracting and other non-standard work arrangements. Consequently, there is again a strong tendency to view workers primarily as costs to be eliminated. So, there is good reason for people to be anxious. However, we have to keep in mind that these are primarily political and social questions that need to be addressed, rather than anything inevitable about the technology itself.

    Earlier ethnographies of industrial workplaces found that even with dangerous and repetitive jobs, workers often managed to find ways to take pride in their work and make those jobs meaningful, often through social relationships forged with co-workers. Ethnographies of deindustrialization have also shown how devastating the effects of job loss can be, including long-term transgenerational or cumulative effects on families and entire regions. These effects are found across ethnic and racial groups, with those of color particularly hard hit. The upshot is two-fold. First, we have to be aware of socially and politically destabilizing long-term effects of job loss. There is a need for policies that are better at minimizing this kind of displacement for emergent forms of automation and AI than what we saw with early rounds of deindustrialization in the 1980s and 1990s — particularly since the new jobs being created due to technological innovation won’t necessarily go to those who are losing their jobs. And, second, we need to be thinking not only about numbers of jobs, but how emergent technologies influence workplace sociality and what makes labor meaningful to workers — realities that are crucial to creating a more vibrant future economy that works for ordinary people, and not just Wall Street and corporations.

    Q: What are some of the key takeaways, including policies, that the United States can learn from other countries in the way they think about technology, skills, and learning?

    A: Not everyone in the world is as afraid of job displacement by robots or automation as workers are in the United States. This is not surprising, given that among wealthier countries the United States is an outlier in terms of its lack of universal health-care coverage and often in terms of other benefits and protections. Since health-care coverage in the U.S. is often provided through employers, it makes the possibility of being displaced by robots or automation that much more anxiety-provoking (just as it puts companies that provide health care at a disadvantage by saddling them with rising costs, contributing to the desire to save money by replacing workers with automation). In addition, the U.S. public school system is based on local taxes and is highly inequitable along lines of race and class, with relatively little spent on job retraining or vocational education in comparison to many European countries. Given employers’ need for more educated workers and given rapid technological change and job turnover, this puts many Americans at a strong disadvantage. It’s not surprising that we’re seeing declining social mobility rates in the United States in comparison to many other wealthy countries.

    Policy differences make a substantial difference in how technologies are taken up and the impact they have, or will have, on workers. Some European countries, like Germany and Sweden, have policies in which workers select representatives who participate in decision-making on shop floors or even on management boards, increasing worker input into how new technologies will be used. Some countries, particularly Nordic ones, have also made social benefits more flexible, just as corporations have become more flexible, and are emphasizing continuing education and job retraining as technological transformation creates more job turnover. Although we have seen economic inequality on the rise in many parts of the world, it’s been particularly severe in the U.S. — and emergent technologies are poised to contribute to that. So, it is key for the U.S. to look seriously at what policies are working better in other countries and what we might learn from them.