More stories


    MIT forum examines the rise of automation in the workplace

    “Pop culture does a great job of scaring us that AI will take over the world,” said Professor Daniela Rus, speaking at a virtual MIT event on Wednesday. But realistically, said Rus, who directs the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), robots aren’t going to steal everyone’s jobs overnight — they’re not yet good enough at tasks requiring high dexterity or generalized processing of different kinds of information.
Still, automation has crept into some workplaces in recent years, a trend that’s likely to continue. Throughout the daylong conference, the “AI and the Work of the Future Congress,” which convened speakers from academia, industry, and government, one key theme consistently emerged: Task automation shouldn’t be viewed as a replacement for human work, but as a partner to it. With the exception of some middle-skilled manufacturing jobs, automation has generally improved human productivity, not eliminated the need for it. If people thoughtfully guide the development and deployment of new workplace technologies, the speakers agreed, we could see improvements in both productivity and well-being.
The daylong event was organized by MIT’s Task Force on the Work of the Future, which released its final report this week, along with the Initiative on the Digital Economy and CSAIL. During the forum, task force participants and other science and industry leaders discussed both the social and technological dimensions of these changes.
    Narrow AI
    Rus emphasized that current industrial applications of artificial intelligence are relatively narrow. “What today’s AI systems can do is specialized intelligence, or the ability to solve a very fixed, limited number of problems,” she said. In select industries like insurance and health care, artificial intelligence has been used to boost efficiency for individual tasks, but it hasn’t generally displaced human workers. Fully automated systems, like driverless cars, remain decades in the future. 
    While the rise of artificial intelligence in industry remains gradual, multiple speakers noted how other technologies have rocketed to widespread adoption due to the Covid-19 pandemic. Microsoft CEO Satya Nadella described how videoconferencing and related technologies have enabled the transmission of potentially lifesaving information. “The expert can be remote, but can perhaps more seamlessly transfer their knowledge to the person on the front line,” he said.
    Nadella added that, since so many companies have grown used to videoconferencing, they may never return to 100 percent face-to-face interactions. “There’s going to be real, structural change,” he said. “People are going to question what really requires presence that is physical, versus telepresence. And I think the workflow will adjust.” He noted that workplaces would have to be more intentional about fostering social cohesion among workers in lieu of casual in-person conversations.
Pandemic aside, some speakers pointed out that automation’s impact on work, though generally positive, has been unequal. Some middle-skill manufacturing jobs have been lost due to automation. But those losses aren’t inevitable — they can be avoided through careful deployment of automation, said Bosch CEO Volkmar Denner. “You could go a very aggressive path and say ‘the robot finally could replace human workers,’” said Denner. “The path we chose was completely different.” Robots on Bosch’s manufacturing line are designed not to oust humans, but to make them even more valuable by assisting with particular tasks so their work is more efficient overall.
    “We can find a balance between the economic aspects — introducing automation — and also the social aspects — keeping workers in work,” he said. “Technology always should serve human beings and not vice versa.”
    Other industry leaders agreed. Jeanne Magoulick, engineering manager for Ford Motor Company, said her team is developing artificial intelligence for predictive maintenance of machinery. “It’s going to notify us when a machine seems to be trending out of control, and then we can schedule that for maintenance during the next available window,” she said. “It’s going to make us more efficient.”
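At its core, the kind of predictive maintenance Magoulick describes is statistical process control: watch a sensor stream and flag a machine whose recent readings drift outside limits set by its normal behavior. A minimal sketch of one such rule appears below; the function name, window sizes, and threshold are illustrative assumptions, not details of Ford’s system.

```python
import numpy as np

def trending_out_of_control(readings, baseline_n=100, recent_n=10, n_sigma=3.0):
    """Flag a machine when the mean of its most recent readings drifts outside
    control limits estimated from an earlier baseline window (a basic
    control-chart rule; purely illustrative)."""
    baseline = np.asarray(readings[:baseline_n], dtype=float)
    recent = np.asarray(readings[-recent_n:], dtype=float)
    center = baseline.mean()
    # Standard error of a mean of recent_n samples, under baseline variability.
    limit = n_sigma * baseline.std(ddof=1) / np.sqrt(recent_n)
    return abs(recent.mean() - center) > limit
```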
    “It’s a choice”
    Rus also discussed the use of machines as guardian systems — safeguards that help ensure human workers are performing at their best. She cited a study where radiologists and an artificial intelligence algorithm were separately shown images of lymph node cells and tasked with determining whether they were cancerous or not. The humans’ error rate was 7.5 percent, and the computer’s was 3.5 percent. However, when an image was scanned by both a human and a computer, the resulting error rate was just 0.5 percent, “which is extraordinary,” said Rus.
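A quick back-of-the-envelope check shows why pairing the two can help so much. If the human’s and the model’s mistakes were statistically independent (an assumption made here purely for illustration; the study did not necessarily combine them this way), the chance that both miss the same case is the product of their individual error rates:

```python
# Error rates quoted in the talk, combined under an independence assumption
# that is ours, not the study's.
human_error = 0.075   # 7.5 percent
model_error = 0.035   # 3.5 percent
both_wrong = human_error * model_error
print(f"Idealized joint error rate: {both_wrong:.2%}")  # about 0.26%, the same order as the 0.5% reported
```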
    Julie Shah, MIT associate professor in the Department of Aeronautics and Astronautics, added that this sort of “guardian” relationship between humans and automation could extend to many domains, including self-driving cars and manufacturing systems.
Nadella envisioned that one day the very tools of automation — the ability to design and program computers and robots — will become accessible to those without specialized training. He pointed to examples, such as word processing and spreadsheet programs like Excel, where automation turbocharged productivity without requiring users to learn computer code.
    “Knowledge work got fundamentally transformed,” said Nadella. In the future, “this notion of a citizen-app developer, a citizen-data scientist — I think it’s real.”
    Denner also cautioned, however, that certain tasks — like valuing human lives in an automated driving scenario — are best left to ethicists and society as a whole, not to industrial programmers.
In an afternoon panel about shaping workplace technologies in the future, MIT professor of economics Daron Acemoglu reiterated the refrain that technology isn’t an inevitable force — it’s shaped by humans. Ultimately, he said policymakers and managers will decide how automation fits into the workplace. “There isn’t an ironclad rule of what it is that humans can do and technologies cannot do. They are both fluid. It depends on what we value and how we use technology,” Acemoglu said. “It’s a choice.”


    Why we shouldn’t fear the future of work

    The American workforce is at a crossroads. Digitization and automation have replaced millions of middle-class jobs, while wages have stagnated for many who remain employed. A lot of labor has become insecure, low-income freelance work.
    Yet there is reason for optimism on behalf of workers, as scholars and business leaders outlined in an MIT conference on Wednesday. Automation and artificial intelligence do not just replace jobs; they also create them. And many labor, education, and safety-net policies could help workers greatly as well.
    That was the outlook of many participants at the conference, the “AI and the Work of the Future Congress,” marking the release of the final report of MIT’s Task Force on the Work of the Future. The report concludes that there is no technology-driven jobs wipeout on the horizon, but new policies are needed to match the steady march of innovation; technology has mostly helped white-collar workers, but not the rest of the work force in the U.S.
    “We’re not going to run out of work,” Elisabeth Beck Reynolds, executive director of the task force, and executive director of the MIT Industrial Performance Center, said Wednesday.
    She added: “Clearly the distributional effects of technological change are uneven. We’ve seen the reduction of middle-skill jobs [due] to automation, [along with] jobs in manufacturing, administration, in clerical work, while we’ve seen an increase in jobs for those with higher education and higher skill sets. … Our challenge is to try to train [workers] and make sure we have workers in good positions for those jobs.”
    Indeed, the notion of social responsibility was a leading motif of the conference, which drew an audience of about 1,500 online viewers. 
    “I believe that those of us who are technologists, and who educate tomorrow’s technologists, have a special role to play,” said MIT President L. Rafael Reif, in his introductory remarks at the conference. “It means that, while we are teaching students, in every field, to be fluent in the use of AI strategies and tools, we must be sure that we equip tomorrow’s technologists with equal fluency in the cultural values and ethical principles that should ground and govern how those tools are designed and how they’re used.”
    The daylong event was organized by MIT’s Task Force on the Work of the Future, along with the Initiative on the Digital Economy and the Computer Science and Artificial Intelligence Laboratory.
    Conditions on the ground
    The report notes that over the last four decades, innovation has driven increases in productivity, but that earnings have not followed in step. Since 1978, overall U.S. productivity has risen by 66 percent; yet over the same time, compensation for production and nonsupervisory workers has only risen by 10 percent.
    “Work has become a lot more fragile,” said James Manyika, a senior partner at the consulting firm McKinsey and Company, chair of the McKinsey Global Institute, and a member of McKinsey’s board of directors. “This has affected both middle-wage and lower-wage workers.”
    To be sure, information technology in particular has helped people in engineering, design, medicine, marketing, and many other white-collar fields; and while middle-income jobs have become more scarce, service-sector jobs have expanded but tend to be lower-income.
“Certainly the United States is a good place for high-wage workers to be, but not for lower-wage [workers] and those in the middle,” said Susan Houseman, vice president and director of research at the W.E. Upjohn Institute for Employment Research. “We should be concerned about the growth of nontraditional work arrangements.”
    Moreover, “The U.S. doesn’t seem to be getting a very positive return on its inequality,” said David Autor, the Ford Professor of Economics at MIT, associate head of MIT’s Department of Economics, and a co-chair of the task force. “That is, we have a lot of inequality, but we do not have faster growth.”
    In general, most workers are “not seeming to share in the prosperity that improved technology has got us,” said Robert M. Solow, Institute Professor Emeritus and 1987 Nobel laureate in economics, in recorded remarks shown during the conference.
    That said, Solow observed, “There’s room for a lot of ingenuity here, because since the nature of employment has changed, as we become a service economy rather than a goods-producing economy, there’s room for innovation in how to organize union work. … More active enforcement of antitrust laws, to try to increase the degree of competition in the production of goods and services, would also have the effect of improving the prospects for wages and salaries.”
    He added: “The main factor in the disturbance in the distribution of incomes is probably not technological change.”
    What are the next steps?
    But if there is room for policy interventions to ease the social jolts resulting from technology, which ones make the most sense? In general terms, some conference participants advocated for an openness to market-driven technological change, paired with a substantial safety net to help people handle those disruptive waves of innovation.
    “The real fundamental shift is, we have to think of service jobs the way 100 years ago we thought about manufacturing jobs. In other words, we have to start putting in place … protections and benefits,” said Fareed Zakaria, author and host of the CNN show, “Fareed Zakaria GPS.” He added, “Ultimately, that is the only way you are going to really address this problem. We are not going to bring back tens of millions of manufacturing jobs to the United States. We are going to take these service jobs and make them better jobs. And companies can do that.”
    One conference panel focused on the support of education, particularly public universities and community colleges, where traditionally overlooked pools of workplace talent reside.
    “One of the most important skills or approaches that we need to talk about is how to make sure that people know how to think, how to learn, how to adapt,” said Freeman Hrabowski, president of the University of Maryland at Baltimore County. That said, he noted, people receiving a broad college education can also receive specialist certificates and credentials in particular technical areas and add layers to their skills that are more closely linked to evolving job opportunities. “Both are very important,” he noted.
    Juan Salgado, chancellor of the City Colleges of Chicago, a group of community colleges, pointed out that there are 11.8 million community college students in America — many of whom already hold jobs and have workplace skills in addition to the academic skills they are acquiring.
    “It’s about the assets that are in our institutions, our students, and the fact that we’re not paying enough attention to them,” said Salgado.
    “We know what works,” said Paul Osterman, a professor of human resources and management at the MIT Sloan School of Management, pointing out that many training programs, internships, and other work-directed educational programs have been rigorously assessed and proven to be effective. “It’s taking what we know works and making it work at scale.”
    Saru Jayaraman, president of the advocacy group One Fair Wage and director of the Food Labor Research Center at the University of California at Berkeley, noted that simply raising the minimum wage, especially for food service workers, would have multiple benefits that only start with the increased earnings for roughly 10 percent of the workforce.
    “Increased wages reduce turnover in an industry that has some of the highest turnover rates in any industry in the United States,” said Jayaraman, adding that better wages have “increased employee morale, [and] increased employee productivity and consumer service.”
    Karen Mills, a senior fellow at the Harvard Business School and a former administrator of the Small Business Administration, suggested that good policies are especially important for small businesses, which may not be able to capitalize on technology as much as bigger firms.
    “In the jobs of the future, not all robots are going to be serving you coffee,” said Mills. “There’s still going to be Main Street.” She emphasized the continued need for supportive policies for small businesses, including access to health care for employees and access to capital for firm founders, which would also help small businesses owned by women and people of color.
Rep. Lisa Blunt Rochester of Delaware, who will start her third term as a congresswoman in January, helped found the Congressional Future of Work Caucus, and suggested there is more bipartisan support for federal action than observers may suspect.
    “We launched the caucus right before Covid-19 struck,” she said. “We literally had standing room only. Democrats, Republicans, we had the council on Black mayors, we had the unions, AFL-CIO, just this diversity, academics — I held up your [interim] report — there was this common agreement that we need to have the conversation.”
    “Something we shape and create”
    The conference also included extended discussion about the state of technology itself, especially artificial intelligence, examining its paths of progress and forms of deployment.
    “Technology is not something that happens to us,” said David Mindell, task force co-chair, professor of aeronautics and astronautics, and the Dibner Professor of the History of Engineering and Manufacturing at MIT. “It’s something we shape and create.”
    “You can’t say, ‘AI did it,’” said Microsoft CEO Satya Nadella, in a taped conversation with Autor.  “We, as creators of AI, first and foremost have a set of design principles. … We have to go from ethics to actual engineering and design and [a] process that allows us to be more accountable.”
    A number of conference participants suggested that we should be careful to construct policies that don’t rein in technological advances, but can ameliorate their effects.
“I don’t think we should constrain technological progress, because it is a competitive advantage of nations, and we have to let innovation thrive. We have to let technology proceed,” said Indra Nooyi, the former chairman and CEO of PepsiCo. “At best, what we can do is anticipate the negative consequences of technology … and put in some checks and balances.”
    As a few conference panelists noted throughout the event, the overlapping issues of work, technology, and inequality have become even more complicated and relevant during the Covid-19 pandemic, with roughly one-third of the work force able to work more securely from home, while many service workers and others have to perform their jobs in person.
    Surveying the employment landscape of 2020, Nooyi noted, “In many ways Covid has exacerbated all the societal divides.” Indeed, Reynolds said, “We believe this work is more important, not less important, in the time of Covid.”
    Overall, the task force members noted, making the work of the future better is a task that starts today.
“I really come away from this concerned about the direction [of work], but optimistic about our ability to change it,” Autor said.


    A neural network learns when it should not be trusted

    Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know they’re correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.
    They’ve developed a quick way for a neural network to crunch data, and output not just a prediction but also the model’s confidence level based on the quality of the available data. The advance might save lives, as deep learning is already being deployed in the real world today. A network’s level of certainty can be the difference between an autonomous vehicle determining that “it’s all clear to proceed through the intersection” and “it’s probably clear, so stop just in case.” 
    Current methods of uncertainty estimation for neural networks tend to be computationally expensive and relatively slow for split-second decisions. But Amini’s approach, dubbed “deep evidential regression,” accelerates the process and could lead to safer outcomes. “We need the ability to not only have high-performance models, but also to understand when we cannot trust those models,” says Amini, a PhD student in Professor Daniela Rus’ group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
    “This idea is important and applicable broadly. It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model,” says Rus.
    Amini will present the research at next month’s NeurIPS conference, along with Rus, who is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, director of CSAIL, and deputy dean of research for the MIT Stephen A. Schwarzman College of Computing; and graduate students Wilko Schwarting of MIT and Ava Soleimany of MIT and Harvard.
    Efficient uncertainty
    After an up-and-down history, deep learning has demonstrated remarkable performance on a variety of tasks, in some cases even surpassing human accuracy. And nowadays, deep learning seems to go wherever computers go. It fuels search engine results, social media feeds, and facial recognition. “We’ve had huge successes using deep learning,” says Amini. “Neural networks are really good at knowing the right answer 99 percent of the time.” But 99 percent won’t cut it when lives are on the line.
    “One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong,” says Amini. “We really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently.”
    Neural networks can be massive, sometimes brimming with billions of parameters. So it can be a heavy computational lift just to get an answer, let alone a confidence level. Uncertainty analysis in neural networks isn’t new. But previous approaches, stemming from Bayesian deep learning, have relied on running, or sampling, a neural network many times over to understand its confidence. That process takes time and memory, a luxury that might not exist in high-speed traffic.
The researchers devised a way to estimate uncertainty from only a single run of the neural network. They designed the network with a bulked-up output, producing not only a decision but also a new probabilistic distribution capturing the evidence in support of that decision. These distributions, termed evidential distributions, directly capture the model’s confidence in its prediction, separating the uncertainty present in the underlying input data from the uncertainty in the model’s final decision. That distinction can signal whether uncertainty can be reduced by tweaking the neural network itself, or whether the input data are just noisy.
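As a rough illustration of the single-pass idea, the sketch below shows one common way an evidential regression head can be set up: the last layer emits four numbers that parameterize a Normal-Inverse-Gamma distribution, from which a prediction and two kinds of uncertainty follow in closed form. This is a minimal numpy sketch of the general recipe, with assumed parameter names and an untrained toy head; it is not the authors’ implementation.

```python
import numpy as np

def softplus(x):
    # Numerically stable softplus, used to keep parameters positive.
    return np.logaddexp(0.0, x)

def evidential_head(features, W, b):
    """Map a feature vector to (prediction, aleatoric, epistemic) via the four
    parameters (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma distribution."""
    raw = features @ W + b                    # shape (4,)
    gamma = raw[0]                            # predicted value
    nu = softplus(raw[1])                     # strength of evidence about the mean
    alpha = softplus(raw[2]) + 1.0            # keep alpha > 1 so variances stay finite
    beta = softplus(raw[3])
    aleatoric = beta / (alpha - 1.0)          # expected noise in the data itself
    epistemic = beta / (nu * (alpha - 1.0))   # uncertainty about the model's own prediction
    return gamma, aleatoric, epistemic

# Toy usage with random, untrained weights, just to show the single forward pass.
rng = np.random.default_rng(0)
features = rng.normal(size=16)
W, b = rng.normal(size=(16, 4)), np.zeros(4)
print(evidential_head(features, W, b))
```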
    Confidence check
    To put their approach to the test, the researchers started with a challenging computer vision task. They trained their neural network to analyze a monocular color image and estimate a depth value (i.e. distance from the camera lens) for each pixel. An autonomous vehicle might use similar calculations to estimate its proximity to a pedestrian or to another vehicle, which is no simple task.
    Their network’s performance was on par with previous state-of-the-art models, but it also gained the ability to estimate its own uncertainty. As the researchers had hoped, the network projected high uncertainty for pixels where it predicted the wrong depth. “It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator,” Amini says.
    To stress-test their calibration, the team also showed that the network projected higher uncertainty for “out-of-distribution” data — completely new types of images never encountered during training. After they trained the network on indoor home scenes, they fed it a batch of outdoor driving scenes. The network consistently warned that its responses to the novel outdoor scenes were uncertain. The test highlighted the network’s ability to flag when users should not place full trust in its decisions. In these cases, “if this is a health care application, maybe we don’t trust the diagnosis that the model is giving, and instead seek a second opinion,” says Amini.
    The network even knew when photos had been doctored, potentially hedging against data-manipulation attacks. In another trial, the researchers boosted adversarial noise levels in a batch of images they fed to the network. The effect was subtle — barely perceptible to the human eye — but the network sniffed out those images, tagging its output with high levels of uncertainty. This ability to sound the alarm on falsified data could help detect and deter adversarial attacks, a growing concern in the age of deepfakes.
Deep evidential regression is “a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems,” says Raia Hadsell, an artificial intelligence researcher at DeepMind who was not involved with the work. “This is done in a novel way that avoids some of the messy aspects of other approaches — e.g. sampling or ensembles — which makes it not only elegant but also computationally more efficient — a winning combination.”
Deep evidential regression could enhance safety in AI-assisted decision making. “We’re starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences,” says Amini. “Any user of the method, whether it’s a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision.” He envisions the system not only quickly flagging uncertainty, but also using it to drive more conservative decision-making in risky scenarios, like an autonomous vehicle approaching an intersection.
    “Any field that is going to have deployable machine learning ultimately needs to have reliable uncertainty awareness,” he says.
This work was supported, in part, by the National Science Foundation and Toyota Research Institute through the Toyota-CSAIL Joint Research Center.


    Vibrations of coronavirus proteins may play a role in infection

    When someone struggles to open a lock with a key that doesn’t quite seem to work, sometimes jiggling the key a bit will help. Now, new research from MIT suggests that coronaviruses, including the one that causes Covid-19, may use a similar method to trick cells into letting the viruses inside. The findings could be useful for determining how dangerous different strains or mutations of coronaviruses may be, and might point to a new approach for developing treatments.
Studies of how spike proteins, which give coronaviruses their distinct crown-like appearance, interact with human cells typically involve biochemical mechanisms, but for this study the researchers took a different approach. Using atomistic simulations, they looked at the mechanical aspects of how the spike proteins move, change shape, and vibrate. The results indicate that these vibrational motions could be part of a strategy coronaviruses use to trick a locking mechanism on the cell’s surface into letting the virus through the cell membrane so it can hijack the cell’s reproductive mechanisms.
The team found a strong direct relationship between the rate and intensity of the spikes’ vibrations and how readily the virus could penetrate the cell. They also found an inverse relationship between those vibrations and the fatality rate of a given coronavirus. Because this method is based on understanding the detailed molecular structure of these proteins, the researchers say it could be used to screen emerging coronaviruses or new mutations of SARS-CoV-2, the virus that causes Covid-19, to quickly assess their potential risk.
    The findings, by MIT professor of civil and environmental engineering Markus Buehler and graduate student Yiwen Hu, are being published today in the journal Matter.
    All the images we see of the SARS-CoV-2 virus are a bit misleading, according to Buehler. “The virus doesn’t look like that,” he says, because in reality all matter down at the nanometer scale of atoms, molecules, and viruses “is continuously moving and vibrating. They don’t really look like those images in a chemistry book or a website.”
    Buehler’s lab specializes in atom-by-atom simulation of biological molecules and their behavior. As soon as Covid-19 appeared and information about the virus’ protein composition became available, Buehler and Hu, a doctoral student in mechanical engineering, swung into action to see if the mechanical properties of the proteins played a role in their interaction with the human body.
    The tiny nanoscale vibrations and shape changes of these protein molecules are extremely difficult to observe experimentally, so atomistic simulations are useful in understanding what is taking place. The researchers applied this technique to look at a crucial step in infection, when a virus particle with its protein spikes attaches to a human cell receptor called the ACE2 receptor. Once these spikes bind with the receptor, that unlocks a channel that allows the virus to penetrate the cell.
    That binding mechanism between the proteins and the receptors works something like a lock and key, and that’s why the vibrations matter, according to Buehler. “If it’s static, it just either fits or it doesn’t fit,” he says. But the protein spikes are not static; “they’re vibrating and continuously changing their shape slightly, and that’s important. Keys are static, they don’t change shape, but what if you had a key that’s continuously changing its shape — it’s vibrating, it’s moving, it’s morphing slightly? They’re going to fit differently depending on how they look at the moment when we put the key in the lock.”
    The more the “key” can change, the researchers reason, the likelier it is to find a fit.
    Buehler and Hu modeled the vibrational characteristics of these protein molecules and their interactions, using analytical tools such as “normal mode analysis.” This method is used to study the way vibrations develop and propagate, by modeling the atoms as point masses connected to each other by springs that represent the various forces acting between them.
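Normal mode analysis itself is simple to illustrate at toy scale. In the sketch below, three equal point masses in a line are joined by identical springs, and the eigenvalues of the resulting stiffness matrix give the squared frequencies of the chain’s normal modes. The real calculation works the same way, only with many thousands of atoms and force constants taken from a molecular force field; the numbers here are purely illustrative.

```python
import numpy as np

m = 1.0   # mass of each point (arbitrary units)
k = 1.0   # spring constant between neighbors (arbitrary units)

# Stiffness (Hessian) matrix for a free chain of three masses and two springs.
K = k * np.array([[ 1.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

# Eigenvalues of (1/m) K are the squared angular frequencies of the normal modes.
omega_sq, modes = np.linalg.eigh(K / m)
frequencies = np.sqrt(np.clip(omega_sq, 0.0, None))
print(frequencies)   # ~[0, 1, 1.73]: one rigid translation plus two internal vibrations
```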
They found that differences in vibrational characteristics correlate strongly with the different rates of infectivity and lethality of different kinds of coronaviruses, taken from a global database of confirmed case numbers and case fatality rates. The viruses studied included SARS-CoV, MERS-CoV, SARS-CoV-2, and one known mutation of the SARS-CoV-2 virus that is becoming increasingly prevalent around the world. This makes this method a promising tool for predicting the potential risks from new coronaviruses that emerge, as they likely will, Buehler says.

    In all the cases they have studied, Hu says, a crucial part of the process is fluctuations in an upward swing of one branch of the protein molecule, which helps make it accessible to bind to the receptor. “That movement is of significant functional importance,” she says. Another key indicator has to do with the ratio between two different vibrational motions in the molecule. “We find that these two factors show a direct relationship to the epidemiological data, the virus infectivity and also the virus lethality,” she says.
    The correlations they found mean that when new viruses or new mutations of existing ones appear, “you could screen them from a purely mechanical side,” Hu says. “You can just look at the fluctuations of these spike proteins and find out how they may act on the epidemiological side, like how infectious and how serious would the disease be.”
    Potentially, these findings could also provide a new avenue for research on possible treatments for Covid-19 and other coronavirus diseases, Buehler says, speculating that it might be possible to find a molecule that would bind to the spike proteins in a way that would stiffen them and limit their vibrations. Another approach might be to induce opposite vibrations to cancel out the natural ones in the spikes, similarly to the way noise-canceling headphones suppress unwanted sounds.
    As biologists learn more about the various kinds of mutations taking place in coronaviruses, and identify which areas of the genomes are most subject to change, this methodology could also be used predictively, Buehler says. The most likely kinds of mutations to emerge could all be simulated, and those that have the most dangerous potential could be flagged so that the world could be alerted to watch for any signs of the actual emergence of those particular strains. Buehler adds, “The G614 mutation, for instance, that is currently dominating the Covid-19 spread around the world, is predicted to be slightly more infectious, according to our findings, and slightly less lethal.”
    Mihri Ozkan, a professor of electrical and computer engineering at the University of California at Riverside, who was not connected to this research, says this analysis “points out the direct correlation between nanomechanical features and the lethality and infection rate of coronavirus. I believe his work leads the field forward significantly to find insights on the mechanics of diseases and infections.”
    Ozkan adds that “If under the natural environmental conditions, overall flexibility and mobility ratios predicted in this work do happen, identifying an effective inhibitor that can lock the spike protein to prevent binding could be a holy grail of preventing SARS-CoV-2 infections, which we all need now desperately.”
The research was supported by the MIT-IBM Watson AI Lab, the Office of Naval Research, and the National Institutes of Health.


    Phiala Shanahan receives Kenneth G. Wilson Award for work in lattice field theory

    Class of 1957 Career Development Assistant Professor of Physics Phiala Shanahan will receive the 2020 Kenneth G. Wilson Award for Excellence in Lattice Field Theory.
    The award, given by the international lattice field theory community, recognizes her research of hadrons and nuclei using the tools of lattice Quantum Chromodynamics, or lattice QCD, and her pioneering application of machine learning and artificial intelligence techniques to lattice field theory.
Shanahan’s research interests center on theoretical nuclear and particle physics, specifically the structure and interactions of hadrons and nuclei in terms of the fundamental (quark and gluon) degrees of freedom encoded in the Standard Model of particle physics.
    In recent work she has used supercomputers to reveal the role of gluons, the force carriers of the strong interactions described by QCD, in hadron and nuclear structure. She and her group recently also achieved the first calculation of the gluon structure of light nuclei, making predictions that will be testable in new experiments proposed at Jefferson National Accelerator Facility and at the planned Electron-Ion Collider. 
    “To be recognized by those closest to my work on a technical level is an incredible honor,” says Shanahan, who is also a researcher in the Center for Theoretical Physics within the Laboratory for Nuclear Science. “The award reflects not only my work, but the efforts of my awesome students and postdocs, as well as my wonderful colleagues in the Center for Theoretical Physics who create such a vibrant and positive community to work in.”
    This year’s award ceremony will be part of the Nov. 12 virtual Bethe colloquium series, where Shanahan will receive a certificate citing her contributions, a modest monetary award, and the opportunity to present her cited work.
    Shanahan’s work to understand the structure of matter from first principles also aims to enrich nuclear physics experimental programs seeking to constrain physics beyond the current Standard Model, such as dark matter. The group’s research into nuclear structure and reactions “are at the cusp of entering the beginning of a precision era of understanding how nuclei emerge from particle physics,” says Shanahan. “It is just so exciting to begin to bridge that gap.” 
    For the third area of her cited research, machine learning, she says, “We are working hard to reinvent how numerical lattice field theory calculations are done with new algorithms to enable studies that are computationally intractable right now. When Aurora, which will be the new largest supercomputer in the world, comes online in the next couple of years, we plan to be ready to exploit it in a new way.”
    Shanahan obtained her BS in 2012 and her PhD in 2015 from Australia’s University of Adelaide. After graduation she began at MIT as a postdoc, then held a joint position as assistant professor at the College of William & Mary and senior staff scientist at the Thomas Jefferson National Accelerator Facility until she came back to MIT in 2018. Shanahan is the recipient of a National Science Foundation CAREER award as well as a U.S. Department of Energy Early Career Award, was named as an Emmy Noether fellow in 2018, and was listed in the Forbes Magazine 30 under 30 in Science in 2017 and as one of Science News’ 10 Scientists to Watch in 2020.
Since its inception in 2011, the annual Kenneth G. Wilson Award for Excellence in Lattice Field Theory has recognized physicists who have made recent, outstanding contributions to lattice field theory. The award is named after Nobel laureate Kenneth Wilson (1936–2013), who in 1974 founded lattice gauge theory, permitting such theories to be studied numerically using powerful computers.


    Advancing artificial intelligence research

The broad applicability of artificial intelligence in today’s society creates a need to develop and deploy technologies that can build trust in emerging areas, counter asymmetric threats, and adapt to the ever-changing needs of complex environments.
    As part of a new collaboration to advance and support AI research, the MIT Stephen A. Schwarzman College of Computing and the Defense Science and Technology Agency in Singapore are awarding funding to 13 projects led by researchers within the college that target one or more of the following themes: trustworthy AI, enhancing human cognition in complex environments, and AI for everyone. The 13 research projects selected are highlighted below.
    “SYNTHBOX: Establishing Real-World Model Robustness and Explainability Using Synthetic Environments” by Aleksander Madry, professor of computer science. Emerging machine learning technology has the potential to significantly help with and even fully automate many tasks that have confidently been entrusted only to humans so far. Leveraging recent advances in realistic graphics rendering, data modeling, and inference, Madry’s team is building a radically new toolbox to fuel streamlined development and deployment of trustworthy machine learning solutions.
    “Next-Generation NLP Technologies for Low-Resource Tasks” by Regina Barzilay, the Delta Electronics Professor of Electrical Engineering and Computer Science; and Tommi Jaakkola, the Thomas Siebel Professor of Electrical Engineering and Computer Science. In natural language technologies, most languages in the world are not richly annotated. This lack of direct supervision often results in inaccurate, indefensible, and brittle outputs. In a project led by Barzilay and Jaakkola, researchers are developing new text-generation tools for controlled style transfer and novel algorithms for detecting misinformation or suspicious news online. 
“Computationally-Supported Role-playing for Social Perspective Taking” by D. Fox Harrell, professor of digital media and artificial intelligence. Drawing on computer science and social science approaches, this project aims to create tools, techniques, and methods to model social phenomena for users of computer-supported role-playing systems — online gaming, augmented reality, and virtual reality — to better understand the perspectives of others with different social identities.
    “Improving Situational Awareness for Collaborative Human-Machine First Responder Teams” by Nick Roy, professor of aeronautics and astronautics. When responding to emergencies in urban environments, achieving situational awareness is essential. In a project led by Roy, researchers are developing a multi-agent system that encompasses a team of autonomous air and ground vehicles designed to arrive at the scene of an emergency, a map of the scene to provide a situation report to the first responders in advance, and the ability to search for people and entities of interest.
“New Representations for Vision” by William Freeman, the Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science; and Josh Tenenbaum, professor of cognitive science and computation. An unrealized goal of AI is to model the rich and complicated shapes and textures of real-world scenes depicted in an image. This project will focus on developing neural network representations for images that are better suited to the needs of vision and graphics, representing a 3D world efficiently while capturing its richness.
“Data-driven Optimization Under Categorical Uncertainty, and Applications to Smart City Operations” by Alexandre Jacquillat, assistant professor of operations research and statistics. Smart city technologies can help major metropolitan areas that are facing increasing pressure to manage congestion, cut greenhouse gas emissions, improve public safety, and enhance health-care delivery. In a project led by Jacquillat, researchers are working on new AI tools to help manage the cyber-physical infrastructure in smart cities and on the development and deployment of automated decision tools for smart city operations.
    “Provably Robust Reinforcement Learning” by Ankur Moitra, the Rockwell International Career Development Associate Professor of Applied Mathematics. Moitra and his team are building on their new framework for robust supervised learning to explore more complex learning problems, including the design of robust algorithms for reinforcement learning in Massart noise models, a space that has yet to be fully explored.
“Audio Forensics” by James Glass, senior research scientist. The ongoing improvements in capabilities that manipulate or generate multimedia content such as speech, images, and video are resulting in ever-more natural and realistic “deepfake” content that is increasingly difficult to discern from the real thing. In a project led by Glass, researchers are developing a set of deep learning models that can be used to identify manipulated or synthetic speech content, as well as detect the nature of deepfakes to help analysts better understand the underlying objective of the manipulation and how much effort is required to create the fake content.
“Building Dependable Autonomous Systems through Learning Certified Decisions and Control” by Chuchu Fan, assistant professor of aeronautics and astronautics. Machine learning creates unprecedented opportunities for achieving full autonomy, but learning-based methods in autonomous systems can and do fail, due to poor-quality data, modeling errors, the coupling with other agents, and the complex interaction with human and computer systems in modern operational environments. Fan and her research group are building a framework consisting of algorithms, theories, and software tools for learning certified planning and control, as well as developing firmware platforms for the automatic plug-and-play design of quadrotors and the formation control of mixed ground and aerial vehicles.
“Online Learning and Decision-making Under Uncertainty in Complex Environments” by Patrick Jaillet, the Dugald C. Jackson Professor of Electrical Engineering and Computer Science. Technical advances in computing, telecommunication, sensing capabilities, and other information technologies provide tremendous opportunities to use dynamic information in order to enhance productivity, optimize performance, and solve new complex online problems of great practical interest. However, many of these opportunities bring significant methodological challenges on how to formulate and solve these new problems. In a project led by Jaillet, researchers are using machine learning techniques to systematically integrate online optimization and online learning in order to help human decision-making under uncertainty.
“Analytics-Guided Communication to Counteract Filter Bubbles and Echo Chambers” by Deb Roy, professor of media arts and sciences. Social media technologies that promised to open up our worlds have instead driven us algorithmically into cocoons of homogeneity. Roy and his team are developing language models and methods to counteract the effects of these technologies, which have exacerbated socioeconomic divides and limited exposure to different perspectives, curbing opportunities for users to learn from others who may not necessarily look, think, or live like them.
“Decentralized Learning with Diverse Data” by Costis Daskalakis, professor of electrical engineering and computer science; Asu Ozdaglar, the MathWorks Professor of Electrical Engineering and Computer Science, department head of electrical engineering and computer science, and deputy dean of academics for MIT Schwarzman College of Computing; and Russ Tedrake, Toyota Professor of Electrical Engineering and Computer Science. In many AI settings, it is important to combine diverse experiences of, and decentralized data collected by, heterogeneous agents in order to develop better models for predictions and decision-making in the various different new tasks these agents are performing. Bringing tools from machine learning, optimization, control, statistics, statistical physics, and game theory, this project aims to advance the fundamental science of federated or fleet learning — learning from decentralized agents with diverse data — using robotics as an application area to provide a rich and relevant source of data.
“Trustworthy, Deployable 3D Scene Perception via Neuro-symbolic Probabilistic Programs” by Vikash Mansinghka, principal research scientist; Joshua Tenenbaum, professor of cognitive science and computation; and Antonio Torralba, Thomas and Gerd Perkins Professor of Electrical Engineering and Computer Science. To be deployable in the real world, 3D scene perception systems need to generalize across environments and sensor configurations, and adapt to scene and environment changes, without costly re-training or fine-tuning. Building on the researchers’ breakthroughs in probabilistic programming and in real-time neural Monte Carlo inference for symbolic generative models, the project team is developing a domain-general approach to trustworthy, deployable 3D scene perception that addresses fundamental limitations of state-of-the-art deep learning systems.


    Versatile building blocks make structures with surprising mechanical properties

    Researchers at MIT’s Center for Bits and Atoms have created tiny building blocks that exhibit a variety of unique mechanical properties, such as the ability to produce a twisting motion when squeezed. These subunits could potentially be assembled by tiny robots into a nearly limitless variety of objects with built-in functionality, including vehicles, large industrial parts, or specialized robots that can be repeatedly reassembled in different forms.
    The researchers created four different types of these subunits, called voxels (a 3D variation on the pixels of a 2D image). Each voxel type exhibits special properties not found in typical natural materials, and in combination they can be used to make devices that respond to environmental stimuli in predictable ways. Examples might include airplane wings or turbine blades that respond to changes in air pressure or wind speed by changing their overall shape.
    The findings, which detail the creation of a family of discrete “mechanical metamaterials,” are described in a paper published today in the journal Science Advances, authored by recent MIT doctoral graduate Benjamin Jenett PhD ’20, Professor Neil Gershenfeld, and four others.
    “This remarkable, fundamental, and beautiful synthesis promises to revolutionize the cost, tailorability, and functional efficiency of ultralight, materials-frugal structures,” says Amory Lovins, an adjunct professor of civil and environmental engineering at Stanford University and founder of Rocky Mountain Institute, who was not associated with this work.
    Metamaterials get their name because their large-scale properties are different from the microlevel properties of their component materials. They are used in electromagnetics and as “architected” materials, which are designed at the level of their microstructure. “But there hasn’t been much done on creating macroscopic mechanical properties as a metamaterial,” Gershenfeld says.
    With this approach, engineers should be able to build structures incorporating a wide range of material properties — and produce them all using the same shared production and assembly processes, Gershenfeld says.

    The voxels are assembled from flat frame pieces of injection-molded polymers, then combined into three-dimensional shapes that can be joined into larger functional structures. They are mostly open space and thus provide an extremely lightweight but rigid framework when assembled. Besides the basic rigid unit, which provides an exceptional combination of strength and light weight, there are three other variations of these voxels, each with a different unusual property.
    The “auxetic” voxels have a strange property in which a cube of the material, when compressed, instead of bulging out at the sides, actually bulges inward. This is the first demonstration of such a material produced through conventional and inexpensive manufacturing methods.
    There are also “compliant” voxels, with a zero Poisson ratio, which is somewhat similar to the auxetic property, but in this case, when the material is compressed, the sides do not change shape at all. Few known materials exhibit this property, which can now be produced through this new approach.
    Finally, “chiral” voxels respond to axial compression or stretching with a twisting motion. Again, this is an uncommon property; research that produced one such material through complex fabrication techniques was hailed last year as a significant finding. This work makes this property easily accessible at macroscopic scales.
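What separates these voxel types can be captured by the sign of the Poisson ratio, defined for uniaxial loading as the negative ratio of transverse to axial strain. The strain values below are made up purely to illustrate how the sign distinguishes ordinary, auxetic, and zero-Poisson behavior; they are not measurements from the paper.

```python
def poisson_ratio(axial_strain, transverse_strain):
    """nu = -transverse_strain / axial_strain for a uniaxial compression or tension test."""
    return -transverse_strain / axial_strain

# Hypothetical readings for a 1 percent axial compression (axial strain = -0.01):
print(poisson_ratio(-0.01, +0.003))  # ordinary material: sides bulge outward, nu > 0
print(poisson_ratio(-0.01, -0.003))  # auxetic voxel: sides pull inward, nu < 0
print(poisson_ratio(-0.01,  0.000))  # compliant (zero-Poisson) voxel: sides unchanged, nu = 0
```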
    “Each type of material property we’re showing has previously been its own field,” Gershenfeld says. “People would write papers on just that one property. This is the first thing that shows all of them in one single system.”
To demonstrate the real-world potential of large objects constructed in a LEGO-like manner out of these mass-produced voxels, the team, working in collaboration with engineers at Toyota, produced a functional super-mileage race car, which they demonstrated on a race track during an international robotics conference earlier this year.
    They were able to assemble the lightweight, high-performance structure in just a month, Jenett says, whereas building a comparable structure using conventional fiberglass construction methods had previously taken a year.
    During the race, the track was slick from rain, and the race car ended up crashing into a barrier. To the surprise of everyone involved, the car’s lattice-like internal structure deformed and then bounced back, absorbing the shock with little damage. A conventionally built car, Jenett says, would likely have been severely dented if it was made of metal, or shattered if it was composite.
    The car provided a vivid demonstration of the fact that these tiny parts can indeed be used to make functional devices at human-sized scales. And, Gershenfeld points out, in the structure of the car, “these aren’t parts connected to something else. The whole thing is made out of nothing but these parts,” except for the motors and power supply.
    Because the voxels are uniform in size and composition, they can be combined in any way needed to provide different functions for the resulting device. “We can span a wide range of material properties that before now have been considered very specialized,” Gershenfeld says. “The point is that you don’t have to pick one property. You can make, for example, robots that bend in one direction and are stiff in another direction and move only in certain ways. And so, the big change over our earlier work is this ability to span multiple mechanical material properties, that before now have been considered in isolation.”
    Jenett, who carried out much of this work as the basis for his doctoral thesis, says “these parts are low-cost, easily produced, and very fast to assemble, and you get this range of properties all in one system. They’re all compatible with each other, so there’s all these different types of exotic properties, but they all play well with each other in the same scalable, inexpensive system.”
    “Think about all the rigid parts and moving parts in cars and robots and boats and planes,” Gershenfeld says. “And we can span all of that with this one system.”
    A key factor is that a structure made up of one type of these voxels will behave exactly the same way as the subunit itself, Jenett says. “We were able to demonstrate that the joints effectively disappear when you assemble the parts together. It behaves as a continuum, monolithic material.”
    Whereas robotics research has tended to be divided between hard and soft robots, “this is very much neither,” Gershenfeld says, because of its potential to mix and match these properties within a single device.
One of the possible early applications of this technology, Jenett says, could be building the blades of wind turbines. As these structures become ever larger, transporting the blades to their operating site becomes a serious logistical issue, whereas if they are assembled from thousands of tiny subunits, that job can be done at the site, eliminating the transportation issue. Similarly, the disposal of used turbine blades is already becoming a serious problem because of their large size and lack of recyclability. But blades made up of tiny voxels could be disassembled on site, and the voxels then reused to make something else.
    And in addition, the blades themselves could be more efficient, because they could have a mix of mechanical properties designed into the structure that would allow them to respond dynamically, passively, to changes in wind strength, he says.
    Overall, Jenett says, “Now we have this low-cost, scalable system, so we can design whatever we want to. We can do quadrupeds, we can do swimming robots, we can do flying robots. That flexibility is one of the key benefits of the system.”
    Stanford’s Lovins says that this technology “could make inexpensive, durable, extraordinarily lightweight aeronautical flight surfaces that passively and continuously optimize their shape like a bird’s wing. It could also make automobiles’ empty mass more nearly approach their payload, as their crashworthy structure becomes mostly air. It may even permit spherical shells whose crush strength allows a vacuum balloon (with no helium) buoyant in the atmosphere to lift a couple of dozen times the net payload of a jumbo jet.”
    He adds, “Like biomimicry and integrative design, this new art of cellular metamaterials is a powerful new tool for helping us do more with less.”
The research team included Filippos Tourlomousis, Alfonso Parra Rubio, and Megan Ochalek at MIT, and Christopher Cameron at the U.S. Army Research Laboratory. The work was supported by NASA, the U.S. Army Research Laboratory and the Center for Bits and Atoms Consortia.


    STEM Week event encourages students to see themselves in science and technology careers

    Covid-19 has given the public a crash course in what it is like to be a medical researcher. The evening news displays graphs and charts describing case counts and statistical data, while the status of vaccine trials is front page news. Now, more than ever, the public is seeing how STEM (science, technology, engineering, and math) fields are rising to the challenge of Covid-19.
    It is in this spirit that MIT and the Massachusetts STEM Advisory Council encouraged students to “see themselves in STEM” by producing a week of programming aimed at fostering a lifelong love of STEM.
    Partnering across the Commonwealth
The Massachusetts STEM Week kicked off Oct. 19 with opening remarks by MIT President L. Rafael Reif. Speaking to a stream of over 480 viewers, President Reif reflected on how reading MIT textbooks in his native Venezuela put him on a pathway to a career in STEM. “I realized there was a world of other people who loved science and engineering, and places like MIT where I could join them,” he said. President Reif challenged students to participate in STEM spaces, declaring “we need you to do more than see yourself in STEM, we invite you to step up and take your place.”
    Governor Charlie Baker joined the kickoff event, describing the importance of the Massachusetts STEM economy and its global impact. “Massachusetts is enormously lucky to have MIT among the constellation of amazing colleges and universities that are a part of this Commonwealth,” the governor said, noting that STEM institutions “provide an incredible collection of ideas, gadgets, and solutions that become a big part of the way the world works.”
    Lieutenant Governor Karyn Polito, a co-chair of the STEM Advisory Council, echoed those sentiments, adding “with the pipeline of talent we have, we need to make sure that pipeline includes more women and communities of color.” Fellow STEM Advisory Council co-chairs Jeffrey Leiden of Vertex Pharmaceuticals and U.S. Congressman Joe Kennedy also offered inspiring welcoming remarks. 
MIT Media Lab Associate Director Cynthia Breazeal presented the kickoff’s keynote address, diving into the many ways that artificial intelligence permeates the platforms that students frequently use. Noting examples of how AI can sometimes encode bias into software, Breazeal argued that the antidote is to prepare students to be “AI-Literate” and encourage them toward the field. Expanding access to open K-12 curriculum, educator resources, and easy-to-use platforms that introduce AI concepts will excite “a far more diverse and inclusive group of students [who] have the potential to become the ethical designers of the AI solutions of tomorrow.”
    70 years of supporting student scientists through the Massachusetts Science and Engineering Fair
    The second hour of the kickoff event celebrated student scientists by featuring two prize-winning projects from this year’s Massachusetts Science and Engineering Fair (MSEF). MIT’s relationship with MSEF spans over 70 years, starting with a small gathering on the dirt floor of Rockwell Cage in 1949. The fair was started by the American Academy of Arts and Sciences, MIT professors, and a group of pioneering K-12 science educators organized as the Massachusetts Science Fair Committee. 
    The first two presenters were Hopkinton High School students Archita Nemalikanti and Sreeja Bolla, winners of the Sanofi Genzyme award. Their invention combined light sensors, computer programming, and a heavily researched set of calculations to create a non-invasive test for anemia in newborns, similar to current O2 finger sensors. Archita and Sreeja described the months-long process of developing the device and demonstrated their expertise to an international audience.
MathWorks prize winner and Westfield High School student Suvin Sundararajan presented his work on testing different additives to create safer and more environmentally friendly plastics for 3D printing. His biodegradable, flame retardant, non-toxic plastic was synthesized using lab equipment and guidance from the Emrick Group in the Department of Polymer Science and Engineering at the University of Massachusetts at Amherst. Reflecting on how the group’s mentorship made the project possible, Suvin recounted how he was trained on multimillion-dollar equipment typical in chemistry labs, which “allowed me to understand how they function and provided an opportunity to generate more ideas” for his work.
    More #MassSTEMWeek events across campus
    In addition to Monday’s kickoff event, members of the MIT community participated in a range of Massachusetts STEM Week events across the Commonwealth.
    Former astronaut and aeronautics and astronautics Professor Jeff Hoffman discussed NASA’s latest Mars mission and the Mars 2020 Perseverance Rover on “AstroNights: LIVE  Mars Mania!”
    Professor of comparative media studies Fox Harrell, an affiliate of the MIT Computer Science and Artificial Intelligence Laboratory, participated in a panel discussion with Lt. Governor Polito and four inspiring Boston Public Schools students organized by the United Way’s BoSTEM program.
    Lemelson-MIT held two STEM Week events, “Helping Youth See Themselves in Biotech” and “The Wonderful World of Biotech,” co-sponsored by the Massachusetts Black and Latino Legislative Caucus.
    MIT Media Lab Research Scientist Katlyn Turner presented “Antiracism in Technology Design.”
    The MIT Museum held two Virtual Idea Hubs, which encourage creating, tinkering, and engineering using everyday materials around your home. Participating families built whimsical structures that explored balance and centers of gravity. 
    Brian Mernoff, an educator at the MIT Museum, participated in the MetroNorth/Region IV MSEF event. This panel — Supporting Science at Home, featuring  MSEF’s Region IV Science Fair representatives — included a robust conversation about supporting students through the science fair process and how university and corporate partners add perspective and value along the way.
    In addition to these events, undergraduate students led over 250 hours of hands-on STEM explorations with 60 high school students around the Commonwealth as part of MIT’s Full STEAM Into Fall after-school program.
The Massachusetts STEM Week is organized by the Massachusetts Executive Office of Education and the STEM Advisory Council.