More stories

    Six MIT faculty elected 2020 AAAS Fellows

    Six MIT faculty members have been elected as fellows of the American Association for the Advancement of Science (AAAS).
    The new fellows are among a group of 489 AAAS members elected by their peers in recognition of their scientifically or socially distinguished efforts to advance science.
    A virtual induction ceremony for the new fellows will be held on Feb. 13, 2021. 
    Nazli Choucri is a professor of political science, a senior faculty member at the Center for International Studies (CIS), and a faculty affiliate at the Institute for Data, Systems, and Society (IDSS). She works in the areas of international relations, conflict and violence, and the international political economy, with a focus on cyberspace and the global environment. Her current research is on cyberpolitics in international relations, focusing on integrating cyberspace into the fabric of international relations.
    Catherine Drennan is a professor in the departments of Biology and Chemistry. Her research group seeks to understand how nature harnesses and redirects the reactivity of enzyme metallocenters in order to perform challenging reactions. By combining X-ray crystallography with other biophysical methods, the researchers’ goal is to “visualize” molecular processes by obtaining snapshots of enzymes in action.
    Peter Fisher is a professor in the Department of Physics and currently serves as department head. He carries out research in particle physics in the areas of dark matter detection and the development of new kinds of particle detectors. He is also interested in compact energy supplies and wireless energy transmission.
    Neil Gershenfeld is the director of MIT’s Center for Bits and Atoms, which works to break down boundaries between the digital and physical worlds, from pioneering quantum computing to digital fabrication to the “internet of things.” He is the founder of a global network of over 1,000 fab labs, chairs the Fab Foundation, and leads the Fab Academy.
    Ju Li is the Battelle Energy Alliance Professor of Nuclear Science and Engineering and a professor of materials science and engineering. He studies how atoms and electrons behave and interact, to inform the design of new materials from the atomic level on up. His research areas include overcoming timescale challenges in atomistic simulations, energy storage and conversion, and materials in extreme environments and far from equilibrium.
    Daniela Rus is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. Her research interests include robotics, mobile computing, and data science. Rus is a Class of 2002 MacArthur Fellow, a fellow of ACM, AAAI, and IEEE, and a member of the National Academy of Engineering and the American Academy of Arts and Sciences.
    This year’s fellows will be formally announced in the AAAS News and Notes section of Science on Nov. 27.

    An antidote to “fast fashion”

    In today’s world of fast fashion, retailers sell only a fraction of their inventory, and consumers keep their clothes for about half as long as they did 15 years ago. As a result, the clothing industry has become associated with swelling greenhouse gas emissions and wasteful practices.
    The startup Armoire is addressing these issues with a clothing rental service designed to increase the utilization of clothes and save customers time. The service is based on machine-learning algorithms that use feedback from users to make better predictions about what they’ll wear.
    Customers pay a flat monthly price to get access to a range of high-end styles. Each time they log into Armoire, they get a personalized list of items to choose from. When they don’t want the clothing anymore, they return it to be used by someone else.
    “Our whole goal is to help clothes achieve end of life with a customer rather than at the back of your closet or ending up in a landfill,” Armoire co-founder and CEO Ambika Singh MBA ’16 says. “The metric we look at is the utilization of our clothes, and, amazingly, 95 percent of the things we own have been rented — unlike a normal retailer who might sell 35 percent of what they bring in at the beginning of the season.”
    The company says its service is tailored toward busy women who don’t have time to browse cluttered clothing aisles or endless webpages for new outfits.
    According to Singh, Armoire has grown 300 to 500 percent a year since its founding in 2016. The company now has thousands of customers across the U.S.
    “A typical customer response after a while is they feel really happy when they look at their closet instead of overwhelmed,” Singh says. “It’s fun to have this asset-light way of living.”
    Leaning on MIT’s community
    Singh came to MIT in 2014 with plans to start a company. She had previously spent seven years working in the tech industry, first with Microsoft then as an early hire at two startups.
    She says the first thing that struck her about MIT was the integration between its business and engineering schools. The second was how supportive MIT’s community of professors and students was. She quickly took advantage of both attributes.
    Singh spoke at length with professors about the potential for machine-learning algorithms to provide personalized recommendations and leaned on classmates for early idea validation and testing.
    In fact, when Singh started Armoire, classmates used it as a case study for marketing and analytics research projects. Others became early customers. Singh jokes that by the time she graduated, half of her Sloan class had touched Armoire in some way.
    Singh also worked with various entrepreneurial organizations at MIT, receiving support from the MIT Sandbox Innovation Fund and participating in the Martin Trust Center for MIT Entrepreneurship’s delta v summer accelerator.
    Singh remembers showing up on the first day of delta v with huge racks of clothes and seeing the small desks each team was given as workspace. Fortunately, someone found a nearby conference room with a closet.
    During delta v, Singh and her team bought inventory, got the clothes shipped to the Trust Center, packaged the items, and finally delivered them around campus or to the post office by scooter.
    In the fall of 2016, Singh was joined by Armoire co-founder Zachary Owen PhD ’18, who helped build the company’s recommendation systems but is no longer with Armoire.
    Armoire’s core algorithm is something called a collaborative filter, which makes predictions about user preferences based on data collected on many other users. Such filters work on the assumption that if two people have similar tastes around one item, they share preferences on others. Armoire’s algorithms also make use of dozens of labels the company manually enters for each item around things like color, fit, and seasonality.
    At the heart of Armoire is the idea that a clothing rental company can gather more data about customer preferences than a company that sells clothing to customers once. That data can then be used to deliver better service.
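    The user-based collaborative filtering described above can be sketched in a few lines. The following is an illustrative toy example, not Armoire’s actual system: the ratings matrix, similarity measure, and weighting scheme are all assumptions chosen for clarity.

```python
# Toy sketch of a user-based collaborative filter: predict how much a user
# will like an item from the ratings of users with similar tastes.
# The data and weighting here are illustrative, not Armoire's actual system.
import math

# rows = users, columns = items; 0 means "no feedback yet"
ratings = [
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [1, 0, 5, 4],
    [0, 1, 4, 5],
]

def cosine(u, v):
    """Cosine similarity between two users' rating vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def predict(user, item):
    """Similarity-weighted average of other users' ratings for the item."""
    num = den = 0.0
    for other, row in enumerate(ratings):
        if other == user or row[item] == 0:
            continue  # skip the target user and users with no feedback
        sim = cosine(ratings[user], row)
        num += sim * row[item]
        den += abs(sim)
    return num / den if den else 0.0

# User 0 has no feedback on item 2; the most similar user (user 1) rated it
# low, so the prediction leans low despite higher ratings from dissimilar users.
score = predict(0, 2)
```

In a production system of this kind, the manually entered item labels (color, fit, seasonality) would typically augment the pure collaborative signal, helping with new items that have no rental history yet.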
    A new model for fashion
    Armoire offers customers three tiers of service depending on how many clothes they want to keep at one time. Customers can keep their clothes as long as they like. The company curates selections from thousands of top designers and independent labels, with styles for being comfortable at home, attending formal business events, working out, and more.
    The Covid-19 pandemic has slowed the company’s growth trajectory, but Singh says it’s also given Armoire’s leadership team a chance to refocus on their existing customers.
    “The good thing about the Covid-19 disruptions is they’ve given us a chance to take a step back and focus on the product,” Singh says. “We’ve focused on our existing base, which is good because with subscription it’s always about adding more value to the customers you have.”
    Singh is also proud of the culture Armoire has fostered. All of Armoire’s warehouse workers are women or nonbinary, an uncommon breakdown in warehouses. Singh credits Armoire’s leadership team with creating a welcoming work environment, noting there’s been very little turnover in Armoire’s warehouses.
    “Some of [our workers] are single moms, and they come with a different set of challenges,” Singh says. “Most warehouses don’t allow people to carry their phone because they’re worried about employees slacking off. If you’re a single mom, that makes the job impractical because you can’t be walking around without your phone and then find out something happened to your kid.”
    Ultimately, Singh credits many companies with trying to innovate in the fashion industry, citing companies helping to clean up clothing production and increase recycling.
    For Armoire, though, meaningful impact will continue to come from helping customers cut down on waste.
    “We don’t get 95 percent of our inventory rented because I’m so good at picking out clothes,” Singh says. “We do it because we took all the data our customers gave us and built a model that helped us understand what we should be buying. It shows the capital efficiency of the business, it shows we make good on our sustainability desire, and when I look forward, it’s about what kind of innovations we can achieve that help us better serve our customers and the world.”

    Lincoln Laboratory establishes Biotechnology and Human Systems Division

    MIT Lincoln Laboratory has established a new research and development division, the Biotechnology and Human Systems Division. The division will address emerging threats to both national security and humanity. Research and development will encompass advanced technologies and systems for improving chemical and biological defense, human health and performance, and global resilience to climate change, conflict, and disasters.
    “We strongly believe that research and development in biology, biomedical systems, biological defense, and human systems is a critically important part of national and global security. The new division will focus on improving human conditions on many fronts,” says Eric Evans, Lincoln Laboratory director.
    The new division unifies four research groups: Humanitarian Assistance and Disaster Relief (HADR) Systems, Counter-Weapons of Mass Destruction Systems, Biological and Chemical Technologies, and Human Health and Performance Systems.
    “We are in a historic moment in the country, and it is a historic moment for Lincoln Laboratory to create a new division. The nation and laboratory are faced with several growing security threats, and there is a pressing need to focus our research and development efforts to address these challenges,” says Edward Wack, who is head of the division.
    The laboratory began its initial work in biotechnology in 1995, through several programs that leveraged expertise in sensors and signal processing for chemical and biological defense systems. Work has since grown to include prototyping systems for protecting high-value facilities and transportation systems, architecting integrated early-warning biodefense systems for the U.S. Department of Defense (DoD), and applying artificial intelligence and synthetic biology technologies to accelerate the development of new drugs. In recent years, synthetic biology programs have expanded to include complex metabolic engineering for the production of novel materials and therapeutic molecules. 
    “The ability to leverage the laboratory’s deep technical expertise to solve today’s challenges has long laid the foundation for the new division,” says Christina Rudzinski, who is an assistant head of the division and formerly led the Counter-Weapons of Mass Destruction Systems Group.
    In recent years, the laboratory has also been growing its work for improving the health and performance of service members, veterans, and civilians. Laboratory researchers have applied decades of expertise in human language technology to understand disorders and injuries of the brain. Other programs have used physiological signals captured with wearable devices to detect heat strain, injury, and infection. The laboratory’s AI and robotics expertise has been leveraged to create prototypes of semi-autonomous medical interventions to help medics save lives on the battlefield and in disaster environments.
    The laboratory’s transition to disaster response technology extends over the past decade. Its rich history developing sensors and decision-support software translated well to the area of emergency response, leading to the development in 2010 of an emergency communications platform now in use worldwide, and the deployment of its advanced laser detection and ranging imaging system to quickly assess earthquake damage in Haiti. In 2015, the HADR Systems Group was established to build on this work.
    Today, the group develops novel sensors, communication tools, and decision-support systems to aid national and global responses to disasters and humanitarian crises. Last year, the group launched its climate change initiative to develop new programs to monitor, predict, and address current and future climate change impacts.
    Through these initiatives, the laboratory has come to view its work not only in the context of national security, but also global security.
    “Pandemics and climate change can cause instability, and that instability can breed conflict,” says Wack. “It benefits the United States to have a stable world. To the degree that we can, mitigating future pandemics and reducing the impacts of climate change would improve global stability and national security.”
    In anticipation of the growing importance of these global security issues, the laboratory has been significantly increasing program development, strategic hiring, and investment in biotechnology and human systems research over the past few years. Now, that strategic planning and investment in biotechnology research has come to fruition.
    One of the division’s initial goals is to continue to build relationships with MIT partners, including the Department of Biological Engineering, the Institute for Medical Engineering and Science, and the McGovern Institute for Brain Research, as well as Harvard University and local hospitals such as Massachusetts General Hospital. These collaborators have helped bring the laboratory’s sensor technology and algorithms to clinical applications for Covid-19 diagnostics, lung and liver disorders, bone injury, and spinal surgical tools. “We can have a bigger impact by drawing on some of the great expertise on campus and in our Boston medical ecosystem,” says Wack. 
    Another goal is to lead the nation in research surrounding the intersection of AI and biology. This research includes developing advanced AI algorithms for analyzing multimodal biological data, prototyping intelligent autonomous systems, and making AI-enabled biotechnology that is ethical and transparent.
    “Because of our extensive experience supporting the DoD, the laboratory is in a unique position to translate this cutting-edge research, including that from the commercial sector, into a government and national security context,” says Bill Streilein, principal staff in the Biotechnology and Human Systems Division. “This means not only addressing typical AI application issues of data collection and curation, model selection and training, and human-machine teaming, but also issues related to traceability, explainability, and fairness.”
    Leadership also sees this new division as an opportunity to continue to shape an innovative, diverse, and inclusive culture at the laboratory. They will be emphasizing the importance of an interdisciplinary approach to solving the complex research challenges the division faces. 
    “We want help from the rest of the laboratory,” says Jeffrey Palmer, an assistant head of the division who previously led the Human Health and Performance Systems Group. “I think there are many ways that we can help other divisions in their missions, and we absolutely need them for success in ours. These challenges are too big to face without applying the combined capabilities of the entire laboratory.”
    The Biotechnology and Human Systems Division joins Lincoln Laboratory’s eight other divisions: Advanced Technology; Air, Missile, and Maritime Defense Technology; Communication Systems; Cyber Security and Information Sciences; Engineering; Homeland Protection and Air Traffic Control; ISR and Tactical Systems; and Space Systems and Technology. Lincoln Laboratory is a federally funded research and development center.

    3Q: Christine Walley on the evolving perception of robots in the US

    Christine J. Walley, professor of anthropology at MIT and a member of the MIT Task Force on the Work of the Future, explores how robots have often been a symbol for anxiety about artificial intelligence and automation. Walley provides a unique perspective in the recent research brief “Robots as Symbols and Anxiety Over Work Loss.” She highlights the historical context of technology and job displacement and offers examples of how other countries approach policies regarding robots, skills, and learning. Here, Walley provides an overview of the brief.

    Q: How are robots seen as a symbol when we think about the changing nature of work in the United States?

    A: In the media, there has been a great deal of concern about robots taking people’s jobs, but, as became clear during conversations with robotics experts for MIT’s Task Force on the Work of the Future, the concerns have outstripped what the technologies are at this point actually capable of. For an anthropologist, however, the point is not that people’s concerns are “irrational,” but that robots have become symbolic encapsulations of much broader anxieties about the changing nature of work in the United States. These anxieties are well-founded. In order to put the technology questions into perspective, however, we have to confront more explicitly the dynamics that are creating more precarious forms of employment, particularly for those on the lower end of the economic spectrum, who are most vulnerable to displacement by AI and automation.

    Q: What can history and anthropology teach us about job displacement and technology, and how does this affect current anxiety about AI and automation today?

    A: First, we have to remember that technologies are inherently social. How and why they get created or used depends, of course, on what people or corporations want to do with them and what legal, cultural, and institutional frameworks allow or encourage. From the point of view of the companies, they can be used either to complement what workers do in order to increase productivity or to displace workers as a cost-cutting measure. There is a need for policies that encourage the former.

    My own research uses both history and ethnography to study former industrial communities in the United States. In the late 19th century, mechanization was used in many industries to displace skilled workers, who were more likely to be unionized and have higher wages. Our recent era has had a strong emphasis on shareholder value and what management scholar David Weil calls “the fissured workplace” — settings in which previously in-house work gets externalized through subcontracting and other non-standard work arrangements. Consequently, there is again a strong tendency to view workers primarily as costs to be eliminated. So, there is good reason for people to be anxious. However, we have to keep in mind that these are primarily political and social questions that need to be addressed, rather than anything inevitable about the technology itself.

    Earlier ethnographies of industrial workplaces found that even with dangerous and repetitive jobs, workers often managed to find ways to take pride in their work and make those jobs meaningful, often through social relationships forged with co-workers. Ethnographies of deindustrialization have also shown how devastating the effects of job loss can be, including long-term transgenerational or cumulative effects on families and entire regions. These effects are found across ethnic and racial groups, with those of color particularly hard hit.

    The upshot is two-fold. First, we have to be aware of socially and politically destabilizing long-term effects of job loss. There is a need for policies that are better at minimizing this kind of displacement for emergent forms of automation and AI than what we saw with early rounds of deindustrialization in the 1980s and 1990s — particularly since the new jobs being created due to technological innovation won’t necessarily go to those who are losing their jobs. And, second, we need to be thinking not only about numbers of jobs, but about how emergent technologies influence workplace sociality and what makes labor meaningful to workers — realities that are crucial to creating a more vibrant future economy that works for ordinary people, and not just Wall Street and corporations.

    Q: What are some of the key takeaways, including policies, that the United States can learn from other countries in the way they think about technology, skills, and learning?

    A: Not everyone in the world is as afraid of job displacement by robots or automation as workers are in the United States. This is not surprising, given that among wealthier countries the United States is an outlier in terms of its lack of universal health-care coverage and often in terms of other benefits and protections. Since health-care coverage in the U.S. is often provided through employers, it makes the possibility of being displaced by robots or automation that much more anxiety-provoking (just as it puts companies that provide health care at a disadvantage by saddling them with rising costs, contributing to the desire to save money by replacing workers with automation). In addition, the U.S. public school system is based on local taxes and is highly inequitable along lines of race and class, with relatively little spent on job retraining or vocational education in comparison to many European countries. Given employers’ need for more educated workers and given rapid technological change and job turnover, this puts many Americans at a strong disadvantage. It’s not surprising that we’re seeing declining social mobility rates in the United States in comparison to many other wealthy countries.

    Policy differences make a substantial difference in how technologies are taken up and the impact they have, or will have, on workers. Some European countries, like Germany and Sweden, have policies in which workers select representatives who participate in decision-making on shop floors or even on management boards, increasing worker input into how new technologies will be used. Some countries, particularly Nordic ones, have also made social benefits more flexible, just as corporations have become more flexible, and are emphasizing continuing education and job retraining as technological transformation creates more job turnover. Although we have seen economic inequality on the rise in many parts of the world, it’s been particularly severe in the U.S. — and emergent technologies are poised to contribute to that. So, it is key for the U.S. to look seriously at what policies are working better in other countries and what we might learn from them.

    MIT forum examines the rise of automation in the workplace

    “Pop culture does a great job of scaring us that AI will take over the world,” said Professor Daniela Rus, speaking at a virtual MIT event on Wednesday. But realistically, said Rus, who directs the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), robots aren’t going to steal everyone’s jobs overnight — they’re not yet good enough at tasks requiring high dexterity or generalized processing of different kinds of information.
    Still, automation has crept into some workplaces in recent years, a trend that’s likely to continue. Throughout the daylong conference, the “AI and the Work of the Future Congress,” which convened speakers from academia, industry, and government, one key theme consistently emerged: Task automation shouldn’t be viewed as a replacement for human work, but as a partner to it. With the exception of some middle-skilled manufacturing jobs, automation has generally improved human productivity, not eliminated the need for it. If people thoughtfully guide the development and deployment of new workplace technologies, the speakers agreed, we could see improvements in both productivity and well-being.
    The daylong event was organized by MIT’s Task Force on the Work of the Future, which released its final report this week, along with the Initiative on the Digital Economy and CSAIL. During the forum, task force participants and other science and industry leaders discussed both the social and technological dimensions of these changes.
    Narrow AI
    Rus emphasized that current industrial applications of artificial intelligence are relatively narrow. “What today’s AI systems can do is specialized intelligence, or the ability to solve a very fixed, limited number of problems,” she said. In select industries like insurance and health care, artificial intelligence has been used to boost efficiency for individual tasks, but it hasn’t generally displaced human workers. Fully automated systems, like driverless cars, remain decades in the future. 
    While the rise of artificial intelligence in industry remains gradual, multiple speakers noted how other technologies have rocketed to widespread adoption due to the Covid-19 pandemic. Microsoft CEO Satya Nadella described how videoconferencing and related technologies have enabled the transmission of potentially lifesaving information. “The expert can be remote, but can perhaps more seamlessly transfer their knowledge to the person on the front line,” he said.
    Nadella added that, since so many companies have grown used to videoconferencing, they may never return to 100 percent face-to-face interactions. “There’s going to be real, structural change,” he said. “People are going to question what really requires presence that is physical, versus telepresence. And I think the workflow will adjust.” He noted that workplaces would have to be more intentional about fostering social cohesion among workers in lieu of casual in-person conversations.
    Pandemic aside, some speakers pointed out that automation’s impact on work, though generally positive, has been unequal. Some middle-skill manufacturing jobs have been lost due to automation. But those losses aren’t inevitable — they can be avoided through careful deployment of automation, said Bosch CEO Volkmar Denner. “You could go a very aggressive path and say ‘the robot finally could replace human workers,’” said Denner. “The path we chose was completely different.” Robots on Bosch’s manufacturing line are designed not to oust humans, but to make them even more valuable by assisting with particular tasks to make them more efficient overall.
    “We can find a balance between the economic aspects — introducing automation — and also the social aspects — keeping workers in work,” he said. “Technology always should serve human beings and not vice versa.”
    Other industry leaders agreed. Jeanne Magoulick, engineering manager for Ford Motor Company, said her team is developing artificial intelligence for predictive maintenance of machinery. “It’s going to notify us when a machine seems to be trending out of control, and then we can schedule that for maintenance during the next available window,” she said. “It’s going to make us more efficient.”
    “It’s a choice”
    Rus also discussed the use of machines as guardian systems — safeguards that help ensure human workers are performing at their best. She cited a study where radiologists and an artificial intelligence algorithm were separately shown images of lymph node cells and tasked with determining whether they were cancerous or not. The humans’ error rate was 7.5 percent, and the computer’s was 3.5 percent. However, when an image was scanned by both a human and a computer, the resulting error rate was just 0.5 percent, “which is extraordinary,” said Rus.
    Julie Shah, MIT associate professor in the Department of Aeronautics and Astronautics, added that this sort of “guardian” relationship between humans and automation could extend to many domains, including self-driving cars and manufacturing systems.
    Nadella envisioned that one day the very tools of automation — the ability to design and program computers and robots — will become accessible to those without specialized training. He pointed to examples, like word processing and spreadsheet programs like Excel, where automation turbocharged productivity without requiring users to learn computer code.
    “Knowledge work got fundamentally transformed,” said Nadella. In the future, “this notion of a citizen-app developer, a citizen-data scientist — I think it’s real.”
    Denner also cautioned, however, that certain tasks — like valuing human lives in an automated driving scenario — are best left to ethicists and society as a whole, not to industrial programmers.
    In an afternoon panel about shaping workplace technologies in the future, MIT professor of economics Daron Acemoglu reiterated the refrain that technology isn’t an inevitable force — it’s shaped by humans. Ultimately, he said policymakers and managers will decide how automation fits into the workplace. “There isn’t an ironclad rule of what it is that humans can do and technologies cannot do. They are both fluid. It depends on what we value and how we use technology,” Acemoglu said. “It’s a choice.”

    Why we shouldn’t fear the future of work

    The American workforce is at a crossroads. Digitization and automation have replaced millions of middle-class jobs, while wages have stagnated for many who remain employed. A lot of labor has become insecure, low-income freelance work.
    Yet there is reason for optimism on behalf of workers, as scholars and business leaders outlined in an MIT conference on Wednesday. Automation and artificial intelligence do not just replace jobs; they also create them. And many labor, education, and safety-net policies could help workers greatly as well.
    That was the outlook of many participants at the conference, the “AI and the Work of the Future Congress,” marking the release of the final report of MIT’s Task Force on the Work of the Future. The report concludes that there is no technology-driven jobs wipeout on the horizon, but that new policies are needed to match the steady march of innovation: to date, technology has mostly helped white-collar workers, not the rest of the U.S. workforce.
    “We’re not going to run out of work,” Elisabeth Beck Reynolds, executive director of the task force and of the MIT Industrial Performance Center, said Wednesday.
    She added: “Clearly the distributional effects of technological change are uneven. We’ve seen the reduction of middle-skill jobs [due] to automation, [along with] jobs in manufacturing, administration, in clerical work, while we’ve seen an increase in jobs for those with higher education and higher skill sets. … Our challenge is to try to train [workers] and make sure we have workers in good positions for those jobs.”
    Indeed, the notion of social responsibility was a leading motif of the conference, which drew an audience of about 1,500 online viewers. 
    “I believe that those of us who are technologists, and who educate tomorrow’s technologists, have a special role to play,” said MIT President L. Rafael Reif, in his introductory remarks at the conference. “It means that, while we are teaching students, in every field, to be fluent in the use of AI strategies and tools, we must be sure that we equip tomorrow’s technologists with equal fluency in the cultural values and ethical principles that should ground and govern how those tools are designed and how they’re used.”
    The daylong event was organized by MIT’s Task Force on the Work of the Future, along with the Initiative on the Digital Economy and the Computer Science and Artificial Intelligence Laboratory.
    Conditions on the ground
    The report notes that over the last four decades, innovation has driven increases in productivity, but that earnings have not followed in step. Since 1978, overall U.S. productivity has risen by 66 percent; yet over the same time, compensation for production and nonsupervisory workers has only risen by 10 percent.
    “Work has become a lot more fragile,” said James Manyika, a senior partner at the consulting firm McKinsey and Company, chair of the McKinsey Global Institute, and a member of McKinsey’s board of directors. “This has affected both middle-wage and lower-wage workers.”
    To be sure, information technology in particular has helped people in engineering, design, medicine, marketing, and many other white-collar fields; and while middle-income jobs have become more scarce, service-sector jobs have expanded but tend to be lower-income.
    “Certainly the United States is a good place for high-wage workers to be, but not for lower-wage [workers] and those in the middle,” said Susan Houseman, vice president and director of research at the W.E. Upjohn Institute for Employment Research. “We should be concerned about the growth of nontraditional work arrangements.”
    Moreover, “The U.S. doesn’t seem to be getting a very positive return on its inequality,” said David Autor, the Ford Professor of Economics at MIT, associate head of MIT’s Department of Economics, and a co-chair of the task force. “That is, we have a lot of inequality, but we do not have faster growth.”
    In general, most workers are “not seeming to share in the prosperity that improved technology has got us,” said Robert M. Solow, Institute Professor Emeritus and 1987 Nobel laureate in economics, in recorded remarks shown during the conference.
    That said, Solow observed, “There’s room for a lot of ingenuity here, because since the nature of employment has changed, as we become a service economy rather than a goods-producing economy, there’s room for innovation in how to organize union work. … More active enforcement of antitrust laws, to try to increase the degree of competition in the production of goods and services, would also have the effect of improving the prospects for wages and salaries.”
    He added: “The main factor in the disturbance in the distribution of incomes is probably not technological change.”
    What are the next steps?
    But if there is room for policy interventions to ease the social jolts resulting from technology, which ones make the most sense? In general terms, some conference participants advocated for an openness to market-driven technological change, paired with a substantial safety net to help people handle those disruptive waves of innovation.
    “The real fundamental shift is, we have to think of service jobs the way 100 years ago we thought about manufacturing jobs. In other words, we have to start putting in place … protections and benefits,” said Fareed Zakaria, author and host of the CNN show, “Fareed Zakaria GPS.” He added, “Ultimately, that is the only way you are going to really address this problem. We are not going to bring back tens of millions of manufacturing jobs to the United States. We are going to take these service jobs and make them better jobs. And companies can do that.”
    One conference panel focused on the support of education, particularly public universities and community colleges, where traditionally overlooked pools of workplace talent reside.
    “One of the most important skills or approaches that we need to talk about is how to make sure that people know how to think, how to learn, how to adapt,” said Freeman Hrabowski, president of the University of Maryland at Baltimore County. That said, he noted, people receiving a broad college education can also receive specialist certificates and credentials in particular technical areas and add layers to their skills that are more closely linked to evolving job opportunities. “Both are very important,” he noted.
    Juan Salgado, chancellor of the City Colleges of Chicago, a group of community colleges, pointed out that there are 11.8 million community college students in America — many of whom already hold jobs and have workplace skills in addition to the academic skills they are acquiring.
    “It’s about the assets that are in our institutions, our students, and the fact that we’re not paying enough attention to them,” said Salgado.
    “We know what works,” said Paul Osterman, a professor of human resources and management at the MIT Sloan School of Management, pointing out that many training programs, internships, and other work-directed educational programs have been rigorously assessed and proven to be effective. “It’s taking what we know works and making it work at scale.”
    Saru Jayaraman, president of the advocacy group One Fair Wage and director of the Food Labor Research Center at the University of California at Berkeley, noted that simply raising the minimum wage, especially for food service workers, would have multiple benefits that only start with the increased earnings for roughly 10 percent of the workforce.
    “Increased wages reduce turnover in an industry that has some of the highest turnover rates in any industry in the United States,” said Jayaraman, adding that better wages have “increased employee morale, [and] increased employee productivity and consumer service.”
    Karen Mills, a senior fellow at the Harvard Business School and a former administrator of the Small Business Administration, suggested that good policies are especially important for small businesses, which may not be able to capitalize on technology as much as bigger firms.
    “In the jobs of the future, not all robots are going to be serving you coffee,” said Mills. “There’s still going to be Main Street.” She emphasized the continued need for supportive policies for small businesses, including access to health care for employees and access to capital for firm founders, which would also help small businesses owned by women and people of color.
    Rep. Lisa Blunt Rochester of Delaware, who will start her third term as a congresswoman in January, helped found the Congressional Future of Work Caucus, and suggested there is more bipartisan support for federal action than observers may suspect.
    “We launched the caucus right before Covid-19 struck,” she said. “We literally had standing room only. Democrats, Republicans, we had the council on Black mayors, we had the unions, AFL-CIO, just this diversity, academics — I held up your [interim] report — there was this common agreement that we need to have the conversation.”
    “Something we shape and create”
    The conference also included extended discussion about the state of technology itself, especially artificial intelligence, examining its paths of progress and forms of deployment.
    “Technology is not something that happens to us,” said David Mindell, task force co-chair, professor of aeronautics and astronautics, and the Dibner Professor of the History of Engineering and Manufacturing at MIT. “It’s something we shape and create.”
    “You can’t say, ‘AI did it,’” said Microsoft CEO Satya Nadella, in a taped conversation with Autor.  “We, as creators of AI, first and foremost have a set of design principles. … We have to go from ethics to actual engineering and design and [a] process that allows us to be more accountable.”
    A number of conference participants suggested that we should be careful to construct policies that don’t rein in technological advances, but can ameliorate their effects.
    “I don’t think we should constrain technological progress, because it is a competitive advantage of nations, and we have to let innovation thrive. We have to let technology proceed,” said Indra Nooyi, the former chairman and CEO of Pepsico. “At best, what we can do is anticipate the negative consequences of technology … and put in some checks and balances.”
    As a few conference panelists noted throughout the event, the overlapping issues of work, technology, and inequality have become even more complicated and relevant during the Covid-19 pandemic, with roughly one-third of the workforce able to work more securely from home, while many service workers and others have to perform their jobs in person.
    Surveying the employment landscape of 2020, Nooyi noted, “In many ways Covid has exacerbated all the societal divides.” Indeed, Reynolds said, “We believe this work is more important, not less important, in the time of Covid.”
    Overall, the task force members noted, making the work of the future better is a task that starts today.
    “I really come away from this concerned about the direction [of work], but optimistic about our ability to change it,” Autor said.


    A neural network learns when it should not be trusted

    Increasingly, artificial intelligence systems known as deep learning neural networks are used to inform decisions vital to human health and safety, such as in autonomous driving or medical diagnosis. These networks are good at recognizing patterns in large, complex datasets to aid in decision-making. But how do we know they’re correct? Alexander Amini and his colleagues at MIT and Harvard University wanted to find out.
    They’ve developed a quick way for a neural network to crunch data, and output not just a prediction but also the model’s confidence level based on the quality of the available data. The advance might save lives, as deep learning is already being deployed in the real world today. A network’s level of certainty can be the difference between an autonomous vehicle determining that “it’s all clear to proceed through the intersection” and “it’s probably clear, so stop just in case.” 
    Current methods of uncertainty estimation for neural networks tend to be computationally expensive and relatively slow for split-second decisions. But Amini’s approach, dubbed “deep evidential regression,” accelerates the process and could lead to safer outcomes. “We need the ability to not only have high-performance models, but also to understand when we cannot trust those models,” says Amini, a PhD student in Professor Daniela Rus’ group at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
    “This idea is important and applicable broadly. It can be used to assess products that rely on learned models. By estimating the uncertainty of a learned model, we also learn how much error to expect from the model, and what missing data could improve the model,” says Rus.
    Amini will present the research at next month’s NeurIPS conference, along with Rus, who is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, director of CSAIL, and deputy dean of research for the MIT Stephen A. Schwarzman College of Computing; and graduate students Wilko Schwarting of MIT and Ava Soleimany of MIT and Harvard.
    Efficient uncertainty
    After an up-and-down history, deep learning has demonstrated remarkable performance on a variety of tasks, in some cases even surpassing human accuracy. And nowadays, deep learning seems to go wherever computers go. It fuels search engine results, social media feeds, and facial recognition. “We’ve had huge successes using deep learning,” says Amini. “Neural networks are really good at knowing the right answer 99 percent of the time.” But 99 percent won’t cut it when lives are on the line.
    “One thing that has eluded researchers is the ability of these models to know and tell us when they might be wrong,” says Amini. “We really care about that 1 percent of the time, and how we can detect those situations reliably and efficiently.”
    Neural networks can be massive, sometimes brimming with billions of parameters. So it can be a heavy computational lift just to get an answer, let alone a confidence level. Uncertainty analysis in neural networks isn’t new. But previous approaches, stemming from Bayesian deep learning, have relied on running, or sampling, a neural network many times over to understand its confidence. That process takes time and memory, a luxury that might not exist in high-speed traffic.
    The researchers devised a way to estimate uncertainty from only a single run of the neural network. They designed the network with a bulked-up output, producing not only a decision but also a new probabilistic distribution capturing the evidence in support of that decision. These distributions, termed evidential distributions, directly capture the model’s confidence in its prediction. This includes any uncertainty present in the underlying input data, as well as in the model’s final decision. This distinction can signal whether uncertainty can be reduced by tweaking the neural network itself, or whether the input data are just noisy.
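The single-pass idea can be sketched in a few lines. In the common Normal-Inverse-Gamma parameterization of evidential regression, the network emits four values per prediction, and both kinds of uncertainty fall out in closed form; the parameter names and example values below are illustrative, not taken from the paper’s code.

```python
def evidential_uncertainty(gamma, nu, alpha, beta):
    """Decompose a Normal-Inverse-Gamma evidential output into a point
    prediction plus aleatoric (data) and epistemic (model) uncertainty.

    gamma: predicted mean
    nu:    virtual observations supporting the mean (evidence strength)
    alpha, beta: Inverse-Gamma parameters over the predicted variance
    """
    prediction = gamma
    aleatoric = beta / (alpha - 1.0)         # expected data noise, E[sigma^2]
    epistemic = beta / (nu * (alpha - 1.0))  # variance of the mean, Var[mu]
    return prediction, aleatoric, epistemic

# Weak evidence (small nu, alpha close to 1) yields large epistemic uncertainty,
# without ever re-running the network.
pred, alea, epi = evidential_uncertainty(gamma=2.0, nu=0.5, alpha=1.5, beta=1.0)
```

Because the decomposition is a closed-form function of one forward pass’s outputs, it avoids the repeated sampling that Bayesian approaches require.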
    Confidence check
    To put their approach to the test, the researchers started with a challenging computer vision task. They trained their neural network to analyze a monocular color image and estimate a depth value (i.e. distance from the camera lens) for each pixel. An autonomous vehicle might use similar calculations to estimate its proximity to a pedestrian or to another vehicle, which is no simple task.
    Their network’s performance was on par with previous state-of-the-art models, but it also gained the ability to estimate its own uncertainty. As the researchers had hoped, the network projected high uncertainty for pixels where it predicted the wrong depth. “It was very calibrated to the errors that the network makes, which we believe was one of the most important things in judging the quality of a new uncertainty estimator,” Amini says.
    To stress-test their calibration, the team also showed that the network projected higher uncertainty for “out-of-distribution” data — completely new types of images never encountered during training. After they trained the network on indoor home scenes, they fed it a batch of outdoor driving scenes. The network consistently warned that its responses to the novel outdoor scenes were uncertain. The test highlighted the network’s ability to flag when users should not place full trust in its decisions. In these cases, “if this is a health care application, maybe we don’t trust the diagnosis that the model is giving, and instead seek a second opinion,” says Amini.
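In a deployed system, that kind of flag could be as simple as thresholding the model’s self-reported uncertainty before acting on its prediction. A toy sketch (the threshold value and the deferral behavior are illustrative assumptions, not from the paper):

```python
def decide(prediction, uncertainty, threshold=1.0):
    """Act on the model's prediction only when its self-reported
    uncertainty is below a chosen threshold; otherwise defer,
    e.g. to a human reviewer or a second opinion."""
    if uncertainty > threshold:
        return "defer"
    return prediction
```

A confident in-distribution prediction passes through unchanged, while an out-of-distribution input with high uncertainty triggers deferral.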
    The network even knew when photos had been doctored, potentially hedging against data-manipulation attacks. In another trial, the researchers boosted adversarial noise levels in a batch of images they fed to the network. The effect was subtle — barely perceptible to the human eye — but the network sniffed out those images, tagging its output with high levels of uncertainty. This ability to sound the alarm on falsified data could help detect and deter adversarial attacks, a growing concern in the age of deepfakes.
    Deep evidential regression is “a simple and elegant approach that advances the field of uncertainty estimation, which is important for robotics and other real-world control systems,” says Raia Hadsell, an artificial intelligence researcher at DeepMind who was not involved with the work. “This is done in a novel way that avoids some of the messy aspects of other approaches — e.g. sampling or ensembles — which makes it not only elegant but also computationally more efficient — a winning combination.”
    Deep evidential regression could enhance safety in AI-assisted decision making. “We’re starting to see a lot more of these [neural network] models trickle out of the research lab and into the real world, into situations that are touching humans with potentially life-threatening consequences,” says Amini. “Any user of the method, whether it’s a doctor or a person in the passenger seat of a vehicle, needs to be aware of any risk or uncertainty associated with that decision.” He envisions the system not only quickly flagging uncertainty, but also using it to drive more conservative decision-making in risky scenarios, such as an autonomous vehicle approaching an intersection.
    “Any field that is going to have deployable machine learning ultimately needs to have reliable uncertainty awareness,” he says.
    This work was supported, in part, by the National Science Foundation and Toyota Research Institute through the Toyota-CSAIL Joint Research Center.


    Vibrations of coronavirus proteins may play a role in infection

    When someone struggles to open a lock with a key that doesn’t quite seem to work, sometimes jiggling the key a bit will help. Now, new research from MIT suggests that coronaviruses, including the one that causes Covid-19, may use a similar method to trick cells into letting the viruses inside. The findings could be useful for determining how dangerous different strains or mutations of coronaviruses may be, and might point to a new approach for developing treatments.
    Studies of how spike proteins, which give coronaviruses their distinct crown-like appearance, interact with human cells typically involve biochemical mechanisms, but for this study the researchers took a different approach. Using atomistic simulations, they looked at the mechanical aspects of how the spike proteins move, change shape, and vibrate. The results indicate that these vibrational motions could account for a strategy that coronaviruses use, which can trick a locking mechanism on the cell’s surface into letting the virus through the cell membrane so it can hijack the cell’s reproductive mechanisms.
    The team found a strong direct relationship between the rate and intensity of the spikes’ vibrations and how readily the virus could penetrate the cell. They also found an opposite relationship with the fatality rate of a given coronavirus. Because this method is based on understanding the detailed molecular structure of these proteins, the researchers say it could be used to screen emerging coronaviruses or new mutations of Covid-19, to quickly assess their potential risk.
    The findings, by MIT professor of civil and environmental engineering Markus Buehler and graduate student Yiwen Hu, are being published today in the journal Matter.
    All the images we see of the SARS-CoV-2 virus are a bit misleading, according to Buehler. “The virus doesn’t look like that,” he says, because in reality all matter down at the nanometer scale of atoms, molecules, and viruses “is continuously moving and vibrating. They don’t really look like those images in a chemistry book or a website.”
    Buehler’s lab specializes in atom-by-atom simulation of biological molecules and their behavior. As soon as Covid-19 appeared and information about the virus’ protein composition became available, Buehler and Hu, a doctoral student in mechanical engineering, swung into action to see if the mechanical properties of the proteins played a role in their interaction with the human body.
    The tiny nanoscale vibrations and shape changes of these protein molecules are extremely difficult to observe experimentally, so atomistic simulations are useful in understanding what is taking place. The researchers applied this technique to look at a crucial step in infection, when a virus particle with its protein spikes attaches to a human cell receptor called the ACE2 receptor. Once these spikes bind with the receptor, that unlocks a channel that allows the virus to penetrate the cell.
    That binding mechanism between the proteins and the receptors works something like a lock and key, and that’s why the vibrations matter, according to Buehler. “If it’s static, it just either fits or it doesn’t fit,” he says. But the protein spikes are not static; “they’re vibrating and continuously changing their shape slightly, and that’s important. Keys are static, they don’t change shape, but what if you had a key that’s continuously changing its shape — it’s vibrating, it’s moving, it’s morphing slightly? They’re going to fit differently depending on how they look at the moment when we put the key in the lock.”
    The more the “key” can change, the researchers reason, the likelier it is to find a fit.
    Buehler and Hu modeled the vibrational characteristics of these protein molecules and their interactions, using analytical tools such as “normal mode analysis.” This method is used to study the way vibrations develop and propagate, by modeling the atoms as point masses connected to each other by springs that represent the various forces acting between them.
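The masses-and-springs idea behind normal mode analysis can be illustrated on a toy system; this is a minimal one-dimensional sketch, not the authors’ atomistic model. Three equal point masses are joined by identical springs with free ends; diagonalizing the force-constant (Hessian) matrix yields the vibrational mode frequencies.

```python
import numpy as np

k = 1.0  # spring constant (arbitrary units)
m = 1.0  # point mass (arbitrary units)

# Force-constant (Hessian) matrix for a free 3-mass chain:
# each spring couples neighboring masses.
K = k * np.array([[ 1, -1,  0],
                  [-1,  2, -1],
                  [ 0, -1,  1]], dtype=float)

# For equal masses, the squared mode frequencies are eigenvalues of K/m;
# eigenvectors describe the atomic displacement pattern of each mode.
omega_sq, modes = np.linalg.eigh(K / m)
frequencies = np.sqrt(np.clip(omega_sq, 0.0, None))
# The zero-frequency mode is rigid translation of the whole chain;
# the remaining modes are genuine internal vibrations.
```

Scaling the same recipe up to every atom in a spike protein, with spring constants representing interatomic forces, gives the mode spectrum whose features the researchers compared against epidemiological data.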
    They found that differences in vibrational characteristics correlate strongly with the different rates of infectivity and lethality of different kinds of coronaviruses, taken from a global database of confirmed case numbers and case fatality rates. The viruses studied included SARS-CoV, MERS-CoV, SARS-CoV-2, and one known mutation of the SARS-CoV-2 virus that is becoming increasingly prevalent around the world. This makes this method a promising tool for predicting the potential risks from new coronaviruses that emerge, as they likely will, Buehler says.

    In all the cases they have studied, Hu says, a crucial part of the process is fluctuations in an upward swing of one branch of the protein molecule, which helps make it accessible to bind to the receptor. “That movement is of significant functional importance,” she says. Another key indicator has to do with the ratio between two different vibrational motions in the molecule. “We find that these two factors show a direct relationship to the epidemiological data, the virus infectivity and also the virus lethality,” she says.
    The correlations they found mean that when new viruses or new mutations of existing ones appear, “you could screen them from a purely mechanical side,” Hu says. “You can just look at the fluctuations of these spike proteins and find out how they may act on the epidemiological side, like how infectious and how serious would the disease be.”
    Potentially, these findings could also provide a new avenue for research on possible treatments for Covid-19 and other coronavirus diseases, Buehler says, speculating that it might be possible to find a molecule that would bind to the spike proteins in a way that would stiffen them and limit their vibrations. Another approach might be to induce opposite vibrations to cancel out the natural ones in the spikes, similarly to the way noise-canceling headphones suppress unwanted sounds.
    As biologists learn more about the various kinds of mutations taking place in coronaviruses, and identify which areas of the genomes are most subject to change, this methodology could also be used predictively, Buehler says. The most likely kinds of mutations to emerge could all be simulated, and those that have the most dangerous potential could be flagged so that the world could be alerted to watch for any signs of the actual emergence of those particular strains. Buehler adds, “The G614 mutation, for instance, that is currently dominating the Covid-19 spread around the world, is predicted to be slightly more infectious, according to our findings, and slightly less lethal.”
    Mihri Ozkan, a professor of electrical and computer engineering at the University of California at Riverside, who was not connected to this research, says this analysis “points out the direct correlation between nanomechanical features and the lethality and infection rate of coronavirus. I believe his work leads the field forward significantly to find insights on the mechanics of diseases and infections.”
    Ozkan adds that “If under the natural environmental conditions, overall flexibility and mobility ratios predicted in this work do happen, identifying an effective inhibitor that can lock the spike protein to prevent binding could be a holy grail of preventing SARS-CoV-2 infections, which we all need now desperately.”
    The research was supported by the MIT-IBM Watson AI Lab, the Office of Naval Research, and the National Institutes of Health.