Deep learning is advancing at lightning speed, and Alexander Amini ’17 and Ava Soleimany ’16 want to make sure they have your attention as they dive deep on the math behind the algorithms and the ways that deep learning is transforming daily life.
Last year, their blockbuster course, 6.S191 (Introduction to Deep Learning), opened with a fake video welcome from former President Barack Obama. This year, the pair delivered their lectures “live” from the Stata Center — after taping them weeks in advance from their kitchen, outfitted for the occasion with studio lights, a podium, and a green screen for projecting the Kirsch Auditorium blackboard onto their Zoom backgrounds.
“It’s hard for students to stay engaged when they’re looking at a static image of an instructor,” says Amini. “We wanted to recreate the dynamic of a real classroom.”
Amini is a graduate student in MIT’s Department of Electrical Engineering and Computer Science (EECS), and Soleimany is a graduate student at MIT and Harvard University. They co-developed 6.S191’s curriculum and have taught it during MIT’s Independent Activities Period (IAP) for four of the last five years. Their lectures and software labs are updated each year, but this year’s pandemic edition posed a special challenge. They responded with a mix of low- and high-tech solutions, from filming the lectures in advance to holding help sessions on a Minecraft-like platform that mimics the feel of socializing in person.
Some students realized the lectures weren’t live after noticing clues like the abrupt wardrobe change as the instructors shifted from lecture mode to the help session immediately after class. Those who caught on congratulated the pair in their course evaluations. Those who didn’t reacted with amazement. “You mean they weren’t livestreamed?” asked PhD student Nada Tarkhan, after a long pause. “It absolutely felt like one instructor was giving the lecture, while the other was answering questions in the chat box.”
The growing popularity of 6.S191 — both as a for-credit class at MIT, and a self-paced course online — mirrors the rise of deep neural networks for everything from language translation to facial recognition. In a series of clear and engaging lectures, Amini and Soleimany cover the technical foundations of deep nets, and how the algorithms pick out patterns in reams of data to make predictions. They also explore deep learning’s myriad applications, and how students can evaluate a model’s predictions for accuracy and bias.
Responding to student feedback, Amini and Soleimany this year extended the course from one week to two, giving students more time to absorb the material and put together final projects. They also added two new lectures: one on uncertainty estimation, the other on algorithmic bias and fairness. By moving the class online, they were also able to admit an extra 200 students who would have been turned away by Kirsch Auditorium’s 350-seat limit.
To make it easier for students to connect with teaching assistants and each other, Amini and Soleimany introduced Gather.Town, a platform they discovered at a machine learning conference this past fall. Students moved their avatars about in the virtual 6.S191 auditorium to ask homework questions, or find collaborators and troubleshoot problems tied to their final projects.
Students gave the course high marks for its breadth and organization. “I knew the buzzwords like reinforcement learning and RNNs, but I never really grasped the details, like creating parameters in TensorFlow and setting activation functions,” says sophomore Claire Dong. “I came out of the class clearer and more energized about the field.”
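The details Dong mentions boil down to surprisingly little machinery. As a conceptual sketch only (written in plain NumPy rather than TensorFlow, and unrelated to the course's actual lab code), a dense layer is just a set of trainable parameters plus an activation function applied to a matrix product:

```python
import numpy as np

# Illustrative only: the shapes and random values here are invented.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))   # the layer's trainable "parameters"
b = np.zeros(4)               # bias, also trainable

def relu(x):
    return np.maximum(x, 0)   # a common activation function

x = rng.normal(size=(1, 3))   # one input example with 3 features
h = relu(x @ W + b)           # forward pass: output shape (1, 4)
print(h.shape)
```

In a framework like TensorFlow, creating `W` and `b` and choosing `relu` are what a layer definition does for you; training then adjusts those parameters from data.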
This year, 50 teams presented final projects, twice as many as the year before, and they covered an even broader range of applications, say Amini and Soleimany, from trading cryptocurrencies to predicting forest fires to simulating protein folding in a cell.
“The extra week really helped them craft their idea, create some of it, code it up, and put together the pieces into a presentation,” says Amini.
“They were just brilliant,” adds Soleimany. “The quality and organization of their ideas, the talks.”
Four projects were picked for prizes.
The first was a proposal for classifying brain signals to differentiate right-hand movements from left. Before transferring to MIT from Miami-Dade Community College, Nelson Hidalgo had worked on brain-computer interfaces to help people with paralysis regain control of their limbs. For his final project, Hidalgo, a sophomore in EECS, used EEG brain wave recordings to build a model for sorting the signals of someone attempting to move their right hand versus their left.
His neural network architecture featured a combined convolutional and recurrent neural net working in parallel to extract sequential and spatial patterns in the data. The result was a model that improved on other methods for predicting the brain’s intention to move either hand, he says. “A more accurate classifier could really make this technology accessible to patients on a daily basis.”
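The parallel-branch idea can be sketched in miniature. The toy model below is illustrative only (the layer sizes, weights, and fake EEG data are invented; this is not Hidalgo's actual architecture): one branch slides a convolution kernel over the signal to pick out local patterns, a minimal recurrent unit tracks temporal structure, and the two branches' features are concatenated for classification.

```python
import numpy as np

rng = np.random.default_rng(0)
T, C = 100, 8                      # time steps, EEG channels (invented)
eeg = rng.normal(size=(T, C))      # fake recording for illustration

# Convolutional branch: slide a small kernel along the time axis.
kernel = rng.normal(size=(5, C))
conv_feats = np.array([np.sum(eeg[t:t + 5] * kernel) for t in range(T - 4)])
conv_summary = conv_feats.max()    # crude global max pooling

# Recurrent branch: a vanilla RNN over the same signal.
H = 4
Wx = rng.normal(size=(C, H)) * 0.1
Wh = rng.normal(size=(H, H)) * 0.1
h = np.zeros(H)
for t in range(T):
    h = np.tanh(eeg[t] @ Wx + h @ Wh)

# Concatenate both branches' features and score right vs. left.
features = np.concatenate([[conv_summary], h])
w_out = rng.normal(size=features.shape[0])
prediction = "right" if features @ w_out > 0 else "left"
print(features.shape, prediction)
```

A real model would learn all of these weights from labeled EEG trials; the point of the sketch is only the parallel spatial-plus-sequential structure.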
A second project explored the potential of AI-based forestry. Tree planting has become a popular way for companies to offset their carbon emissions, but tracking how much carbon dioxide those trees actually absorb is still an inexact science. Peter McHale, a master’s student at the MIT Sloan School of Management, proposed that his recently launched startup, Gaia AI, could fly drones over forests to take detailed images of the canopy from above and below.
Those high-resolution pictures could help forest managers better estimate tree growth, he says, and calculate how much carbon they’ve soaked up from the air. The footage could also provide clues about what kinds of trees grow best in certain climates and conditions. “Drones can take measurements more cheaply and accurately than humans can,” he says.
Under Gaia AI’s first phase of development, McHale says he plans to focus on selling high-quality, drone-gathered sensor data to timber companies in need of cheaper, more accurate surveying methods, as well as companies providing third-party validation for carbon offsets. In phase two, McHale envisions turning those data, and the profits they generate, toward attacking climate change through drone-based tree-planting.
A third project explored the state of the art for encoding intelligent behavior into robots. As a SuperUROP student in Professor Sangbae Kim’s lab, Savva Morozov works with the mini cheetah and is interested in figuring out ways that robots like it might learn how to learn.
For his project, Morozov, a junior in the Department of Aeronautics and Astronautics, presented a scenario: a mini cheetah-like robot is struggling to scale a pile of rubble. It spots a wooden plank that could be picked up with its robotic arm and turned into a ramp. But it has neither the imagination nor repertoire of skills to build a tool to reach the summit. Morozov explained how different learning-to-learn methods could help to solve the problem.
A fourth project proposed the use of deep learning to make it easier to analyze street-view images of buildings to model an entire city’s energy consumption. An algorithm developed by MIT’s Sustainable Design Lab and PhD student Jakub Szczesniak estimates the window-to-wall ratio for a building based on details captured in the photo, but processing the image requires a lot of tedious work at the front end.
Nada Tarkhan, a PhD student in the School of Architecture and Planning, proposed adding an image-processing convolutional neural net to the workflow to make the analysis faster and more reliable. “We hope it can help us gather more accurate data to understand building features in our cities — the façade characteristics, materials, and window-to-wall ratios,” she says. “The ultimate goal is to improve our understanding of how buildings perform citywide.”
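The quantity being estimated is simple once a façade photo has been segmented. In the sketch below (a toy example, not the Sustainable Design Lab's algorithm), a hypothetical per-pixel mask marks each façade pixel as window (1) or wall (0), and the window-to-wall ratio reduces to a pixel count:

```python
import numpy as np

# Hypothetical 10x10 façade segmentation: 1 = window pixel, 0 = wall.
# In the proposed workflow, a CNN would produce this mask from a
# street-view photograph.
facade = np.zeros((10, 10), dtype=int)
facade[2:4, 1:9] = 1          # one band of windows (16 pixels)
facade[6:8, 1:9] = 1          # a second band (16 pixels)

wwr = facade.sum() / facade.size   # window pixels / total façade pixels
print(wwr)   # → 0.32
```

The tedious front-end work Tarkhan describes is producing that mask reliably across thousands of messy real-world images, which is exactly the step a convolutional net is suited to automate.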
Based on student feedback, Amini and Soleimany say they plan to keep the added focus on uncertainty and bias while pushing the course into new areas. “We love hearing that students were inspired to take further AI/ML classes after taking 6.S191,” says Soleimany. “We hope to continue innovating to keep the course relevant.”
Funding for the class was provided by Ernst & Young, Google, the MIT-IBM Watson AI Lab, and NVIDIA.
In the era of social distancing, using robots for some health care interactions is a promising way to reduce in-person contact between health care workers and sick patients. However, a key question that needs to be answered is how patients will react to a robot entering the exam room.
Researchers from MIT and Brigham and Women’s Hospital recently set out to answer that question. In a study performed in the emergency department at Brigham and Women’s, the team found that a large majority of patients reported that interacting with a health care provider via a video screen mounted on a robot was similar to an in-person interaction with a health care worker.
“We’re actively working on robots that can help provide care to maximize the safety of both the patient and the health care workforce. The results of this study give us some confidence that people are ready and willing to engage with us on those fronts,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.
In a larger online survey conducted nationwide, the researchers also found that a majority of respondents were open to having robots not only assist with patient triage but also perform minor procedures such as taking a nose swab.
Peter Chai, an assistant professor of emergency medicine at Brigham and Women’s Hospital and a research affiliate in Traverso’s lab, is the lead author of the study, which appears today in JAMA Network Open.
Triage by robot
After the Covid-19 pandemic began early last year, Traverso and his colleagues turned their attention toward new strategies to minimize interactions between potentially sick patients and health care workers. To that end, they worked with Boston Dynamics to create a mobile robot that could interact with patients as they waited in the emergency department. The robots were equipped with sensors that allow them to measure vital signs, including skin temperature, breathing rate, pulse rate, and blood oxygen saturation. The robots also carried an iPad that allowed for remote video communication with a health care provider.
This kind of robot could reduce health care workers’ risk of exposure to Covid-19 and help to conserve the personal protective equipment that is needed for each interaction. However, the question still remained whether patients would be receptive to this type of interaction.
“Often as engineers, we think about different solutions, but sometimes they may not be adopted because people are not fully accepting of them,” Traverso says. “So, in this study we were trying to tease that out and understand if the population is receptive to a solution like this one.”
The researchers first conducted a nationwide survey of about 1,000 people, working with a market research company called YouGov. They asked questions regarding the acceptability of robots in health care, including whether people would be comfortable with robots performing not only triage but also other tasks such as performing nasal swabs, inserting a catheter, or turning a patient over in bed. On average, the respondents stated that they were open to these types of interactions.
The researchers then tested one of their robots in the emergency department at Brigham and Women’s Hospital last spring, when Covid-19 cases were surging in Massachusetts. Fifty-one patients were approached in the waiting room or a triage tent and asked if they would be willing to participate in the study, and 41 agreed. These patients were interviewed about their symptoms via video connection, using an iPad carried by a quadruped, dog-like robot developed by Boston Dynamics. More than 90 percent of the participants reported that they were satisfied with the robotic system.
“For the purposes of gathering quick triage information, the patients found the experience to be similar to what they would have experienced talking to a person,” Chai says.
The numbers from the study suggest that it could be worthwhile to try to develop robots that can perform procedures that currently require a lot of human effort, such as turning a patient over in bed, the researchers say. Turning Covid-19 patients onto their stomachs, also known as “proning,” has been shown to boost their blood oxygen levels and make breathing easier. Currently the process requires several people to perform. Administering Covid-19 tests is another task that requires a lot of time and effort from health care workers, who could be deployed for other tasks if robots could help perform swabs.
“Surprisingly, people were pretty accepting of the idea of having a robot do a nasal swab, which suggests that potential engineering efforts could go into thinking about building some of these systems,” Chai says.
The MIT team is continuing to develop sensors that can obtain vital sign data from patients remotely, and they are working on integrating these systems into smaller robots that could operate in a variety of environments, such as field hospitals or ambulances.
Other authors of the paper include Farah Dadabhoy, Hen-wei Huang, Jacqueline Chu, Annie Feng, Hien Le, Joy Collins, Marco da Silva, Marc Raibert, Chin Hur, and Edward Boyer. The research was funded by the National Institutes of Health, the Hans and Mavis Lopater Psychosocial Foundation, E Ink Corporation, the Karl Van Tassel (1925) Career Development Professorship, MIT’s Department of Mechanical Engineering, and the Brigham and Women’s Hospital Division of Gastroenterology.
Since the start of the Covid-19 pandemic, charts and graphs have helped communicate information about infection rates, deaths, and vaccinations. In some cases, such visualizations can encourage behaviors that reduce virus transmission, like wearing a mask. Indeed, the pandemic has been hailed as the breakthrough moment for data visualization.
But new findings suggest a more complex picture. A study from MIT shows how coronavirus skeptics have marshalled data visualizations online to argue against public health orthodoxy about the benefits of mask mandates. Such “counter-visualizations” are often quite sophisticated, using datasets from official sources and state-of-the-art visualization methods.
The researchers combed through hundreds of thousands of social media posts and found that coronavirus skeptics often deploy counter-visualizations alongside the same “follow-the-data” rhetoric as public health experts, yet the skeptics argue for radically different policies. The researchers conclude that data visualizations aren’t sufficient to convey the urgency of the Covid-19 pandemic, because even the clearest graphs can be interpreted through a variety of belief systems.
“A lot of people think of metrics like infection rates as objective,” says Crystal Lee. “But they’re clearly not, based on how much debate there is on how to think about the pandemic. That’s why we say data visualizations have become a battleground.”
The research will be presented at the ACM Conference on Human Factors in Computing Systems in May. Lee is the study’s lead author and a PhD student in MIT’s History, Anthropology, Science, Technology, and Society (HASTS) program and MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), as well as a fellow at Harvard University’s Berkman Klein Center for Internet and Society. Co-authors include Graham Jones, a Margaret MacVicar Faculty Fellow in Anthropology; Arvind Satyanarayan, the NBX Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science and CSAIL; Tanya Yang, an MIT undergraduate; and Gabrielle Inchoco, a Wellesley College undergraduate.
As data visualizations rose to prominence early in the pandemic, Lee and her colleagues set out to understand how they were being deployed throughout the social media universe. “An initial hypothesis was that if we had more data visualizations, from data collected in a systematic way, then people would be better informed,” says Lee. To test that hypothesis, her team blended computational techniques with innovative ethnographic methods.
They used their computational approach on Twitter, scraping nearly half a million tweets that referred to both “Covid-19” and “data.” With those tweets, the researchers generated a network graph to find out “who’s retweeting whom and who likes whom,” says Lee. “We basically created a network of communities who are interacting with each other.” Clusters included groups like the “American media community” or “antimaskers.” The researchers found that antimask groups were creating and sharing data visualizations as much as, if not more than, other groups.
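The first step of that pipeline can be sketched in a few lines. The toy example below uses invented account names and simple connected components as a stand-in for the real community-detection methods the researchers would have applied at scale; it shows only the who-retweets-whom structure being clustered:

```python
from collections import defaultdict

# Toy retweet edges (invented accounts, for illustration only).
retweets = [
    ("alice", "news_outlet"), ("bob", "news_outlet"),
    ("carol", "skeptic_blog"), ("dave", "skeptic_blog"),
    ("carol", "dave"),
]

adj = defaultdict(set)
for src, dst in retweets:
    adj[src].add(dst)
    adj[dst].add(src)          # treat retweet ties as undirected

def communities(adj):
    """Connected components: a crude proxy for community detection."""
    seen, groups = set(), []
    for node in adj:
        if node in seen:
            continue
        stack, group = [node], set()
        while stack:
            n = stack.pop()
            if n in group:
                continue
            group.add(n)
            stack.extend(adj[n] - group)
        seen |= group
        groups.append(group)
    return groups

print(sorted(len(g) for g in communities(adj)))  # → [3, 3]
```

On half a million tweets the same idea surfaces clusters like the “American media community” and “antimaskers” that the study goes on to compare.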
And those visualizations weren’t sloppy. “They are virtually indistinguishable from those shared by mainstream sources,” says Satyanarayan. “They are often just as polished as graphs you would expect to encounter in data journalism or public health dashboards.”
“It’s a very striking finding,” says Lee. “It shows that characterizing antimask groups as data-illiterate, or as not engaging with the data, is empirically false.”
Lee says this computational approach gave them a broad view of Covid-19 data visualizations. “What is really exciting about this quantitative work is that we’re doing this analysis at a huge scale. There’s no way I could have read half a million tweets.”
But the Twitter analysis had a shortcoming. “I think it misses a lot of the granularity of the conversations that people are having,” says Lee. “You can’t necessarily follow a single thread of conversation as it unfolds.” For that, the researchers turned to a more traditional anthropology research method — with an internet-age twist.
Lee’s team followed and analyzed conversations about data visualizations in antimask Facebook groups — a practice they dubbed “deep lurking,” an online version of the ethnographic technique called “deep hanging out.” Lee says, “Understanding a culture requires you to observe the day-to-day informal goings-on — not just the big formal events. Deep lurking is a way to transpose these traditional ethnography approaches to the digital age.”
The qualitative findings from deep lurking appeared consistent with the quantitative Twitter findings. Antimaskers on Facebook weren’t eschewing data. Rather, they discussed how different kinds of data were collected and why. “Their arguments are really quite nuanced,” says Lee. “It’s often a question of metrics.” For example, antimask groups might argue that visualizations of infection numbers could be misleading, in part because of the wide range of uncertainty in infection rates, compared to measurements like the number of deaths. In response, members of the group would often create their own counter-visualizations, even instructing each other in data visualization techniques.
“I’ve been to livestreams where people screen share and look at the data portal from the state of Georgia,” says Lee. “Then they’ll talk about how to download the data and import it into Excel.”
Jones says the antimask groups’ “idea of science is not listening passively as experts at a place like MIT tell everyone else what to believe.” He adds that this kind of behavior marks a new turn for an old cultural current. “Antimaskers’ use of data literacy reflects deep-seated American values of self-reliance and anti-expertise that date back to the founding of the country, but their online activities push those values into new arenas of public life.”
He adds that “making sense of these complex dynamics would have been impossible” without Lee’s “visionary leadership in masterminding an interdisciplinary collaboration that spanned SHASS and CSAIL.”
The mixed methods research “advances our understanding of data visualizations in shaping public perception of science and politics,” says Jevin West, a data scientist at the University of Washington, who was not involved with the research. Data visualizations “carry a veneer of objectivity and scientific precision. But as this paper shows, data visualizations can be used effectively on opposite sides of an issue,” he says. “It underscores the complexity of the problem — that it is not enough to ‘just teach media literacy.’ It requires a more nuanced sociopolitical understanding of those creating and interpreting data graphics.”
Combining computational and anthropological insights led the researchers to a more nuanced understanding of data literacy. Lee says their study reveals that, compared to public health orthodoxy, “antimaskers see the pandemic differently, using data that is quite similar. I still think data analysis is important. But it’s certainly not the salve that I thought it was in terms of convincing people who believe that the scientific establishment is not trustworthy.” Lee says their findings point to “a larger rift in how we think about science and expertise in the U.S.” That same rift runs through issues like climate change and vaccination, where similar dynamics often play out in social media discussions.
To make these results accessible to the public, Lee and her collaborator, CSAIL PhD student Jonathan Zong, led a team of seven MIT undergraduate researchers to develop an interactive narrative where readers can explore the visualizations and conversations for themselves.
Lee describes the team’s research as a first step in making sense of the role of data and visualizations in these broader debates. “Data visualization is not objective. It’s not absolute. It is in fact an incredibly social and political endeavor. We have to be attentive to how people interpret them outside of the scientific establishment.”
This research was funded, in part, by the National Science Foundation and the Social Science Research Council.
For 17 years, the Microsystems Annual Research Conference (MARC) has brought together an audience of over 200 every January for a two-day exploration of research achievements. A gathering that prizes personal interactions as much as academic presentations, MARC traditionally takes place in New Hampshire, where skiing, snowshoeing, and social activities intermingle with poster sessions and technical talks.
So, how could the spirit of MARC be preserved during a pandemic, when coming together is not possible?
This was the major challenge faced by MIT student co-chairs Jessica Boles and Qingyun Xie when they met in April 2020 to start planning for MARC 2021, held on Jan. 26–27 and co-sponsored by the Microsystems Technology Laboratories (MTL) and, for the second year, jointly with MIT.nano. The two electrical engineering and computer science (EECS) PhD candidates convened a student committee of 16 individuals who, over 10 months, managed traditional responsibilities such as reviewing abstracts, finding keynote speakers, and organizing social activities, while also navigating new hurdles — the selection of an online platform, concurrent virtual poster presentations, physical and digital distribution of materials, and online networking.
“The events of the past year have shown us that one of the most precious pillars of an organization is the community it fosters,” said Boles and Xie in a letter to this year’s attendees. “When it became clear that a virtual MARC 2021 was our only path forward, our committee was determined to retain the uniqueness and community of MARC at a time when many of us need it most.”
MARC 2021 was held on Gather, an online platform with the look and feel of a retro video game in which attendees use their keyboard to navigate a virtual space as digital characters. Participants could “bump into” other conference goers, sit at virtual tables to eat lunch together, settle into pixelated armchairs for discussion, visit an auditorium with a center stage for lectures, and browse a poster session hall — all reminiscent of a traditional MARC.
The selection and design of this virtual setting was just one of the many tasks accomplished by the student core committee, which broke responsibilities into six categories. Outreach efforts were led by Sarah Muschinske (EECS), conference package logistics by John Niroula (EECS), website and proceedings by Jatin Patil (Department of Materials Science and Engineering), communication training by Nili Persits (EECS), social/networking activities by Kaidong Peng (EECS), and conference platform by Haoquan “Tony” Zhang (EECS).
Keeping attendees engaged was a high priority for the MARC committee — and their efforts proved successful. A record-breaking 327 individuals attended the conference, which was open to MIT students, faculty, and members of MTL’s Microsystems Industrial Group and MIT.nano’s Consortium. MARC 2021 featured 87 student abstracts from over 30 research groups, on par with student presentations from previous years. A networking lunch for students and industry partners — a traditional highlight for the in-person event — saw representation from 12 companies.
A student-run comedy show replaced winter sports. Evening social activities included online games and a virtual escape room. A new series of MIT faculty rap sessions was added this year following each technical block and moderated by a MARC student committee member. To preserve a tangible aspect to the conference, attendees received a package in the mail containing, among other things, a face mask with a nano-silver filter, a chocolate bar featuring the Boston skyline, and a handwritten postcard from the co-chairs. Conference meals were also provided via online ordering.
One benefit of a virtual MARC? Keynote speakers could join from anywhere in the world as long as they had an internet connection. Irwin Jacobs SM ’57 ScD ’59, founding chairman and CEO emeritus of Qualcomm, opened the conference from California with a fireside chat with MIT PhD student Kruthika Kikkeri about his time working in academia and why he decided to move into the startup world. Jacobs offered words of advice to entrepreneurs thinking about starting their own business.
“You have to have perseverance,” he said. “There are always people who will tell you ‘You can’t do that’ or ‘It doesn’t make sense’ because they don’t want to make changes to what they’re doing, and you might be competing with something that’s ongoing. Find the right people to work with, give them the right environment to work, a lot of elbow room, [and] freedom to come up with new ideas.”
MIT student research was presented over the course of the two days through prerecorded pitches and virtual poster sessions, which were broken into three technical blocks, each containing three research categories. Topics included quantum technologies, power, electronic devices, biotechnologies, energy-efficient AI, nanostructures and nanomaterials, integrated circuits, optics and photonics, and Covid-19. Each category was carefully curated by an EECS graduate student session chair who reviewed abstracts, provided feedback, and ensured all pitch and poster deadlines were met. The 2021 session chairs were Eric Bersin, Benjamin Cary, Nadim Chowdhury, Kruthika Kikkeri, Hsin-Yu (Jane) Lai, Ting-An Lin, Elaine McVay, Rishabh Mittal, and Milica Notaros.
The second day began with a special session featuring lightning talks showcasing technologies being developed at MIT that are applicable to the fight against Covid-19 or similar threats in the future.
Adam Wentworth, research affiliate at the Koch Institute for Integrative Cancer Research, and Sirma Orguc, postdoc with the Institute for Medical Engineering and Science, presented the TEAL respirator — an N95 alternative with a flexible fit and health- and environment-monitoring sensors. Research Laboratory of Electronics postdoc Dohyun Lee discussed rapid monitoring of sepsis using microfluidics. Kikkeri, a PhD student in Joel Voldman’s research group, presented an at-home sensing platform that could be used for sensitive and rapid measurement of protein biomarkers. Michael Specter, PhD student in the Computer Science and Artificial Intelligence Laboratory, explained SonicPACT, an ultrasonic ranging method for contact tracing and exposure notifications. Finally, PhD students Mantian Xue and Jiadi Zhu, from Tomás Palacios’ research group, presented new bioelectronic sensing technology for fast, accurate Covid-19 screening, and UV-C light for human-friendly sanitizing, respectively.
In their closing remarks, MTL Director Hae-Seung Lee and MIT.nano Director Vladimir Bulović both spoke to the fun, interactive, and enlightening elements of MARC that the students were able to translate to a virtual setting.
“Our success is our community — and MARC 2021 has just demonstrated it,” said Bulović. “Thank you for bringing us together and giving us a chance to exchange the best of our ideas and to strike new friendships that will lead to the next set of great innovations.”
“The scope of the research covered is truly staggering,” said Lee. “Despite all the challenges and concerns during the planning stages of the conference, I believe this year’s MARC far exceeded expectations.”
Imagine a robot.
Perhaps you’ve just conjured a machine with a rigid, metallic exterior. While robots armored with hard exoskeletons are common, they’re not always ideal. Soft-bodied robots, inspired by fish or other squishy creatures, might better adapt to changing environments and work more safely with people.
Roboticists generally have to decide whether to design a hard- or soft-bodied robot for a particular task. But that tradeoff may no longer be necessary.
Working with computer simulations, MIT researchers have developed a concept for a soft-bodied robot that can turn rigid on demand. The approach could enable a new generation of robots that combine the strength and precision of rigid robots with the fluidity and safety of soft ones.
“This is the first step in trying to see if we can get the best of both worlds,” says James Bern, the paper’s lead author and a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL).
Bern will present the research at the IEEE International Conference on Soft Robotics next month. Bern’s advisor, Daniela Rus, who is the CSAIL director and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, is the paper’s other author.
Roboticists have experimented with myriad mechanisms to operate soft robots, including inflating balloon-like chambers in a robot’s arm or grabbing objects with vacuum-sealed coffee grounds. However, a key unsolved challenge for soft robotics is control — how to drive the robot’s actuators in order to achieve a given goal.
Until recently, most soft robots were controlled manually, but in 2017 Bern and his colleagues proposed that an algorithm could take the reins. Using a simulation to help control a cable-driven soft robot, they picked a target position for the robot and had a computer figure out how much to pull on each of the cables in order to get there. A similar sequence happens in our bodies each time we reach for something: A target position for our hand is translated into contractions of the muscles in our arm.
Now, Bern and his colleagues are using similar techniques to ask a question that goes beyond the robot’s movement: “If I pull the cables in just the right way, can I get the robot to act stiff?” Bern says he can — at least in a computer simulation — thanks to inspiration from the human arm. While contracting the biceps alone can bend your elbow to a certain degree, contracting the biceps and triceps simultaneously can lock your arm rigidly in that position. Put simply, “you can get stiffness by pulling on both sides of something,” says Bern. So, he applied the same principle to his robots.
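The principle can be captured in a toy one-dimensional model (the spring approximation and all the numbers below are invented for illustration; the paper works this out for full cable-driven robot geometries): the difference between two opposing cable tensions sets the joint's position, while their sum sets how stiffly it resists a push.

```python
def joint_response(t_left, t_right, k=1.0, push=0.5):
    """Toy model: each taut cable acts like a spring whose stiffness
    scales with its tension. Returns (equilibrium position,
    displacement under an external push)."""
    position = (t_right - t_left) / (k * (t_left + t_right))
    stiffness = k * (t_left + t_right)       # co-contraction adds up
    displacement = push / stiffness
    return position, displacement

# Same net position, very different rigidity:
soft = joint_response(t_left=1.0, t_right=1.0)     # low co-contraction
stiff = joint_response(t_left=10.0, t_right=10.0)  # high co-contraction
print(soft[0] == stiff[0])   # both joints sit at center
print(stiff[1] < soft[1])    # but the stiff one barely moves when pushed
```

This is the biceps-and-triceps intuition in miniature: pulling harder on both sides leaves the position unchanged while multiplying the resistance to disturbance.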
The researchers’ paper lays out a way to simultaneously control the position and stiffness of a cable-driven soft robot. The method takes advantage of the robots’ multiple cables — using some to twist and turn the body, while using others to counterbalance each other to tweak the robot’s rigidity. Bern emphasizes that the advance isn’t a revolution in mechanical engineering, but rather a new twist on controlling cable-driven soft robots.
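The arm analogy suggests a simple way to see how one set of cables can encode both pose and stiffness. The toy sketch below is purely illustrative (the function and the linear stiffness model are assumptions, not the authors' controller): for a one-joint system with two opposing cables, the difference in tensions sets the net torque, while their sum sets the rigidity.

```python
# Toy 1-D sketch of antagonistic cable control (illustrative only; not the
# paper's actual method). Two cables pull on opposite sides of a joint:
# their tension difference sets the pose, their sum sets the stiffness.

def cable_tensions(target_torque, target_stiffness, k=1.0):
    """Solve t1 - t2 = target_torque and k * (t1 + t2) = target_stiffness."""
    total = target_stiffness / k
    t1 = (total + target_torque) / 2
    t2 = (total - target_torque) / 2
    if t1 < 0 or t2 < 0:
        raise ValueError("cables can only pull; request is infeasible")
    return t1, t2

# Same pose (zero net torque) at two different stiffness levels:
soft = cable_tensions(0.0, 2.0)   # low co-contraction: (1.0, 1.0)
stiff = cable_tensions(0.0, 8.0)  # high co-contraction: (4.0, 4.0)
```

Note the trade-off the sketch makes visible: stiffening costs tension in both cables at once, which is exactly the "pulling on both sides" that Bern describes.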
“This is an intuitive way of expanding how you can control a soft robot,” he says. “It’s just encoding that idea [of on-demand rigidity] into something a computer can work with.” Bern hopes his roadmap will one day allow users to control a robot’s rigidity as easily as its motion.
On the computer, Bern used his roadmap to simulate movement and rigidity adjustment in robots of various shapes. He tested how well the robots, when stiffened, could resist displacement when pushed. Generally, the robots remained rigid as intended, though they were not equally resistant from all angles.
“Dual-mode materials that can change stiffness are always fascinating,” says Muhammad Hussain, an electrical engineer at the University of California at Berkeley, who was not involved with the research. He suggested potential applications in health care, where soft robots could one day travel through the bloodstream and then stiffen to perform microsurgery at a particular site in the body. Hussain says Bern’s demonstration “shows a viable path toward that future.”
Bern is building a prototype robot to test out his rigidity-on-demand control system. But he hopes to one day take the technology out of the lab. “Interacting with humans is definitely a vision for soft robotics,” he says. Bern points to potential applications in caring for human patients, where a robot’s softness could enhance safety, while its ability to become rigid could allow for lifting when necessary.
“The core message is to make it easy to control robots’ stiffness,” says Bern. “Let’s start making soft robots that are safe but can also act rigid on demand, and expand the spectrum of tasks robots can perform.”
On March 1, MIT Solve launched its 2021 Global Challenges, with over $1.5 million in prize funding available to innovators worldwide.
Solve seeks tech-based solutions from social entrepreneurs around the world that address five challenges. Anyone, anywhere can apply to address the challenges by the June 16 deadline. Solve also announced Eric S. Yuan, founder and CEO of Zoom, and Karlie Kloss, founder of Kode With Klossy, as 2021 Challenge Ambassadors.
To help with the challenge application process, Solve runs a course with MITx entitled “Business and Impact Planning for Social Enterprises,” which introduces core business model and theory-of-change concepts to early-stage entrepreneurs.
Finalists will be invited to attend Solve Challenge Finals on Sept. 19 in New York during U.N. General Assembly week. At the event, they will pitch their solutions to Solve’s Challenge Leadership Groups, judging panels composed of industry leaders and MIT faculty. The judges will select the most promising solutions as Solver teams.
“After a year of turmoil, including a major threat to our collective health, disruption in schooling, lack of access to digital connectivity and meaningful work, a reckoning in the U.S. after centuries of institutionalized racism, or worsening natural hazards — supporting diverse innovators who are solving these challenges is more urgent than ever,” says Alex Amouyel, executive director of MIT Solve. “Solve is committed to bolstering communities in the U.S. and across the world by supporting innovators who are addressing our 2021 Global Challenges — wherever they are — through funding, mentorship, and an MIT-backed community. Whether you’re a prospective Solve partner or applicant, we hope you’ll join us!”
Solver teams participate in a nine-month program that connects them to the resources they need to scale. Thanks to its partners, to date Solve has provided over $40 million in commitments for Solver teams and entrepreneurs.
Solve’s challenge design process collects insights and ideas from industry leaders, MIT faculty, and local community voices alike.
Solve’s 2021 Global Challenges are:
Funders include the Patrick J. McGovern Foundation, General Motors, Comcast NBCUniversal, Vodafone Americas Foundation, HP, Ewing Marion Kauffman Foundation, American Student Assistance, The Robert Wood Johnson Foundation, Andan Foundation, Good Energies Foundation and the Elevate Prize Foundation. The Solve community will convene at Virtual Solve at MIT on May 3-4 with 2020 Solver teams, Solve members, and partners to build partnerships and tackle global challenges in real-time.
As a marketplace for social impact innovation, Solve’s mission is to solve world challenges. Solve finds promising tech-based social entrepreneurs around the world, then brings together MIT’s innovation ecosystem and a community of members to fund and support these entrepreneurs to help scale their impact. Organizations interested in joining the Solve community can learn more and apply for membership here.
An international team of scholars has read an unopened letter from early modern Europe — without breaking its seal or damaging it in any way — using an automated computational flattening algorithm. The team, including MIT Libraries and Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers and an MIT student and alumna, published their findings today in a Nature Communications article titled, “Unlocking history through automated virtual unfolding of sealed documents imaged by X-ray microtomography.”
The senders of these letters had closed them using “letterlocking,” the historical process of folding and securing a flat sheet of paper to become its own envelope. Jana Dambrogio, the Thomas F. Peterson Conservator at MIT Libraries, developed letterlocking as a field of study with Daniel Starza Smith, a lecturer in early modern English literature at King’s College London, and the Unlocking History research team. Since the papers’ folds, tucks, and slits are themselves valuable evidence for historians and conservators, being able to examine the letters’ contents without irrevocably damaging them is a major advancement in the study of historic documents.
“Letterlocking was an everyday activity for centuries, across cultures, borders, and social classes,” explains Dambrogio. “It plays an integral role in the history of secrecy systems as the missing link between physical communications security techniques from the ancient world and modern digital cryptography. This research takes us right into the heart of a locked letter.”
This breakthrough technique was the result of an international and interdisciplinary collaboration between conservators, historians, engineers, imaging experts, and other scholars. “The power of collaboration is that we can combine our different interests and tools to solve bigger problems,” says Martin Demaine, artist-in-residence in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a member of the research team.
The algorithm that makes the virtual unfolding possible was developed by Amanda Ghassaei SM ’17, a graduate of the Center for Bits and Atoms, and Holly Jackson, an undergraduate student in electrical engineering and computer science and a participant in MIT’s Undergraduate Research Opportunity Program (UROP). The virtual unfolding code is openly available on GitHub.
“When we got back the first scans of the letter packets, we were instantly hooked,” says Ghassaei. “Sealed letters are very intriguing objects, and these examples are particularly interesting because of the special attention paid to securing them shut.”
“We’re X-raying history,” says team member David Mills, X-ray microtomography facilities manager at Queen Mary University of London. Mills, together with Graham Davis, professor of 3D X-ray imaging at Queen Mary, used machines specially designed for use in dentistry to scan unopened “locked” letters from the 17th century. This resulted in high-resolution volumetric scans, produced by high-contrast time delay integration X-ray microtomography.
“Who would have thought that a scanner designed to look at teeth would take us so far?” says Davis.
Computational flattening algorithms were then applied to the scans of the letters. This has been done successfully before with scrolls, books, and documents with one or two folds. The intricate folding configurations of the “locked” letters, however, posed unique technical challenges.
“The algorithm ends up doing an impressive job at separating the layers of paper, despite their extreme thinness and tiny gaps between them, sometimes less than the resolution of the scan,” says Erik Demaine, professor of computer science at MIT and an expert in computational origami. “We weren’t sure it would be possible.”
The team’s approach utilizes a fully 3D geometric analysis that requires no prior information about the number or types of folds or letters in a letter packet. The virtual unfolding generates 2D and 3D reconstructions of the letters in both folded and flat states, plus images of the letters’ writing surfaces and crease patterns.
“One of the coolest technical contributions of the work is a technique that explores the folded and flattened representations of a letter simultaneously,” says Holly Jackson. “Our new technology enables conservators to preserve a letter’s internal engineering, while still giving historians insight into the lives of the senders and recipients.”
This virtual unfolding technique was used to reveal the contents of a letter dated July 31, 1697. It contains a request from Jacques Sennacques to his cousin Pierre Le Pers, a French merchant in The Hague, for a certified copy of a death notice of one Daniel Le Pers. The letter comes from the Brienne Collection, a European postmaster’s trunk preserving 300-year-old undelivered mail, which has provided a rare opportunity for researchers to study sealed locked letters.
“The trunk is a unique time capsule,” says David van der Linden, assistant professor in early modern history, Radboud University Nijmegen. “It preserves precious insights into the lives of thousands of people from all levels of society, including itinerant musicians, diplomats, and religious refugees. As historians, we regularly explore the lives of people who lived in the past, but to read an intimate story that has never seen the light of day — and never even reached its recipient — is truly extraordinary.”

Advancing a new field

In the Nature Communications article, the team also unveils the first systematization of letterlocking techniques. After studying 250,000 historical letters, they devised a chart of categories and formats that assigns letter examples a security score. Understanding these security techniques of historical correspondence means archival collections can be conserved in ways that protect small but important material details, such as slits, locks, and creases.
“Sometimes the past resists scrutiny,” explains Daniel Starza Smith. “We could simply have cut these letters open, but instead we took the time to study them for their hidden, secret, and inaccessible qualities. We’ve learned that letters can be a lot more revealing when they are left unopened.”
The research team hopes to make a study collection of letterlocking examples available to scholars and students from a range of disciplines. The virtual unfolding algorithm could also have broad applications: Because it can handle flat, curved, and sharply folded materials, it can be used on many types of historical texts, including letters, scrolls, and books.
“What we have achieved is more than simply opening the unopenable, and reading the unreadable,” says Nadine Akkerman, reader in early modern English literature at Leiden University. “We have shown how truly interdisciplinary work breaks down boundaries to investigate what neither humanities nor the sciences can hope to understand alone.”
Computational tools promise to accelerate research on letterlocking as well as reveal new historical evidence. Thanks to this research, adds Rebekah Ahrendt, associate professor of musicology at Utrecht University, “we can now imagine new affective histories that physically connect the past and the present, the human and the nonhuman, the tangible and the digital.”
The research team includes Jana Dambrogio, Thomas F. Peterson Conservator, MIT Libraries; Amanda Ghassaei, research engineer at Adobe Research; Daniel Starza Smith, lecturer in early modern English literature at King’s College London; Holly Jackson, undergraduate student at MIT; Erik Demaine, professor in EECS; Martin Demaine, robotics engineer in CSAIL and Angelika and Barton Weller Artist-in-Residence in EECS; Graham Davis and David Mills, Queen Mary University of London’s Institute of Dentistry; Rebekah Ahrendt, associate professor of musicology at Utrecht University; Nadine Akkerman, reader in early modern English literature at Leiden University; and David van der Linden, assistant professor in early modern history at Radboud University Nijmegen.
This research was supported in part by grants from the Seaver Foundation, the Delmas Foundation, the British Academy, and the Nederlandse Organisatie voor Wetenschappelijk Onderzoek.
If you’ve ever swatted a mosquito away from your face, only to have it return again (and again and again), you know that insects can be remarkably acrobatic and resilient in flight. Those traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Such traits are also hard to build into flying robots, but MIT Assistant Professor Kevin Yufeng Chen has built a system that approaches insects’ agility.
Chen, a member of the Department of Electrical Engineering and Computer Science and the Research Laboratory of Electronics, has developed insect-sized drones with unprecedented dexterity and resilience. The aerial robots are powered by a new class of soft actuator, which allows them to withstand the physical travails of real-world flight. Chen hopes the robots could one day aid humans by pollinating crops or performing machinery inspections in cramped spaces.
Chen’s work appears this month in the journal IEEE Transactions on Robotics. His co-authors include MIT PhD student Zhijian Ren, Harvard University PhD student Siyi Xu, and City University of Hong Kong roboticist Pakpong Chirarattananon.
Typically, drones require wide open spaces because they’re neither nimble enough to navigate confined spaces nor robust enough to withstand collisions in a crowd. “If we look at most drones today, they’re usually quite big,” says Chen. “Most of their applications involve flying outdoors. The question is: Can you create insect-scale robots that can move around in very complex, cluttered spaces?”
According to Chen, “The challenge of building small aerial robots is immense.” Pint-sized drones require a fundamentally different construction from larger ones. Large drones are usually powered by motors, but motors lose efficiency as you shrink them. So, Chen says, for insect-like robots “you need to look for alternatives.”
The principal alternative until now has been employing a small, rigid actuator built from piezoelectric ceramic materials. While piezoelectric ceramics allowed the first generation of tiny robots to take flight, they’re quite fragile. And that’s a problem when you’re building a robot to mimic an insect — foraging bumblebees endure a collision about once every second.
Chen designed a more resilient tiny drone using soft actuators instead of hard, fragile ones. The soft actuators are made of thin rubber cylinders coated in carbon nanotubes. When voltage is applied to the carbon nanotubes, they produce an electrostatic force that squeezes and elongates the rubber cylinder. Repeated elongation and contraction causes the drone’s wings to beat — fast.
Chen’s actuators can flap nearly 500 times per second, giving the drone insect-like resilience. “You can hit it when it’s flying, and it can recover,” says Chen. “It can also do aggressive maneuvers like somersaults in the air.” And it weighs in at just 0.6 grams, approximately the mass of a large bumblebee. The drone looks a bit like a tiny cassette tape with wings, though Chen is working on a new prototype shaped like a dragonfly.
“Achieving flight with a centimeter-scale robot is always an impressive feat,” says Farrell Helbling, an assistant professor of electrical and computer engineering at Cornell University, who was not involved in the research. “Because of the soft actuators’ inherent compliance, the robot can safely run into obstacles without greatly inhibiting flight. This feature is well-suited for flight in cluttered, dynamic environments and could be very useful for any number of real-world applications.”
Helbling adds that a key step toward those applications will be untethering the robots from a wired power source, which is currently required by the actuators’ high operating voltage. “I’m excited to see how the authors will reduce operating voltage so that they may one day be able to achieve untethered flight in real-world environments.”
Building insect-like robots can provide a window into the biology and physics of insect flight, a longstanding avenue of inquiry for researchers. Chen’s work addresses these questions through a kind of reverse engineering. “If you want to learn how insects fly, it is very instructive to build a scale robot model,” he says. “You can perturb a few things and see how it affects the kinematics or how the fluid forces change. That will help you understand how those things fly.” But Chen aims to do more than add to entomology textbooks. His drones can also be useful in industry and agriculture.
Chen says his mini-aerialists could navigate complex machinery to ensure safety and functionality. “Think about the inspection of a turbine engine. You’d want a drone to move around [an enclosed space] with a small camera to check for cracks on the turbine plates.”
Other potential applications include artificial pollination of crops or completing search-and-rescue missions following a disaster. “All those things can be very challenging for existing large-scale robots,” says Chen. Sometimes, bigger isn’t better.
In October, a modified Dallara-15 Indy Lights race car programmed by MIT Driverless will hit the famed Indianapolis Motor Speedway at speeds of up to 120 miles per hour. The Indy Autonomous Challenge (IAC) is the world’s first head-to-head, high-speed autonomous race. It offers MIT Driverless a chance to grab a piece of the $1.5 million purse while outmaneuvering fellow university innovators on what is arguably the most iconic racecourse.
But the IAC has implications beyond the track. Stakeholders for the event include Sebastian Thrun, a former winner of the DARPA Grand Challenge for autonomous vehicles, and Reilly Brennan, a lecturer at Stanford University’s Center for Automotive Research and a partner at Trucks Venture Capital. The hosts are well aware that, much like the DARPA Grand Challenge, the IAC has the potential to catalyze a new wave of innovation in the private sector.
Formed in 2018 and hosted by the Edgerton Center at MIT, MIT Driverless comprises 50 highly motivated engineers with diverse skill sets. The team is intent on learning by doing, pushing the boundaries of the autonomous driving field. “There is so much strategy involved in multiagent autonomous racing, from reinforcement learning to AI and game theory,” says systems architecture lead and chief engineer Nick Stathas, a graduate student in electrical engineering and computer science (EECS). “What excites us the most is coming up with our own approaches to problems in autonomous driving — we’re looking to define state-of-the-art solutions.”
In the lead-up to the big day, the team has been testing their algorithms at hackathons and competing in a championship series called RoboRace. The series features 12 races hosted over six events covered by livestream. In this format, MIT Driverless and their competitors program and race a sleek electric vehicle dubbed the DEVBot 2.0. Reminiscent of a Tesla Roadster, the DEVBot was designed specifically to explore the relationship between human and machine.
The twist is that RoboRace blends the physical world with a virtual world dubbed the Metaverse. Teams must traverse the track while interacting with an augmented reality replete with virtual obstacles that raise lap times and collectibles that lower them. “Think of it as real-life racing meets Mario Kart,” says Yueyang “Kylie” Ying ’19, a graduate student in EECS who works in the Path Planning division at MIT Driverless.
For this challenge, Ying and her teammates have developed a unique planning algorithm they call Spline Racer, which determines if and when their vehicle needs to deviate from the most expedient course around the track to avoid obstacles or collect rewards. “Spline Racer essentially computes potential paths and then chooses the best one to take based on total time to negotiate the path and total cost or reward from bumping into obstacles or collectibles along that path,” explains Ying.
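The cost comparison Ying describes can be sketched in a few lines. This is a hypothetical illustration (the function name, the candidate paths, and the simple additive cost model are assumptions, not the team's actual Spline Racer code): each candidate path carries a base lap time, a time penalty for virtual obstacles it hits, and a time bonus for collectibles it picks up, and the planner takes the path with the lowest effective time.

```python
# Hypothetical sketch of Spline Racer's cost comparison (names and the
# additive cost model are illustrative, not the team's actual code).

def best_path(candidates):
    """Each candidate is (lap_time_s, obstacle_penalty_s, collectible_bonus_s).
    Return the candidate with the lowest effective lap time."""
    def effective_time(c):
        lap_time, penalty, bonus = c
        return lap_time + penalty - bonus
    return min(candidates, key=effective_time)

paths = [
    (30.0, 0.0, 0.0),   # racing line, no deviations
    (31.0, 0.0, 2.5),   # detour through a collectible
    (29.5, 4.0, 0.0),   # cuts close to a virtual obstacle
]
print(best_path(paths))  # (31.0, 0.0, 2.5): the collectible detour wins
```

Under this toy model, the slightly slower detour beats the raw racing line because its collectible bonus outweighs the extra second of driving, which is exactly the trade-off Spline Racer evaluates across its candidate splines.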
MIT is home to cutting-edge research that benefits MIT Driverless whenever the checkered flag is waved. Roboticist and Professor Daniela Rus is just one of their trusted advisors. Rus is director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the associate director of MIT’s Quest for Intelligence Core, and director of the Toyota-CSAIL Joint Research Center, which focuses on the advancement of AI research and its applications to intelligent vehicles.
Sertac Karaman of the MIT Department of Aeronautics and Astronautics also serves as an advisor to the team. In addition to pioneering research in controls and robotics theory, Karaman is a co-founder of Optimus Ride, the leading self-driving vehicle technology company developing systems for geo-fenced environments.
“One of the competitive advantages of our team is that by virtue of being at MIT, we have firsthand access to a rich concentration of research expertise that we can apply to our own development,” says team captain Jorge Castillo, a graduate student in the MIT Sloan School of Management.
Consider the connection between the Han Lab at MIT and MIT Driverless. Assistant professor of electrical engineering and computer science Song Han’s work on efficient computing, particularly his innovative algorithms and hardware systems based on his own deep compression technique for machine learning, is a boon for an autonomous racing team looking to make their algorithms run faster.
“Dr. Han is a big fan of MIT Driverless, and he’s been extremely helpful,” says Castillo. “We can only put a limited amount of computing in our car,” he explains, “so the faster we can make our algorithms run, the better we will be able to make them and the faster the car will be able to go safely.”
Think of MIT Driverless as an essential pit stop in the autonomous knowledge pipeline that flows between the Institute and industry. Their mission is to become the hub of applied autonomy at MIT, leveraging the research done on campus to help their engineers develop a broad skill set that is applicable beyond just the specific use case of autonomous driving.
“There are labs at MIT working to solve some of the most complex problems in the world,” says Castillo. “At MIT Driverless, we believe it’s vital to have a place that functions as a proving ground for this research while training the engineers that will help re-imagine the future of the tech industry when it comes to autonomous systems and robotics.”
And the MIT Driverless approach to autonomous vehicle racing, particularly as it pertains to architecture and data processing, is similar to the way industry addresses the self-driving problem for streets and highways — which is just one reason why the team has no shortage of industry sponsors who want to get involved. “We have a tight integration between the components that make the car run,” says Stathas. “From a systems perspective, we have well-defined sub-systems that our industry partners appreciate because it aligns with real-world autonomous vehicle development.”
In addition to gaining access to some of the most brilliant young talent in the world, industry partners can boost brand awareness while participating in the emerging sport of autonomous racing. “We’ve formed tight bonds with industry-leading companies,” says Castillo. “Very often, our sponsors are our biggest fans. They also place their trust in us and want to recruit from us because our engineers are well equipped to perform in the real world.”