More stories

  • Slender robotic finger senses buried items

    Over the years, robots have gotten quite good at identifying objects — as long as they’re out in the open.

    Discerning buried items in granular material like sand is a taller order. To do that, a robot would need fingers that were slender enough to penetrate the sand, mobile enough to wriggle free when sand grains jam, and sensitive enough to feel the detailed shape of the buried object.

    MIT researchers have now designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers say the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs.

    The research will be presented at the next International Symposium on Experimental Robotics. The study’s lead author is Radhen Patel, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Co-authors include CSAIL PhD student Branden Romero, Harvard University PhD student Nancy Ouyang, and Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in CSAIL and the Department of Brain and Cognitive Sciences.

    Seeking to identify objects buried in granular material — sand, gravel, and other types of loosely packed particles — isn’t a brand-new quest. Previously, researchers have used technologies that sense the subterranean from above, such as ground-penetrating radar or ultrasonic vibrations. But these techniques provide only a hazy view of submerged objects. They might struggle to differentiate rock from bone, for example.

    “So, the idea is to make a finger that has a good sense of touch and can distinguish between the various things it’s feeling,” says Adelson. “That would be helpful if you’re trying to find and disable buried bombs, for example.” Making that idea a reality meant clearing a number of hurdles.

    The team’s first challenge was a matter of form: The robotic finger had to be slender and sharp-tipped.

    In prior work, the researchers had used a tactile sensor called GelSight. The sensor consisted of a clear gel covered with a reflective membrane that deformed when objects pressed against it. Behind the membrane were three colors of LED lights and a camera. The lights shone through the gel and onto the membrane, while the camera collected the membrane’s pattern of reflection. Computer vision algorithms then extracted the 3D shape of the contact area where the soft finger touched the object. The contraption provided an excellent sense of artificial touch, but it was inconveniently bulky.
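
    The reconstruction step resembles classic photometric stereo: with lights of known directions, each pixel’s intensities constrain the local surface orientation. The sketch below is a minimal illustration of that idea under a Lambertian assumption — the light directions and function names are ours, not the actual GelSight pipeline.

    ```python
    import numpy as np

    # Minimal photometric-stereo sketch (illustrative; not the actual GelSight
    # pipeline). Under a Lambertian model, each pixel's intensity from light k
    # is albedo * dot(light_direction_k, surface_normal).

    # Assumed unit direction vectors for the three colored lights.
    L = np.array([
        [ 1.0,   0.0,  1.0],
        [-0.5,  0.87,  1.0],
        [-0.5, -0.87,  1.0],
    ])
    L = L / np.linalg.norm(L, axis=1, keepdims=True)

    def normals_from_rgb(image):
        """Recover per-pixel surface normals from an H x W x 3 float image."""
        h, w, _ = image.shape
        intensities = image.reshape(-1, 3).T        # 3 x N, one column per pixel
        g = np.linalg.solve(L, intensities)         # solve L @ g = I per pixel
        albedo = np.linalg.norm(g, axis=0) + 1e-8   # avoid division by zero
        normals = (g / albedo).T.reshape(h, w, 3)
        return normals, albedo.reshape(h, w)

    # The 3D contact shape then follows by integrating the normal field,
    # e.g., with a Poisson solver.
    ```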

    For the Digger Finger, the researchers slimmed down their GelSight sensor in two main ways. First, they changed the shape to be a slender cylinder with a beveled tip. Next, they ditched two-thirds of the LED lights, using a combination of blue LEDs and colored fluorescent paint. “That saved a lot of complexity and space,” says Ouyang. “That’s how we were able to get it into such a compact form.” The final device’s tactile-sensing membrane measured about 2 square centimeters, similar to the tip of a finger.

    With size sorted out, the researchers turned their attention to motion, mounting the finger on a robot arm and digging through fine-grained sand and coarse-grained rice. Granular media have a tendency to jam when numerous particles become locked in place, which makes them difficult to penetrate. So, the team added vibration to the Digger Finger’s capabilities and put it through a battery of tests.

    “We wanted to see how mechanical vibrations aid in digging deeper and getting through jams,” says Patel. “We ran the vibrating motor at different operating voltages, which changes the amplitude and frequency of the vibrations.” They found that rapid vibrations helped “fluidize” the media, clearing jams and allowing for deeper burrowing — though this fluidizing effect was harder to achieve in sand than in rice.

    They also tested various twisting motions in both the rice and sand. Sometimes, grains of each type of media would get stuck between the Digger Finger’s tactile membrane and the buried object it was trying to sense. When this happened with rice, the trapped grains were large enough to completely obscure the shape of the object, though the occlusion could usually be cleared with a little robotic wiggling. Trapped sand was harder to clear, though the grains’ small size meant the Digger Finger could still sense the general contours of the target object.

    Patel says that operators will have to adjust the Digger Finger’s motion pattern for different settings “depending on the type of media and on the size and shape of the grains.” The team plans to keep exploring new motions to optimize the Digger Finger’s ability to navigate various media.

    Adelson says the Digger Finger is part of a program extending the domains in which robotic touch can be used. Humans use their fingers amidst complex environments, whether fishing for a key in a pants pocket or feeling for a tumor during surgery. “As we get better at artificial touch, we want to be able to use it in situations when you’re surrounded by all kinds of distracting information,” says Adelson. “We want to be able to distinguish between the stuff that’s important and the stuff that’s not.”

    Funding for this research was provided, in part, by the Toyota Research Institute through the Toyota-CSAIL Joint Research Center; the Office of Naval Research; and the Norwegian Research Council.

  • Twelve from MIT awarded 2021 Fulbright Fellowships

    Twelve MIT student affiliates have won fellowships for the Fulbright 2021-22 grant year. Their host country destinations include Brazil, Iceland, India, the Netherlands, New Zealand, Norway, South Korea, Spain, and Taiwan, where they will conduct research, earn a graduate degree, or teach English.

    Sponsored by the U.S. Department of State, the Fulbright U.S. Student Program offers opportunities for American student scholars in over 160 countries. Last fall, Fulbright received a record number of applications, making this the most competitive cycle in the 75-year history of the program.

    Jenny Chan is a senior studying mechanical engineering. Growing up in Philadelphia as the child of Vietnamese and Cambodian immigrants gave her an appreciation for how education could be used to uplift others. This led her to join many activities that continued to fuel her passion for education, including CodeIt, Global Teaching Labs, Full STEAM Ahead, and DynaMIT. At MIT, Chan also enjoys holding Friday night events with SaveTFP, sailing on the Charles River, and dancing as a member of DanceTroupe. Her Fulbright grant will take her to Taiwan, where she will serve as an English teaching assistant.

    Gretchen Eggers ’20 graduated with double majors in brain and cognitive sciences and computer science. As a Fulbright student in Brazil, Eggers will head to the Arts and Artificial Intelligence group at the University of São Paulo to research graffiti, street art, and the design of creative artificial intelligence. With a lifelong passion for painting and the arts, Eggers is excited to spend time with and learn about mural painting from local artists in São Paulo. Upon completing her Fulbright, Eggers plans to pursue a PhD in human-computer interaction.

    Miki Hansen is a senior majoring in mechanical engineering. As the winner of the Delft University of Technology’s Industrial Design Engineering Award, she will pursue an MS in integrated product design at TU Delft in the Netherlands. In tandem with her studies, she hopes to conduct research into sustainable product design for a circular economy. At MIT, Hansen was involved in Design for America, Pi Tau Sigma (MechE Honor Society), DanceTroupe, the MissBehavior dance team, and Alpha Chi Omega. After completing Fulbright, Hansen plans on working as a product designer focused on sustainable materials and packaging.

    Olivia Wynne Houck is a doctoral student in the History, Theory, and Criticism of Architecture program. She focuses on urban planning in the 20th century, with an interest in the intersections of transportation, economic, and diplomatic policies in Iceland, the United States, and Sweden. She also conducts research on infrastructure in the Arctic. As a Fulbright National Science Foundation Arctic Research Award recipient, Houck will be hosted by the political science department at the University of Iceland, where she will pursue archival research on Route 1, the ring road that encircles Iceland. Houck has also received a fellowship from the American-Scandinavian Foundation. 

    Laura Huang is a senior majoring in mechanical engineering. At the National Taiwan University of Science and Technology, Huang will combine engineering and art to develop an assistive calligraphy robot to better understand human-computer interaction. At MIT, she has done research with the Human Computer Interaction Engineering group in the Computer Science and Artificial Intelligence Laboratory, and has helped run assistive technology workshops in India and Saudi Arabia. Outside of research, Huang creates art, plays with the women’s volleyball club, and leads STEM educational outreach through MIT CodeIt and Global Teaching Labs. While in Taiwan, she hopes to continue STEM outreach, explore the culinary scene, and learn calligraphy.

    Teis Jorgensen graduates in June with an MS from the Integrated Design and Management program. He is a designer, researcher, and behavioral scientist with seven years’ experience designing products and services with a social mission. His passion is designing games that inspire and challenge players to be the best version of themselves. For his Fulbright research grant in Kerala, India, Teis will interview women about their challenges balancing home and professional responsibilities. His goal is to use these interviews as the inspiration for the design of a board game that shares their stories and ultimately helps remove barriers to female employment.

    Meghana Kamineni will graduate this spring with a major in computer science and engineering and a minor in biology. At the University of Oslo in Norway, Kamineni will implement statistical models to understand and predict the impact of vaccinations and other interventions on the spread of Covid-19. At MIT, she pursued interests in computational research for health care through work on the bacterial infection C. difficile in the laboratory of Professor John Guttag. Outside of research, she has been involved with STEM educational outreach for middle school students through dynaMIT and MIT CodeIt, and hopes to continue outreach in Norway. After Fulbright, Kamineni plans to attend medical school.

    Andrea Shinyoung Kim will graduate in June with an MS in comparative media studies. Her master’s thesis looks at the relation between digital avatars and personhood in social virtual reality, advised by D. Fox Harrell. Her Fulbright research in South Korea will investigate how virtual reality can facilitate cross-cultural learning and live performance art. Kim will observe Korean mask dances and their craft to better inform the design of online virtual worlds. She will collaborate with her hosts at the Seoul Arts Institute and CultureHub. After Fulbright, she plans to pursue a PhD to further explore her interdisciplinary interests and arts praxis.

    Kevin Lujan Lee is a PhD candidate in the Department of Urban Studies and Planning. In Aotearoa/New Zealand, he will study the transnational processes shaping how low-wage Pacific Islander workers navigate the institutions of labor market regulation. This will comprise one-half of his broader dissertation project — a comparative study of Indigenous Pacific Islanders and low-wage work in 21st-century empires. His research is made possible by activists in the U.S. immigrant labor movement and the global LANDBACK movement, who envision a world beyond labor precarity and Indigenous dispossession. Lee hopes to pursue an academic career to support the work of these movements.

    Anjali Nambrath is a senior double majoring in physics and mathematics. She has worked on projects related to nuclear structure, neutrino physics, and dark matter detection at MIT and at two national labs. At MIT, she was president of the Society for Physics Students, a member of the MIT Shakespeare Ensemble, an organizer of HackMIT, and a teacher for the MIT Educational Studies Program. For her Fulbright grant to India, Nambrath will be based at the Tata Institute for Fundamental Research in Mumbai, where she will work on models of neutrino production and interaction in supernovae. After Fulbright, Nambrath will begin graduate school in physics at the University of California at Berkeley. 

    Abby Stein will graduate in June with a double major in physics and electrical engineering. At MIT, she researched communication theory in the Research Laboratory of Electronics, and optical network hardware at Lincoln Laboratory. Stein discovered an interest in international research and education through her MISTI experience in Chile, where she studied optics for astronomy, and through teaching engineering workshops in Israel with MIT’s Global Teaching Labs. For her Fulbright, Stein will conduct research on quantum optical satellite networks at the Institute of Photonic Sciences in Barcelona, Spain. After completing Fulbright, Stein will head to Stanford University to pursue a PhD in applied physics. 

    Tony Terrasa is a senior majoring in mechanical engineering and music. As a Fulbright English teaching assistant in Spain, he will be teaching in Galicia. Previously, Terrasa taught English, math, and physics to secondary school students in Lübeck, Germany as part of the MIT Global Teaching Labs program. He also taught for three years in the English as a Second Language Program for MIT Facilities Department employees. An MIT Emerson Fellow of Jazz Saxophone, he looks forward to listening to and learning about Galician music traditions while sharing some of his own.

    MIT students and recent alumni interested in applying to the Fulbright U.S. Student Program should contact Julia Mongo in the Office of Distinguished Fellowships at MIT Career Advising and Professional Development. Students are also supported in the process by the Presidential Committee on Distinguished Fellowships.

  • There’s a symphony in the antibody protein the body makes to neutralize the coronavirus

    The pandemic reached a new milestone this spring with the rollout of Covid-19 vaccines. MIT Professor Markus Buehler marked the occasion by writing “Protein Antibody in E Minor,” an orchestral piece performed last month by South Korea’s Lindenbaum Festival Orchestra. The room was empty, but the message was clear.

    “It’s a hopeful piece as we enter this new phase in the pandemic,” says Buehler, the McAfee Professor of Engineering at MIT, and also a composer of experimental music.

    “This is the beginning of a musical healing project,” adds Hyung Joon Won, a Seoul-based violinist who initiated the collaboration.

    “Protein Antibody in E Minor” is the sequel to “Viral Counterpoint of the Spike Protein,” a piece Buehler wrote last spring during the first wave of coronavirus infections. Picked up by the media, “Viral Counterpoint” went global, like the virus itself, reaching Won, who at the time was performing for patients hospitalized with Covid-19. Won became the first in a series of artists to approach Buehler about collaborating.

    At Won’s request, Buehler adapted “Viral Counterpoint” for the violin. This spring, the two musicians teamed up again, with Buehler translating the coronavirus-attacking antibody protein into a score for a 10-piece orchestra.

    The two pieces are as different as the proteins they are based on. “Protein Antibody” is harmonious and playful; “Viral Counterpoint” is foreboding, even sinister. “Protein Antibody,” which is based on the part of the protein that attaches to SARS-CoV-2, runs for five minutes; “Viral Counterpoint,” which represents the virus’s entire spike protein, meanders for 50.

    The antibody protein’s straightforward shape lent itself to a classical composition, says Buehler. The intricate folds of the spike protein, by contrast, required a more complex representation.

    Both pieces use a theory that Buehler devised for translating protein structures into musical scores. Both proteins — antibody and pathogen — are built from the same 20 amino acids, which can be expressed as 20 unique vibrational tones. Proteins, like other molecules, vibrate at different frequencies, a phenomenon Buehler has used to “see” the virus and its variants, capturing their complex entanglements in a musical score.
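
    As a toy illustration of the amino-acid-to-tone idea, one can assign each of the 20 standard amino acids its own pitch and play a sequence as a melody. The mapping below is invented for illustration and is far cruder than Buehler’s method, which also encodes vibrational spectra and structure.

    ```python
    # Toy sonification sketch: one invented pitch per amino acid. Buehler's
    # actual method also encodes molecular vibrations and 3D structure.

    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"        # the 20 one-letter codes

    # Assign each residue a MIDI note, spread over roughly three octaves.
    NOTE_FOR = {aa: 48 + 2 * i for i, aa in enumerate(AMINO_ACIDS)}

    def sequence_to_notes(protein_sequence):
        """Translate a protein sequence into a list of MIDI note numbers."""
        return [NOTE_FOR[aa] for aa in protein_sequence.upper() if aa in NOTE_FOR]

    # Example with a short made-up fragment:
    print(sequence_to_notes("MKTAYIAKQR"))      # -> one note per residue
    ```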

    In work with the MIT-IBM Watson AI Lab and PhD student Yiwen Hu, Buehler discovered that the proteins that stud SARS-CoV-2 vibrate less frequently and intensely than those of its more lethal cousins, SARS and MERS. He hypothesizes that the viruses use vibrations to jimmy their way into cells; the more energetic the protein, the deadlier the virus or mutation.

    Video: The molecular mechanics of the pandemic: MERS, SARS and COVID-19

    “As the coronavirus continues to mutate, this method gives us another way of studying the variants and the threat they pose,” says Buehler. “It also shows the importance of considering proteins as vibrating objects in their biological context.”

    Translating proteins into music is part of Buehler’s larger work designing new proteins by borrowing ideas from nature and harnessing the power of AI. He has trained deep-learning algorithms to both translate the structure of existing proteins into their vibrational patterns and run the operation in reverse to infer structure from vibrational patterns. With these tools, he hopes to take existing proteins and create entirely new ones targeted for specific technological or medical needs.

    The process of turning science into art is like finding another “microscope” to observe nature, says Buehler. It has also opened his work to a broader audience. More than a year after “Viral Counterpoint’s” debut, the piece has racked up more than a million downloads on SoundCloud. Some listeners were so moved they asked Buehler for permission to create their own interpretation of his work. In addition to Won, the violinist in South Korea, the piece was picked up by a ballet company in South Africa, a glass artist in Oregon, and a dance professor in Michigan, among others.

    A “suite” of homespun ballets

    The Joburg Ballet shut down last spring with the rest of South Africa. But amid the lockdown, “Viral Counterpoint” reached Iain MacDonald, artistic director of Joburg Ballet. Then, as now, the company’s dancers were quarantined at home. Putting on a traditional ballet was impossible, so MacDonald improvised; he assigned each dancer a fragment of Buehler’s music and asked them to choreograph a response. They performed from home as friends and family filmed from their cellphones. Stitched together, the segments became “The Corona Suite,” a six-minute piece that aired on YouTube last July.

    In it, the dancers twirl and pirouette on a set of unlikely stages: in the stairwell of an apartment building, on a ladder in a garden, and beside a glimmering swimming pool. With no access to costumes, the dancers made do with their own leotards, tights, and even boxer briefs, in whatever shade of red they could find. “Red became the socially-distant cohesive thread that tied the company together,” says MacDonald.

    MacDonald says the piece was intended as a public service announcement, to encourage people to stay home. It was also meant to inspire hope: that the company’s dancers would return to the stage, stay mentally and physically fit, and that everyone would pull through. “We all hoped that the virus would not cause harm to our loved ones,” he says. “And that we, as a people, could come out of this stronger and united than ever before.” 

    A Covid “sonnet” cast in glass

    Jerri Bartholomew, a microbiologist at Oregon State University, was supposed to spend her sabbatical last year at a lab in Spain. When Covid intervened, she retreated to the glass studio in her backyard. There, she focused on her other passion: making art from her research on fish parasites. She had previously worked with musicians to translate her own data into music; when she heard “Viral Counterpoint” she was moved to reinterpret Buehler’s music as glass art. 

    She found his pre-print paper describing the sonification process, digitized the figures, and transferred them to silkscreen. She then printed them on a sheet of glass, fusing and casting the images to create a series of increasingly abstract representations. After, she spent hours polishing each glass work. “It’s a lot of grinding,” she says. Her favorite piece, Covid Sonnet, shows the spike protein flowing into Buehler’s musical score. “His musical composition is an abstraction,” she says. “I hope people will be curious about why it looks and sounds the way it does. It makes the science more interesting.”

    Translating a lethal virus into movement

    Months into the pandemic, Covid’s impact on immigrants in the United States was becoming clear; Rosely Conz, a choreographer and native of Brazil, wanted to channel her anxiety into art. When she heard “Viral Counterpoint,” she knew she had a score for her ballet. She would make the virus visible, she decided, in the same way Buehler had made it audible. “I looked for aspects of the virus that could be applied to movement — its machine-like characteristics, its transfer from one performer to another, its protein spike that makes it so infectious,” she says.

    “Virus” debuted this spring at Alma College, a liberal arts school in rural Michigan where Conz teaches. On a dark stage shimmering with red light, her students leaped and glided in black pointe shoes and face masks. Their elbows and legs jabbed at the air, almost robotically, as if to channel the ugliness of the virus. Those gestures were juxtaposed with “melting movements” that Conz says embody the humanity of the dancer. The piece is literally about the virus, but also about the constraints of making art in a crisis; the dancers maintained six feet of distance throughout. “I always tell my students that in choreography we should use limitation as possibility, and that is what I tried to do,” she says.

    Back at MIT, Buehler is planning several more “Protein Antibody” performances with Won this year. In the lab, he and Hu, his PhD student, are expanding their study of the molecular vibrations of proteins to see if they might have therapeutic value. “It’s the next step in our quest to better understand the molecular mechanics of life,” he says.

  • Jeremy Kepner named SIAM Fellow

    Jeremy Kepner, a Lincoln Laboratory Fellow in the Cyber Security and Information Sciences Division and a research affiliate of the MIT Department of Mathematics, was named to the 2021 class of fellows of the Society for Industrial and Applied Mathematics (SIAM). The fellow designation honors SIAM members who have made outstanding contributions to the 17 mathematics-related research areas that SIAM promotes through its publications, conferences, and community of scientists. Kepner was recognized for “contributions to interactive parallel computing, matrix-based graph algorithms, green supercomputing, and big data.”

    Since joining Lincoln Laboratory in 1998, Kepner has worked to expand the capabilities of computing at the laboratory and throughout the computing community. He has published broadly, served on technical committees of national conferences, and contributed to regional efforts to provide access to supercomputing.

    “Jeremy has had two decades of contributing to the important field of high performance computing, including both supercomputers and embedded systems. He has also made a seminal impact on supercomputer system research. He invented a unique way to do signal processing on sparse data, critically important for parsing through social networks and leading to more efficient use of parallel computing environments,” says David Martinez, now a Lincoln Laboratory fellow and previously a division head who hired and then worked with Kepner for many years.

    At Lincoln Laboratory, Kepner originally led the U.S. Department of Defense (DoD) High Performance Embedded Computing Software Initiative, which created the Vector, Signal and Image Processing Library standard that many DoD sensor systems have utilized. In 1999, he invented the MatlabMPI software, and in 2001 he was the architect of pMatlab (Parallel Matlab Toolbox), which has been used by thousands of Lincoln Laboratory staff and by scientists and engineers worldwide. In 2011, the Parallel Vector Tile Optimizing Library (PVTOL), developed under Kepner’s direction, won an R&D 100 Award.

    “Jeremy has been a world leader in moving the state of high performance computing forward for the past two decades,” says Stephen Rejto, head of Lincoln Laboratory’s Cyber Security and Information Sciences Division. “His vision and drive have been invaluable to the laboratory’s mission.”

    Kepner led a consortium to pioneer the Massachusetts Green High Performance Computing Center, the world’s largest and, because of its use of hydropower, “greenest” open research data center, which is enabling a dramatic increase in MIT’s computing capabilities while reducing its CO2 footprint. He led the establishment of the current Lincoln Laboratory Supercomputing Center, which boasts New England’s most powerful supercomputer. In 2019, he helped found the U.S. Air Force-MIT AI Accelerator, which leverages the expertise and resources of MIT and the Air Force to advance research in artificial intelligence.

    “These individual honors are a recognition of the achievements of our entire Lincoln team to whom I am eternally indebted,” Kepner says.

    Kepner’s recent work has been in graph analytics and big data. He created a novel database management language and schema (Dynamic Distributed Dimensional Data Model, or D4M), which is widely used in both Lincoln Laboratory and government big data systems.

    His publications range across many fields — data mining, databases, high performance computing, graph algorithms, cybersecurity, visualization, cloud computing, random matrix theory, abstract algebra, and bioinformatics. Among his works are two SIAM bestselling books, “Parallel MATLAB for Multicore and Multinode Computers” and “Graph Algorithms in the Language of Linear Algebra.” In 2018, he and coauthor Hayden Jananthan published “Mathematics of Big Data” as one of the books in the MIT Lincoln Laboratory series put out by MIT Press.

    Kepner, who joined SIAM during his graduate days at Princeton University, has not only published books and articles through SIAM but also been involved with the SIAM community’s activities. He has served as vice chair of the SIAM International Conference on Data Mining; advises a SIAM student section; and enlisted SIAM’s affiliation with the High Performance Extreme (originally Embedded) Computing (HPEC) conference, in which he has had “an instrumental role in bringing together the high performance embedded computing community and which under his leadership became an IEEE conference in 2012,” according to Martinez, who founded the Lincoln Laboratory-hosted HPEC conference in 1997.

    Kepner is the first Lincoln Laboratory researcher to attain the rank of SIAM Fellow and the ninth from MIT.

  • Improving the way videos are organized

    At any given moment, many thousands of new videos are being posted to sites like YouTube, TikTok, and Instagram. An increasing number of those videos are being recorded and streamed live. But tech and media companies still struggle to understand what’s going on in all that content.

    Now MIT alumnus-founded Netra is using artificial intelligence to improve video analysis at scale. The company’s system can identify activities, objects, emotions, locations, and more to organize and provide context to videos in new ways.

    Companies are using Netra’s solution to group similar content into highlight reels or news segments, flag nudity and violence, and improve ad placement. In advertising, Netra is helping ensure videos are paired with relevant ads so brands can move away from tracking individual people, which has led to privacy concerns.

    “The industry as a whole is pivoting toward content-based advertising, or what they call affinity advertising, and away from cookie-based, pixel-based tracking, which was always sort of creepy,” Netra co-founder and CTO Shashi Kant SM ’06 says.

    Netra also believes it is improving the searchability of video content. Once videos are processed by Netra’s system, users can start a search with a keyword. From there, they can click on results to see similar content and find increasingly specific events.

    For instance, Netra’s system can process a baseball season’s worth of video and help users find all the singles. By clicking on certain plays to see more like it, they can also find all the singles that were almost outs and led the fans to boo angrily.

    “Video is by far the biggest information resource today,” Kant says. “It dwarfs text by orders of magnitude in terms of information richness and size, yet no one’s even touched it with search. It’s the whitest of white space.”

    Pursuing a vision

    Internet pioneer and MIT professor Sir Tim Berners-Lee has long worked to improve machines’ ability to make sense of data on the internet. Kant researched under Berners-Lee as a graduate student and was inspired by his vision for improving the way information is stored and used by machines.

    “The holy grail to me is a new paradigm in information retrieval,” Kant says. “I feel web search is still 1.0. Even Google is 1.0. That’s been the vision of Sir Tim Berners-Lee’s semantic web initiative and that’s what I took from that experience.”

    Kant was also a member of the winning team in the MIT $100K Entrepreneurship Competition (the MIT $50K back then). He helped write the computer code for a solution called the Active Joint Brace, which was an electromechanical orthotic device for people with disabilities.

    After graduating in 2006, Kant started a company called Cognika that used AI in its solution. AI still had a bad reputation from being overhyped, so Kant would use terms like “cognitive computing” when pitching his company to investors and customers.

    Kant started Netra in 2013 to use AI for video analysis. These days he has to deal with the opposite end of the hype spectrum, with so many startups claiming they use AI in their solution.

    Netra tries cutting through the hype with demonstrations of its system. Netra can quickly analyze videos and organize the content based on what’s going on in different clips, including scenes where people are doing similar things, expressing similar emotions, using similar products, and more. Netra’s analysis generates metadata for different scenes, but Kant says Netra’s system provides much more than keyword tagging.

    “What we work with are embeddings,” Kant explains, referring to how his system classifies content. “If there’s a scene of someone hitting a home run, there’s a certain signature to that, and we generate an embedding for that. An embedding is a sequence of numbers, or a ‘vector,’ that captures the essence of a piece of content. Tags are just human readable representations of that. So, we’ll train a model that detects all the home runs, but underneath the cover there’s a neural network, and it’s creating an embedding of that video, and that differentiates the scene in other ways from an out or a walk.”
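
    Netra’s models are proprietary, but the retrieval idea Kant describes can be sketched generically: compare clip embeddings by cosine similarity and return the nearest neighbors. The vector size and names below are illustrative assumptions, not Netra’s actual system.

    ```python
    import numpy as np

    # Generic embedding-retrieval sketch (illustrative; Netra's actual models,
    # vector sizes, and indexing are not public).

    def cosine_similarity(a, b):
        """Cosine similarity between two embedding vectors."""
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def most_similar(query_embedding, clip_embeddings, top_k=5):
        """Indices of the top_k clips closest to the query embedding --
        e.g., 'find more scenes like this home run'."""
        scores = [cosine_similarity(query_embedding, e) for e in clip_embeddings]
        return sorted(range(len(scores)), key=lambda i: -scores[i])[:top_k]

    # In practice the embeddings come from a trained neural network; random
    # vectors here just stand in for them.
    clips = [np.random.rand(512) for _ in range(1000)]
    print(most_similar(clips[0], clips))        # clip 0 is its own best match
    ```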

    By defining the relationships between different clips, Netra’s system allows customers to organize and search their content in new ways. Media companies can determine the most exciting moments of sporting events based on fans’ emotions. They can also group content by subject, location, or by whether or not clips include sensitive or disturbing content.

    Those abilities have major implications for online advertising. An advertising company representing a brand like the outdoor apparel company Patagonia could use Netra’s system to place Patagonia’s ads next to hiking content. Media companies could offer brands like Nike advertising space around clips of sponsored athletes.

    Those capabilities are helping advertisers adhere to new privacy regulations around the world that put restrictions on gathering data on individual people, especially children. Targeting certain groups of people with ads and tracking them across the web has also become controversial.

    Kant believes Netra’s AI engine is a step toward giving consumers more control over their data, an idea long championed by Berners-Lee.

    “It’s not the implementation of my CSAIL work, but I’d say the conceptual ideas I was pursuing at CSAIL come through in Netra’s solution,” Kant says.

    Transforming the way information is stored

    Netra currently counts some of the country’s largest media and advertising companies as customers. Kant believes Netra’s system could one day help anyone search through and organize the growing ocean of video content on the internet. To that end, he sees Netra’s solution continuing to evolve.

    “Search hasn’t changed much since it was invented for web 1.0,” Kant says. “Right now there’s lots of link-based search. Links are obsolete in my view. You don’t want to visit different documents. You want information from those documents aggregated into something contextual and customizable, including just the information you need.”

    Kant believes such contextualization would greatly improve the way information is organized and shared on the internet.

    “It’s about relying less and less on keywords and more and more on examples,” Kant explains. “For instance, in this video, if Shashi makes a statement, is that because he’s a crackpot or is there more to it? Imagine a system that could say, ‘This other scientist said something similar to validate that statement and this scientist responded similarly to that question.’ To me, those types of things are the future of information retrieval, and that’s my life’s passion. That’s why I came to MIT. That’s why I’ve spent one and a half decades of my life fighting this battle of AI, and that’s what I’ll continue to do.”

  • Behind Covid-19 vaccine development

    When starting a vaccine program, scientists generally have an anecdotal understanding of the disease they’re aiming to target. When Covid-19 surfaced over a year ago, there were so many unknowns about the fast-moving virus that scientists had to act quickly and rely on new methods and techniques just to begin understanding the basics of the disease.

    Scientists at Janssen Research & Development, developers of the Johnson & Johnson-Janssen Covid-19 vaccine, leveraged real-world data and, working with MIT researchers, applied artificial intelligence and machine learning to help guide the company’s research efforts into a potential vaccine.

    “Data science and machine learning can be used to augment scientific understanding of a disease,” says Najat Khan, chief data science officer and global head of strategy and operations for Janssen Research & Development. “For Covid-19, these tools became even more important because ­­­our knowledge was rather limited. There was no hypothesis at the time. We were developing an unbiased understanding of the disease based on real-world data using sophisticated AI/ML algorithms.”

    In preparing for clinical studies of Janssen’s lead vaccine candidate, Khan put out a call for collaborators on predictive modeling efforts to partner with her data science team to identify key locations to set up trial sites. Through Regina Barzilay, the MIT School of Engineering Distinguished Professor for AI and Health, faculty lead of AI for MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health, and a member of Janssen’s scientific advisory board, Khan connected with Dimitris Bertsimas, the Boeing Leaders for Global Operations Professor of Management at MIT, who had developed a leading machine learning model that tracks Covid-19 spread in communities and predicts patient outcomes, and brought him on as the primary technical partner on the project.

    DELPHI

    When the World Health Organization declared Covid-19 a pandemic in March 2020 and forced much of the world into lockdown, Bertsimas, who is also the faculty lead of entrepreneurship for the Jameel Clinic, brought his group of 25-plus doctoral and master’s students together to discuss how they could use their collective skills in machine learning and optimization to create new tools to aid the world in combating the spread of the disease.

    The group started tracking their efforts on the COVIDAnalytics platform, where their models are generating accurate real-time insight into the pandemic. One of the group’s first projects was charting the progression of Covid-19 with an epidemiological model they developed named DELPHI, which predicts state-by-state infection and mortality rates based upon each state’s policy decisions.

    DELPHI is based on the standard SEIR model, a compartmental model that simplifies the mathematical modeling of infectious diseases by dividing populations into four categories: susceptible, exposed, infectious, and recovered. The ordering of the labels is intentional, reflecting the flow between the compartments. DELPHI expands on this model with a system that looks at 11 possible states of being to account for realistic effects of the pandemic, such as comparing the length of time those who recovered from Covid-19 spent in the hospital versus those who died.

    “The model has some values that are hardwired, such as how long a person stays in the hospital, but we went deeper to account for the nonlinear change of infection rates, which we found were not constant and varied over different periods and locations,” says Bertsimas. “This gave us more modeling flexibility, which led the model to make more accurate predictions.”
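
    For readers unfamiliar with compartmental models, a minimal SEIR simulation with a time-varying infection rate looks like the sketch below. It is far simpler than DELPHI’s 11-state formulation, and the parameter values are illustrative rather than fitted.

    ```python
    import numpy as np

    # Minimal SEIR sketch with a time-varying infection rate beta(t). DELPHI's
    # 11-state model is far richer; the parameters here are illustrative only.

    def simulate_seir(beta, sigma=1/5.0, gamma=1/10.0, days=180, n=1e6, i0=100):
        """Forward-Euler integration of the S-E-I-R compartments."""
        s, e, i, r = n - i0, 0.0, float(i0), 0.0
        history = []
        for t in range(days):
            new_exposed = beta(t) * s * i / n   # susceptible -> exposed
            new_infectious = sigma * e          # exposed -> infectious
            new_recovered = gamma * i           # infectious -> recovered
            s -= new_exposed
            e += new_exposed - new_infectious
            i += new_infectious - new_recovered
            r += new_recovered
            history.append((s, e, i, r))
        return history

    # A smoothly declining beta can stand in for lockdowns and mask-wearing.
    trajectory = simulate_seir(lambda t: 0.5 / (1 + np.exp(0.05 * (t - 60))))
    print(max(step[2] for step in trajectory))  # peak number infectious
    ```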

    A key innovation of the model is capturing the behaviors of people related to measures put into place during the pandemic, such as lockdowns, mask-wearing, and social distancing, and the impact these had on infection rates.

    “By June or July, we were able to augment the model with these data. The model then became even more accurate,” says Bertsimas. “We also considered different scenarios for how various governments might respond with policy decisions, from implementing serious restrictions to no restrictions at all, and compared them to what we were seeing happening in the world. This gave us the ability to make a spectrum of predictions. One of the advantages of the DELPHI model is that it makes predictions on 120 countries and all 50 U.S. states on a daily basis.”

    A vaccine for today’s pandemic

    Being able to determine where Covid-19 is likely to spike next proved to be critical to the success of Janssen’s clinical trials, which were “event-based” — meaning that “we figure out efficacy based on how many ‘events’ are in our study population, events such as becoming sick with Covid-19,” explains Khan.

    “To run a trial like this, which is very, very large, it’s important to go to hot spots where we anticipate the disease transmission to be high so that you can accumulate those events quickly. If you can, then you can run the trial faster, bring the vaccine to market more quickly, and also, most importantly, have a very rich dataset where you can make statistically sound analysis.”

    Bertsimas assembled a core group of researchers to work with him on the project, including two doctoral students from MIT’s Operations Research Center, where he is a faculty member: Michael Li, who led implementation efforts, and Omar Skali Lami. Other members included Hamza Tazi MBAn ’20, a former master of business analytics student, and Ali Haddad, a data research scientist at Dynamic Ideas LLC.

    The MIT team began collaborating with Khan and her team last May to forecast where the next surge in cases might happen. Their goal was to identify Covid-19 hot spots where Janssen could conduct clinical trials and recruit participants who were most likely to get exposed to the virus.

    With clinical trials due to start last September, the teams had to immediately hit the ground running and make predictions four months in advance of when the trials would actually take place. “We started meeting daily with the Janssen team. I’m not exaggerating — we met on a daily basis … sometimes over the weekend, and sometimes more than once a day,” says Bertsimas.

    To understand how the virus was moving around the world, data scientists at Janssen continuously monitored and scouted data sources across the world. The team built a global surveillance dashboard that pulled in data on case numbers, hospitalizations, and mortality and testing rates at the country, state, and even county level, depending on data availability.

    The DELPHI model integrated these data, with additional information about local policies and behaviors, such as whether people were being compliant with mask-wearing, and was making daily predictions in the 300-400 range. “We were getting constant feedback from the Janssen team which helped to improve the quality of the model. The model eventually became quite central to the clinical trial process,” says Bertsimas.

    Remarkably, the vast majority of Janssen’s clinical trial sites that DELPHI predicted to be Covid-19 hot spots ultimately had extremely high numbers of cases, including in South Africa and Brazil, where new variants of the virus had surfaced by the time the trials began. According to Khan, high incidence rates typically indicate variant involvement.

    “All of the predictions the model made are publicly available, so one can go back and see how accurate the model really is. It held its own. To this day, DELPHI is one of the most accurate models the scientific community has produced,” says Bertsimas.

    “As a result of this model, we were able to have a highly data-rich package at the time of submission of our vaccine candidate,” says Khan. “We are one of the few trials that had clinical data in South Africa and Brazil. That became critical because we were able to develop a vaccine that became relevant for today’s needs, today’s world, and today’s pandemic, which consists of so many variants, unfortunately.” 

    Khan points out that the DELPHI model was further evolved with diversity in mind, taking into account biological risk factors, patient demographics, and other characteristics. “Covid-19 impacts people in different ways, so it was important to go to areas where we were able to recruit participants from different races, ethnic groups, and genders. Due to this effort, we had one of the most diverse Covid-19 trials that’s been run to date,” she says. “If you start with the right data, unbiased, and go to the right places, we can actually change a lot of the paradigms that are limiting us today.”

    In April, the MIT and Janssen R&D Data Science team were jointly recognized by the Institute for Operations Research and the Management Sciences (INFORMS) as the winner of the 2021 Innovative Applications in Analytics Award for their innovative and highly impactful work on Covid-19. Building on this success, the teams are continuing their collaboration to apply their data-driven approach and technical rigor in tackling other infectious diseases. “This was not a partnership in name only. Our teams really came together in this and continue to work together on various data science efforts across the pipeline,” says Khan. The team further appreciates the role of investigators on the ground, who contributed to site selection in combination with the model.

    “It was a very satisfying experience,” concurs Bertsimas. “I’m proud to have contributed to this effort and help the world in the fight against the pandemic.”

  • Helping students of all ages flourish in the era of artificial intelligence

    A new cross-disciplinary research initiative at MIT aims to promote the understanding and use of AI across all segments of society. The effort, called Responsible AI for Social Empowerment and Education (RAISE), will develop new teaching approaches and tools to engage learners in settings from preK-12 to the workforce.

    “People are using AI every day in our workplaces and our private lives. It’s in our apps, devices, social media, and more. It’s shaping the global economy, our institutions, and ourselves. Being digitally literate is no longer enough. People need to be AI-literate to understand the responsible use of AI and create things with it at individual, community, and societal levels,” says RAISE Director Cynthia Breazeal, a professor of media arts and sciences at MIT.

    “But right now, if you want to learn about AI to make AI-powered applications, you pretty much need to have a college degree in computer science or related topic,” Breazeal adds. “The educational barrier is still pretty high. The vision of this initiative is: AI for everyone else — with an emphasis on equity, access, and responsible empowerment.”

    Headquartered in the MIT Media Lab, RAISE is a collaboration with the MIT Schwarzman College of Computing and MIT Open Learning. The initiative will engage in research coupled with education and outreach efforts to advance new knowledge and innovative technologies to support how diverse people learn about AI as well as how AI can help to better support human learning. Through Open Learning and the Abdul Latif Jameel World Education Lab (J-WEL), RAISE will also extend its reach into a global network where equity and justice are key.

    The initiative draws on MIT’s history as both a birthplace of AI technology and a leader in AI pedagogy. “MIT already excels at undergraduate and graduate AI education,” says Breazeal, who heads the Media Lab’s Personal Robots group and is an associate director of the Media Lab. “Now we’re building on those successes. We’re saying we can take a leadership role in educational research, the science of learning, and technological innovation to broaden AI education and empower society writ large to shape our future with AI.”

    In addition to Breazeal, RAISE co-directors are Hal Abelson, professor of computer science and education; Eric Klopfer, professor and director of the Scheller Teacher Education Program; and Hae Won Park, a research scientist at the Media Lab. Other principal leaders include Professor Sanjay Sarma, vice president for open learning. RAISE draws additional participation from dozens of faculty, staff, and students across the Institute.

    “In today’s rapidly changing economic and technological landscape, a core challenge nationally and globally is to improve the effectiveness, availability, and equity of preK-12 education, community college, and workforce development. AI offers tremendous promise for new pedagogies and platforms, as well as for new content. Developing and deploying advances in computing for the public good is core to the mission of the Schwarzman College of Computing, and I’m delighted to have the College playing a role in this initiative,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing.

    The new initiative will engage in research, education, and outreach activities to advance four strategic impact areas: diversity and inclusion in AI, AI literacy in preK-12 education, AI workforce training, and AI-supported learning. Success means that the new knowledge, materials, technological innovations, and programs RAISE develops are taken up by other AI education programs across MIT and beyond, adding to their efficacy, experience, equity, and impact.

    RAISE will develop AI-augmented tools to support human learning across a variety of topics. “We’ve done a lot of work in the Media Lab around companion AI,” says Park. “Personalized learning companion AI agents such as social robots support individual students’ learning and motivation to learn. This work provides an effective and safe space for students to practice and explore topics such as early childhood literacy and language development.”

    Diversity and inclusion will be embedded throughout RAISE’s work, to help correct historic inequities in the field of AI. “We’re seeing story after story of unintended bias and inequities that are arising because of these AI systems,” says Breazeal. “So, a mission of our initiative is to educate a far more diverse and inclusive group of people in the responsible design and use of AI technologies, who will ultimately be more representative of the communities they will be developing these products and services for.”

    This spring, RAISE is piloting a K-12 outreach program called Future Makers. The program brings engaging, hands-on learning experiences about AI fundamentals and critical thinking about societal implications to teachers and students, primarily from underserved or under-resourced communities, such as schools receiving Title I services.

    To bring AI to young people within and beyond the classroom, RAISE is developing and distributing curricula, teacher guides, and student-friendly AI tools that enable anyone, even those with no programming background, to create original applications for desktop and mobile computing. “Scratch and App Inventor are already in the hands of millions of learners worldwide,” explains Abelson. “RAISE is enhancing these platforms and making powerful AI accessible to all people for increased creativity and personal expression.”

    Ethics and AI will be a central component to the initiative’s curricula and teaching tools. “Our philosophy is, have kids learn about the technical concepts right alongside the ethical design practices,” says Breazeal.  “Thinking through the societal implications can’t be an afterthought.”

    “AI is changing the way we interact with computers as consumers as well as designers and developers of technology,” Klopfer says. “It is creating a new paradigm for innovation and change. We want to make sure that all people are empowered to use this technology in constructive, creative, and beneficial ways.”

    “Connecting this initiative not only to [MIT’s schools of] engineering and computing, but also to the School of Humanities, Arts and Social Sciences recognizes the multidimensional nature of this effort,” Klopfer adds.

    Sarma says RAISE also aims to boost AI literacy in the workforce, in part by adapting some of their K-12 techniques. “Many of these tools — when made somewhat more sophisticated and more germane to the adult learner — will make a tremendous difference,” says Sarma. For example, he envisions a program to train radiology technicians in how AI programs interpret diagnostic imagery and, vitally, how they can err.

    “AI is having a truly transformative effect across broad swaths of society,” says Breazeal. “Children today are not only digital natives, they’re AI natives. And adults need to understand AI to be able to engage in a democratic dialogue around how we want these systems deployed.”

  • Helping robots collaborate to get the job done

    Sometimes, one robot isn’t enough.

    Consider a search-and-rescue mission to find a hiker lost in the woods. Rescuers might want to deploy a squad of wheeled robots to roam the forest, perhaps with the aid of drones scouring the scene from above. The benefits of a robot team are clear. But orchestrating that team is no simple matter. How to ensure the robots aren’t duplicating each other’s efforts or wasting energy on a convoluted search trajectory?

    MIT researchers have designed an algorithm to ensure the fruitful cooperation of information-gathering robot teams. Their approach relies on balancing a tradeoff between data collected and energy expended — which eliminates the chance that a robot might execute a wasteful maneuver to gain just a smidgeon of information. The researchers say this assurance is vital for robot teams’ success in complex, unpredictable environments. “Our method provides comfort, because we know it will not fail, thanks to the algorithm’s worst-case performance,” says Xiaoyi Cai, a PhD student in MIT’s Department of Aeronautics and Astronautics (AeroAstro).

    The research will be presented at the IEEE International Conference on Robotics and Automation in May. Cai is the paper’s lead author. His co-authors include Jonathan How, the R.C. Maclaurin Professor of Aeronautics and Astronautics at MIT; Brent Schlotfeldt and George J. Pappas, both of the University of Pennsylvania; and Nikolay Atanasov of the University of California at San Diego.

    Robot teams have often relied on one overarching rule for gathering information: The more the merrier. “The assumption has been that it never hurts to collect more information,” says Cai. “If there’s a certain battery life, let’s just use it all to gain as much as possible.” This objective is often executed sequentially — each robot evaluates the situation and plans its trajectory, one after another. It’s a straightforward procedure, and it generally works well when information is the sole objective. But problems arise when energy efficiency becomes a factor.

    Cai says the benefits of gathering additional information often diminish over time. For example, if you already have 99 pictures of a forest, it might not be worth sending a robot on a miles-long quest to snap the 100th. “We want to be cognizant of the tradeoff between information and energy,” says Cai. “It’s not always good to have more robots moving around. It can actually be worse when you factor in the energy cost.”

    The researchers developed a robot team planning algorithm that optimizes the balance between energy and information. The algorithm’s “objective function,” which determines the value of a robot’s proposed task, accounts for the diminishing benefits of gathering additional information and the rising energy cost. Unlike prior planning methods, it doesn’t just assign tasks to the robots sequentially. “It’s more of a collaborative effort,” says Cai. “The robots come up with the team plan themselves.”

    Cai’s method, called Distributed Local Search, is an iterative approach that improves the team’s performance by adding or removing individual robots’ trajectories from the group’s overall plan. First, each robot independently generates a set of potential trajectories it might pursue. Next, each robot proposes its trajectories to the rest of the team. Then the algorithm accepts or rejects each individual’s proposal, depending on whether it increases or decreases the team’s objective function. “We allow the robots to plan their trajectories on their own,” says Cai. “Only when they need to come up with the team plan, we let them negotiate. So, it’s a rather distributed computation.”
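
    A stripped-down version of that propose-and-accept loop might look like the following sketch; the paper’s actual objective, proposal scheme, and guarantees are more sophisticated, and all names here are illustrative.

    ```python
    import random

    # Stripped-down sketch of a distributed-local-search loop (illustrative;
    # the paper's objective, proposal scheme, and guarantees are richer).

    def objective(plan, info_gain, energy_cost):
        """Team objective: information gathered minus energy spent. info_gain
        should show diminishing returns as the joint plan grows."""
        return info_gain(plan) - energy_cost(plan)

    def distributed_local_search(robots, candidates, info_gain, energy_cost,
                                 rounds=100):
        """Robots take turns proposing to add, swap, or drop one of their own
        trajectories; the team keeps only proposals that raise the objective."""
        plan = {robot: None for robot in robots}        # None = stay idle
        for _ in range(rounds):
            robot = random.choice(robots)
            proposal = random.choice(candidates[robot] + [None])
            trial = {**plan, robot: proposal}
            if (objective(trial, info_gain, energy_cost)
                    > objective(plan, info_gain, energy_cost)):
                plan = trial                            # accept the improvement
        return plan
    ```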

    Distributed Local Search proved its mettle in computer simulations. The researchers ran their algorithm against competing ones in coordinating a simulated team of 10 robots. While Distributed Local Search took slightly more computation time, it guaranteed successful completion of the robots’ mission, in part by ensuring that no team member got mired in a wasteful expedition for minimal information. “It’s a more expensive method,” says Cai. “But we gain performance.”

    The advance could one day help robot teams solve real-world information gathering problems where energy is a finite resource, according to Geoff Hollinger, a roboticist at Oregon State University, who was not involved with the research. “These techniques are applicable where the robot team needs to trade off between sensing quality and energy expenditure. That would include aerial surveillance and ocean monitoring.”

    Cai also points to potential applications in mapping and search-and-rescue — activities that rely on efficient data collection. “Improving this underlying capability of information gathering will be quite impactful,” he says. The researchers next plan to test their algorithm on robot teams in the lab, including a mix of drones and wheeled robots.

    This research was funded in part by Boeing and the Army Research Laboratory’s Distributed and Collaborative Intelligent Systems and Technology Collaborative Research Alliance (DCIST CRA).