More stories

    New integrative computational neuroscience center established at MIT’s McGovern Institute

    With the tools of modern neuroscience, researchers can peer into the brain with unprecedented accuracy. Recording devices listen in on the electrical conversations between neurons, picking up the voices of hundreds of cells at a time. Genetic tools allow us to focus on specific types of neurons based on their molecular signatures. Microscopes zoom in to illuminate the brain’s circuitry, capturing thousands of images of elaborately branched dendrites. Functional MRIs detect changes in blood flow to map activity within a person’s brain, generating a complete picture by compiling hundreds of scans.

    This deluge of data provides insights into brain function and dynamics at different levels — molecules, cells, circuits, and behavior — but the insights remain compartmentalized in separate research silos for each level. An innovative new center at MIT’s McGovern Institute for Brain Research aims to leverage them into powerful revelations of the brain’s inner workings.

    The K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center will create advanced mathematical models and computational tools to synthesize the deluge of data across scales and advance our understanding of the brain and mental health.

    The center, funded by a $24 million donation from philanthropist Lisa Yang and led by McGovern Institute Associate Investigator Ila Fiete, will take a collaborative approach to computational neuroscience, integrating cutting-edge modeling techniques and data from MIT labs to explain brain function at every level, from the molecular to the behavioral.

    “Our goal is that sophisticated, truly integrated computational models of the brain will make it possible to identify how ‘control knobs’ such as genes, proteins, chemicals, and environment drive thoughts and behavior, and to make inroads toward urgent unmet needs in understanding and treating brain disorders,” says Fiete, who is also a brain and cognitive sciences professor at MIT.

    “Driven by technologies that generate massive amounts of data, we are entering a new era of translational neuroscience research,” says Yang, whose philanthropic investment in MIT research now exceeds $130 million. “I am confident that the multidisciplinary expertise convened by the ICoN center will revolutionize how we synthesize this data and ultimately understand the brain in health and disease.”

    Connecting the data

    It is impossible to separate the molecules in the brain from their effects on behavior — although those aspects of neuroscience have traditionally been studied independently, by researchers with vastly different expertise. The ICoN Center will eliminate the divides, bringing together neuroscientists and software engineers to deal with all types of data about the brain.

    “The center’s highly collaborative structure, which is essential for unifying multiple levels of understanding, will enable us to recruit talented young scientists eager to revolutionize the field of computational neuroscience,” says Robert Desimone, director of the McGovern Institute. “It is our hope that the ICoN Center’s unique research environment will truly demonstrate a new academic research structure that catalyzes bold, creative research.”

    To foster interdisciplinary collaboration, every postdoc and engineer at the center will work with multiple faculty mentors. In order to attract young scientists and engineers to the field of computational neuroscience, the center will also provide four graduate fellowships to MIT students each year in perpetuity. Interacting closely with three scientific cores, engineers and fellows will develop computational models and technologies for analyzing molecular data, neural circuits, and behavior, such as tools to identify patterns in neural recordings or automate the analysis of human behavior to aid psychiatric diagnoses. These technologies and models will be instrumental in synthesizing data into knowledge and understanding.

    Center priorities

    In its first five years, the ICoN Center will prioritize four areas of investigation: episodic memory and exploration, including functions like navigation and spatial memory; complex or stereotypical behavior, such as the perseverative behaviors associated with autism and obsessive-compulsive disorder; cognition and attention; and sleep. Models of complex behavior will be created in collaboration with clinicians and researchers at Children’s Hospital of Philadelphia.

    The goal, Fiete says, is to model the neuronal interactions that underlie these functions so that researchers can predict what will happen when something changes — when certain neurons become more active or when a genetic mutation is introduced, for example. When paired with experimental data from MIT labs, the center’s models will help explain not just how these circuits work, but also how they are altered by genes, the environment, aging, and disease. These focus areas encompass circuits and behaviors often affected by psychiatric disorders and neurodegeneration, and models will give researchers new opportunities to explore their origins and potential treatment strategies.

    “Lisa Yang is focused on helping the scientific community realize its goals in translational research,” says Nergis Mavalvala, dean of the School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “With her generous support, we can accelerate the pace of research by connecting the data to the delivery of tangible results.”

    How quickly do algorithms improve?

    Algorithms are sort of like a parent to a computer. They tell the computer how to make sense of information so it can, in turn, make something useful out of it.

    The more efficient the algorithm, the less work the computer has to do. For all of the technological progress in computing hardware, and the much debated lifespan of Moore’s Law, computer performance is only one side of the picture.

    Behind the scenes, a second trend is happening: algorithms are being improved, so less computing power is needed. While algorithmic efficiency may have less of a spotlight, you’d definitely notice if your trusty search engine suddenly became one-tenth as fast, or if moving through big datasets felt like wading through sludge.

    This led scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to ask: How quickly do algorithms improve?  

    Existing data on this question were largely anecdotal, consisting of case studies of particular algorithms that were assumed to be representative of the broader scope. Faced with this dearth of evidence, the team set off to crunch data from 57 textbooks and more than 1,110 research papers to trace the history of when algorithms got better. Some of the research papers directly reported how good new algorithms were; for others, the authors had to reconstruct performance from “pseudocode,” shorthand versions of the algorithm that describe its basic details.

    In total, the team looked at 113 “algorithm families,” sets of algorithms solving the same problem that had been highlighted as most important by computer science textbooks. For each of the 113, the team reconstructed its history, tracking each time a new algorithm was proposed for the problem and making special note of those that were more efficient. Spanning from the 1940s to now and varying widely in performance, the families contained an average of eight algorithms each, a couple of which improved the family’s efficiency. To share this assembled database of knowledge, the team also created Algorithm-Wiki.org.

    The scientists charted how quickly these families had improved, focusing on the most-analyzed feature of the algorithms — how fast they could guarantee to solve the problem (in computer speak: “worst-case time complexity”). What emerged was enormous variability, but also important insights on how transformative algorithmic improvement has been for computer science.

    For large computing problems, 43 percent of algorithm families had year-on-year improvements that were equal to or larger than the much-touted gains from Moore’s Law. In 14 percent of problems, the performance improvement from algorithms vastly outpaced the gains that came from improved hardware. The gains from algorithm improvement were particularly large for big-data problems, so the importance of those advancements has grown in recent decades.
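
    To make that comparison concrete, consider how such a year-on-year figure could be computed: take the operation counts of an old and a new algorithm at a fixed problem size and annualize the speedup over the years separating them. The sketch below does this for a hypothetical family (the complexities, problem size, and time span are illustrative assumptions, not figures from the paper), comparing the result against Moore’s Law’s roughly 41 percent per year.

    ```python
    # Illustrative only: annualize a hypothetical algorithmic speedup and compare
    # it to Moore's Law (~2x transistor density every 2 years, i.e. ~1.41x/year).
    import math

    def annualized_gain(ops_old, ops_new, years_apart):
        """Average per-year improvement factor between two algorithms."""
        return (ops_old / ops_new) ** (1.0 / years_apart)

    # Hypothetical family: an O(n^2) algorithm replaced 10 years later by an
    # O(n log n) one, evaluated at a big-data problem size of n = 10**9.
    n = 10**9
    old_ops = n ** 2                 # O(n^2) operation count (constants ignored)
    new_ops = n * math.log2(n)       # O(n log n) operation count

    gain = annualized_gain(old_ops, new_ops, years_apart=10)
    moore = 2 ** 0.5                 # ~1.41x per year
    print(f"algorithmic gain ~{gain:.1f}x/year vs. Moore's Law ~{moore:.2f}x/year")
    ```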

    The single biggest change that the authors observed came when an algorithm family transitioned from exponential to polynomial complexity. The amount of effort it takes to solve an exponential problem is like a person trying to guess a combination on a lock. If you only have a single 10-digit dial, the task is easy. With four dials like a bicycle lock, it’s hard enough that no one steals your bike, but still conceivable that you could try every combination. With 50, it’s almost impossible — it would take too many steps. Problems that have exponential complexity are like that for computers: As they get bigger they quickly outpace the ability of the computer to handle them. Finding a polynomial algorithm often solves that, making it possible to tackle problems in a way that no amount of hardware improvement can.
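
    The lock analogy translates directly into numbers. The short sketch below (with illustrative operation counts, not data from the study) shows why no amount of hardware catches up with exponential growth, while a polynomial-time approach stays tractable.

    ```python
    # Exponential vs. polynomial growth, echoing the combination-lock analogy:
    # each extra dial multiplies the brute-force search space by 10.

    def exponential_steps(dials, digits_per_dial=10):
        # Trying every combination: 10^dials possibilities.
        return digits_per_dial ** dials

    def polynomial_steps(n):
        # A hypothetical polynomial-time algorithm taking about n^2 steps.
        return n ** 2

    for n in (1, 4, 50):
        print(f"n={n:>2}: exponential ~{exponential_steps(n):.1e} steps, "
              f"polynomial ~{polynomial_steps(n):.1e} steps")

    # n=1 and n=4 are feasible either way; at n=50 the exponential count (1e50)
    # is beyond any hardware, while n^2 = 2,500 steps remains trivial.
    ```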

    As rumblings of Moore’s Law coming to an end rapidly permeate global conversations, the researchers say that computing users will increasingly need to turn to areas like algorithms for performance improvements. The team says the findings confirm that historically, the gains from algorithms have been enormous, so the potential is there. But if gains come from algorithms instead of hardware, they’ll look different: hardware improvement from Moore’s Law happens smoothly over time, whereas gains from algorithms come in steps that are usually large but infrequent.

    “This is the first paper to show how fast algorithms are improving across a broad range of examples,” says Neil Thompson, an MIT research scientist at CSAIL and the Sloan School of Management and senior author on the new paper. “Through our analysis, we were able to say how many more tasks could be done using the same amount of computing power after an algorithm improved. As problems increase to billions or trillions of data points, algorithmic improvement becomes substantially more important than hardware improvement. In an era where the environmental footprint of computing is increasingly worrisome, this is a way to improve businesses and other organizations without the downside.”

    Thompson wrote the paper alongside MIT visiting student Yash Sherry. The paper is published in the Proceedings of the IEEE. The work was funded by the Tides Foundation and the MIT Initiative on the Digital Economy.

    Last-mile routing research challenge awards $175,000 to three winning teams

    Routing is one of the most studied problems in operations research; even small improvements in routing efficiency can save companies money and result in energy savings and reduced environmental impacts. Now, three teams of researchers from universities around the world have received prize money totaling $175,000 for their innovative route optimization models.

    The three teams were the winners of the Amazon Last-Mile Routing Research Challenge, through which the MIT Center for Transportation & Logistics (MIT CTL) and Amazon engaged with a global community of researchers across a range of disciplines, from computer science to business operations to supply chain management, challenging them to build data-driven route optimization models leveraging massive historical route execution data.

    First announced in February, the research challenge attracted more than 2,000 participants from around the world. Two hundred twenty-nine researcher teams formed during the spring to independently develop solutions that incorporated driver know-how into route optimization models with the intent that they would outperform traditional optimization approaches. Out of the 48 teams whose models qualified for the final round of the challenge, three teams’ work stood out above the rest. Amazon provided real operational training data for the models and evaluated submissions, with technical support from MIT CTL scientists.

    In real life, drivers frequently deviate from planned and mathematically optimized route sequences. Drivers carry information about which roads are hard to navigate when traffic is bad, when and where they can easily find parking, which stops can be conveniently served together, and many other factors that existing optimization models simply don’t capture.

    Each model addressed the challenge data in a unique way. The methodological approaches chosen by the participants frequently combined traditional exact and heuristic optimization approaches with nontraditional machine learning methods. On the machine learning side, the most commonly adopted methods were different variants of artificial neural networks, as well as inverse reinforcement learning approaches.
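
    The winning models themselves are not described here, but the general flavor of blending a classical heuristic with learned, data-driven preferences can be sketched as follows. Everything in the sketch is a hypothetical stand-in: a greedy route builder whose step cost mixes travel time with a penalty supplied by a model fit to historical driver behavior.

    ```python
    # Hypothetical sketch (not any team's actual model): greedy route construction
    # whose step cost blends travel time with a learned "driver preference" penalty
    # estimated from historical route executions.

    def build_route(depot, stops, travel_time, learned_penalty, alpha=0.7):
        """Nearest-neighbor routing with a learned correction term.

        travel_time(a, b)     -> minutes between stops (classical cost)
        learned_penalty(a, b) -> penalty a trained model assigns to visiting b
                                 right after a (e.g. from inverse reinforcement
                                 learning on historical routes)
        alpha                 -> weight on the classical cost vs. the learned term
        """
        route, current, remaining = [depot], depot, set(stops)
        while remaining:
            nxt = min(remaining,
                      key=lambda s: alpha * travel_time(current, s)
                                    + (1 - alpha) * learned_penalty(current, s))
            route.append(nxt)
            remaining.remove(nxt)
            current = nxt
        return route

    # Toy usage with made-up costs and a zero (untrained) penalty model.
    times = {("D", "A"): 5, ("D", "B"): 7, ("D", "C"): 9,
             ("A", "B"): 3, ("A", "C"): 6, ("B", "C"): 2}
    travel = lambda a, b: times.get((a, b), times.get((b, a), 4))
    penalty = lambda a, b: 0.0
    print(build_route("D", ["A", "B", "C"], travel, penalty))  # ['D', 'A', 'B', 'C']
    ```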

    There were 45 submissions that reached the finalist phase, with team members hailing from 29 countries. Entrants spanned all levels of higher education, from final-year undergraduate students to retired faculty. Entries were assessed in a double-blind review process so that the judges would not know which team was behind each entry.

    The third-place prize of $25,000 was awarded to Okan Arslan and Rasit Abay. Okan is a professor at HEC Montréal, and Rasit is a doctoral student at the University of New South Wales in Australia. The runner-up prize of $50,000 was awarded to MIT’s own Xiaotong Guo, Qingyi Wang, and Baichuan Mo, all doctoral students. The top prize of $100,000 was awarded to Professor William Cook of the University of Waterloo in Canada, Professor Stephan Held of the University of Bonn in Germany, and Professor Emeritus Keld Helsgaun of Roskilde University in Denmark. All winners and contestants were congratulated during a webinar held on July 30.

    Top-performing teams may be interviewed by Amazon for research roles in the company’s Last Mile organization. MIT CTL will publish and promote short technical papers written by all finalists and might invite top-performing teams to present at MIT. Further, a team led by Matthias Winkenbach, director of the MIT Megacity Logistics Lab, will guest-edit a special issue of Transportation Science, one of the most renowned academic journals in this field, featuring academic papers on topics related to the problem tackled by the research challenge.

    Exact symbolic artificial intelligence for faster, better assessment of AI fairness

    The justice system, banks, and private companies use algorithms to make decisions that have profound impacts on people’s lives. Unfortunately, those algorithms are sometimes biased — disproportionately impacting people of color as well as individuals in lower income classes when they apply for loans or jobs, or even when courts decide what bail should be set while a person awaits trial.

    MIT researchers have developed a new artificial intelligence programming language that can assess the fairness of algorithms more exactly, and more quickly, than available alternatives.

    Their Sum-Product Probabilistic Language (SPPL) is a probabilistic programming system. Probabilistic programming is an emerging field at the intersection of programming languages and artificial intelligence that aims to make AI systems much easier to develop, with early successes in computer vision, common-sense data cleaning, and automated data modeling. Probabilistic programming languages make it much easier for programmers to define probabilistic models and carry out probabilistic inference — that is, work backward to infer probable explanations for observed data.

    “There are previous systems that can solve various fairness questions. Our system is not the first; but because our system is specialized and optimized for a certain class of models, it can deliver solutions thousands of times faster,” says Feras Saad, a PhD student in electrical engineering and computer science (EECS) and first author on a recent paper describing the work. Saad adds that the speedups are not insignificant: The system can be up to 3,000 times faster than previous approaches.

    SPPL gives fast, exact solutions to probabilistic inference questions such as “How likely is the model to recommend a loan to someone over age 40?” or “Generate 1,000 synthetic loan applicants, all under age 30, whose loans will be approved.” These inference results are based on SPPL programs that encode probabilistic models of what kinds of applicants are likely, a priori, and also how to classify them. Fairness questions that SPPL can answer include “Is there a difference between the probability of recommending a loan to an immigrant and nonimmigrant applicant with the same socioeconomic status?” or “What’s the probability of a hire, given that the candidate is qualified for the job and from an underrepresented group?”
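
    SPPL’s own syntax is not reproduced here; purely as an illustration of what an exact answer to such a fairness query involves, the toy model below enumerates a small, invented discrete loan model and computes the relevant probabilities in closed form rather than by sampling.

    ```python
    # Toy illustration (not SPPL syntax): a small discrete loan model whose
    # fairness gap can be computed exactly by enumeration, the kind of query
    # SPPL answers symbolically without sampling error.

    P_status = {"low": 0.4, "mid": 0.4, "high": 0.2}   # invented prior over status

    def p_approve(immigrant, status):
        # A hypothetical (possibly biased) decision rule, written as a probability.
        base = {"low": 0.2, "mid": 0.5, "high": 0.8}[status]
        return base - (0.1 if immigrant else 0.0)

    # Gap at equal socioeconomic status, matching the question quoted above.
    for s in P_status:
        gap = p_approve(False, s) - p_approve(True, s)
        print(f"status={s}: P(loan | nonimmigrant) - P(loan | immigrant) = {gap:.2f}")

    # Marginal gap, averaging over the prior on status.
    marginal = sum(P_status[s] * (p_approve(False, s) - p_approve(True, s))
                   for s in P_status)
    print(f"marginal gap = {marginal:.2f}")
    ```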

    SPPL is different from most probabilistic programming languages, as SPPL only allows users to write probabilistic programs for which it can automatically deliver exact probabilistic inference results. SPPL also makes it possible for users to check how fast inference will be, and therefore avoid writing slow programs. In contrast, other probabilistic programming languages such as Gen and Pyro allow users to write down probabilistic programs where the only known ways to do inference are approximate — that is, the results include errors whose nature and magnitude can be hard to characterize.

    Error from approximate probabilistic inference is tolerable in many AI applications. But it is undesirable to have inference errors corrupting results in socially impactful applications of AI, such as automated decision-making, and especially in fairness analysis.

    Jean-Baptiste Tristan, associate professor at Boston College and former research scientist at Oracle Labs, who was not involved in the new research, says, “I’ve worked on fairness analysis in academia and in real-world, large-scale industry settings. SPPL offers improved flexibility and trustworthiness over other PPLs on this challenging and important class of problems due to the expressiveness of the language, its precise and simple semantics, and the speed and soundness of the exact symbolic inference engine.”

    SPPL avoids errors by restricting to a carefully designed class of models that still includes a broad class of AI algorithms, including the decision tree classifiers that are widely used for algorithmic decision-making. SPPL works by compiling probabilistic programs into a specialized data structure called a “sum-product expression.” SPPL further builds on the emerging theme of using probabilistic circuits as a representation that enables efficient probabilistic inference. This approach extends prior work on sum-product networks to models and queries expressed via a probabilistic programming language. However, Saad notes that this approach comes with limitations: “SPPL is substantially faster for analyzing the fairness of a decision tree, for example, but it can’t analyze models like neural networks. Other systems can analyze both neural networks and decision trees, but they tend to be slower and give inexact answers.”

    “SPPL shows that exact probabilistic inference is practical, not just theoretically possible, for a broad class of probabilistic programs,” says Vikash Mansinghka, an MIT principal research scientist and senior author on the paper. “In my lab, we’ve seen symbolic inference driving speed and accuracy improvements in other inference tasks that we previously approached via approximate Monte Carlo and deep learning algorithms. We’ve also been applying SPPL to probabilistic programs learned from real-world databases, to quantify the probability of rare events, generate synthetic proxy data given constraints, and automatically screen data for probable anomalies.”

    The new SPPL probabilistic programming language was presented in June at the ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI), in a paper that Saad co-authored with MIT EECS Professor Martin Rinard and Mansinghka. SPPL is implemented in Python and is available open source.

    Lincoln Laboratory convenes top network scientists for Graph Exploitation Symposium

    As the Covid-19 pandemic has shown, we live in a richly connected world, facilitating not only the efficient spread of a virus but also of information and influence. What can we learn by analyzing these connections? This is a core question of network science, a field of research that models interactions across physical, biological, social, and information systems to solve problems.

    The 2021 Graph Exploitation Symposium (GraphEx), hosted by MIT Lincoln Laboratory, brought together top network science researchers to share the latest advances and applications in the field.

    “We explore and identify how exploitation of graph data can offer key technology enablers to solve the most pressing problems our nation faces today,” says Edward Kao, a symposium organizer and a member of the technical staff in Lincoln Laboratory’s AI Software Architectures and Algorithms Group.

    The themes of the virtual event revolved around some of the year’s most relevant issues, such as analyzing disinformation on social media, modeling the pandemic’s spread, and using graph-based machine learning models to speed drug design.

    “The special sessions on influence operations and Covid-19 at GraphEx reflect the relevance of network and graph-based analysis for understanding the phenomenology of these complicated and impactful aspects of modern-day life, and also may suggest paths forward as we learn more and more about graph manipulation,” says William Streilein, who co-chaired the event with Rajmonda Caceres, both of Lincoln Laboratory.

    Social networks

    Several presentations at the symposium focused on the role of network science in analyzing influence operations (IO), or organized attempts by state and/or non-state actors to spread disinformation narratives.  

    Lincoln Laboratory researchers have been developing tools to classify and quantify the influence of social media accounts that are likely IO accounts, such as those willfully spreading false Covid-19 treatments to vulnerable populations.

    “A cluster of IO accounts acts as an echo chamber to amplify the narrative. The vulnerable population is then engaging in these narratives,” says Erika Mackin, a researcher developing the tool, called RIO or Reconnaissance of Influence Operations.

    To classify IO accounts, Mackin and her team trained an algorithm to detect probable IO accounts in Twitter networks based on a specific hashtag or narrative. One example they studied was #MacronLeaks, a disinformation campaign targeting Emmanuel Macron during the 2017 French presidential election. The algorithm is trained to label accounts within this network as IO on the basis of several factors, such as the number of interactions with foreign news accounts, the number of links tweeted, or the number of languages used. Their model then uses a statistical approach to score an account’s level of influence in spreading the narrative within that network.
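
    The article names only a few of the account-level features; a minimal sketch of such a pipeline might look like the following, where the feature values, the training data, and the choice of a logistic-regression classifier are all illustrative assumptions rather than RIO’s actual implementation.

    ```python
    # Hypothetical sketch of an IO-account classifier in the spirit described
    # above; feature values, labels, and model choice are illustrative, not RIO.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Per-account features: [interactions with foreign news accounts,
    #                        links tweeted, languages used]
    X_train = np.array([
        [120, 300, 4],   # labeled IO account
        [  2,  15, 1],   # labeled ordinary account
        [ 90, 250, 3],
        [  5,  30, 1],
    ])
    y_train = np.array([1, 0, 1, 0])   # 1 = IO, 0 = not IO

    clf = LogisticRegression().fit(X_train, y_train)

    # Score a new account's probability of being part of the IO cluster.
    new_account = np.array([[60, 180, 2]])
    print(f"P(IO account) = {clf.predict_proba(new_account)[0, 1]:.2f}")
    ```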

    The team has found that their classifier outperforms existing detectors of IO accounts, because it can identify both bot accounts and human-operated ones. They’ve also discovered that IO accounts that pushed the 2017 French election disinformation narrative largely overlap with accounts that are influential in spreading Covid-19 pandemic disinformation today. “This suggests that these accounts will continue to transition to disinformation narratives,” Mackin says.

    Pandemic modeling

    Throughout the Covid-19 pandemic, leaders have been looking to epidemiological models, which predict how disease will spread, to make sound decisions. Alessandro Vespignani, director of the Network Science Institute at Northeastern University, has been leading Covid-19 modeling efforts in the United States, and shared a keynote on this work at the symposium.

    Besides taking into account the biological facts of the disease, such as its incubation period, Vespignani’s model is especially powerful in its inclusion of community behavior. To run realistic simulations of disease spread, he develops “synthetic populations” that are built by using publicly available, highly detailed datasets about U.S. households. “We create a population that is not real, but is statistically real, and generate a map of the interactions of those individuals,” he says. This information feeds back into the model to predict the spread of the disease. 
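
    A toy version of the idea (far cruder than Vespignani’s model, and with every number invented) is sketched below: build a small synthetic contact network out of household-like groups plus random community ties, then run a simple stochastic susceptible-infected-recovered process over it.

    ```python
    # Toy sketch of epidemic spread on a synthetic contact network (illustrative
    # only; real models are built from detailed census-derived populations).
    import random

    random.seed(0)

    # "Synthetic population": 200 people in households of 4, plus random
    # community contacts between households.
    people = list(range(200))
    contacts = {p: set() for p in people}
    for h in range(0, 200, 4):                       # household edges
        group = people[h:h + 4]
        for a in group:
            contacts[a].update(x for x in group if x != a)
    for _ in range(300):                             # community edges
        a, b = random.sample(people, 2)
        contacts[a].add(b)
        contacts[b].add(a)

    # Simple stochastic SIR process over the network.
    state = {p: "S" for p in people}
    state[0] = "I"                                   # one initial infection
    beta, gamma = 0.08, 0.2                          # infection and recovery rates
    for day in range(60):
        infected_today = [p for p in people if state[p] == "I"]
        exposed = [q for p in infected_today for q in contacts[p]
                   if state[q] == "S" and random.random() < beta]
        for p in infected_today:                     # recoveries among current cases
            if random.random() < gamma:
                state[p] = "R"
        for q in exposed:                            # new infections start next day
            state[q] = "I"

    print("ever infected:", sum(s != "S" for s in state.values()))
    ```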

    Today, Vespignani is considering how to integrate genomic analysis of the virus into this kind of population modeling in order to understand how variants are spreading. “It’s still a work in progress that is extremely interesting,” he says, adding that this approach has been useful in modeling the dispersal of the Delta variant of SARS-CoV-2. 

    As researchers model the virus’ spread, Lucas Laird at Lincoln Laboratory is considering how network science can be used to design effective control strategies. He and his team are developing a model for customizing strategies for different geographic regions. The effort was spurred by the differences in Covid-19 spread across U.S. communities, and what the researchers found to be a gap in intervention modeling to address those differences.

    As examples, they applied their planning algorithm to three counties in Florida, Massachusetts, and California. Taking into account the characteristics of a specific geographic center, such as the number of susceptible individuals and the number of infections there, their planner institutes different strategies in those communities throughout the outbreak duration.

    “Our approach eradicates disease in 100 days, but it is also able to do it with much more targeted interventions than any of the global interventions. In other words, you don’t have to shut down a full country,” Laird says. He adds that their planner offers a “sandbox environment” for exploring intervention strategies in the future.

    Machine learning with graphs

    Graph-based machine learning is receiving increasing attention for its potential to “learn” the complex relationships within graph-structured data, and thus extract new insights or predictions about those relationships. This interest has given rise to a new class of algorithms called graph neural networks. Today, graph neural networks are being applied in areas such as drug discovery and material design, with promising results.

    “We can now apply deep learning much more broadly, not only to medical images and biological sequences. This creates new opportunities in data-rich biology and medicine,” says Marinka Zitnik, an assistant professor at Harvard University who presented her research at GraphEx.

    Zitnik’s research focuses on the rich networks of interactions between proteins, drugs, disease, and patients, at the scale of billions of interactions. One application of this research is discovering drugs to treat diseases with no or few approved drug treatments, such as for Covid-19. In April, Zitnik’s team published a paper on their research that used graph neural networks to rank 6,340 drugs for their expected efficacy against SARS-CoV-2, identifying four that could be repurposed to treat Covid-19.

    At Lincoln Laboratory, researchers are similarly applying graph neural networks to the challenge of designing advanced materials, such as those that can withstand extreme radiation or capture carbon dioxide. Like the process of designing drugs, the trial-and-error approach to materials design is time-consuming and costly. The laboratory’s team is developing graph neural networks that can learn relationships between a material’s crystalline structure and its properties. This network can then be used to predict a variety of properties from any new crystal structure, greatly speeding up the process of screening materials with desired properties for specific applications.
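
    The laboratory’s networks are not specified in this article; as a rough illustration of the general mechanism, the sketch below implements a bare-bones message-passing step in NumPy, in which each atom aggregates its bonded neighbors’ features and a pooled embedding is mapped to a single predicted property. The architecture, sizes, and random weights are all assumptions for the example.

    ```python
    # Minimal message-passing sketch (illustrative; not the laboratory's model):
    # each atom aggregates its neighbors' feature vectors, and a pooled graph
    # embedding is mapped to a scalar property prediction.
    import numpy as np

    rng = np.random.default_rng(0)

    def message_passing_layer(node_feats, adjacency, weight):
        # Aggregate neighbor features, transform, and apply a ReLU nonlinearity.
        aggregated = adjacency @ node_feats              # sum over bonded neighbors
        return np.maximum(0.0, (node_feats + aggregated) @ weight)

    def predict_property(node_feats, adjacency, w1, w2, readout):
        h = message_passing_layer(node_feats, adjacency, w1)
        h = message_passing_layer(h, adjacency, w2)
        graph_embedding = h.mean(axis=0)                 # pool atoms -> crystal
        return float(graph_embedding @ readout)          # scalar property prediction

    # Toy "crystal": 5 atoms with 8-dimensional features and a symmetric bond graph.
    x = rng.normal(size=(5, 8))
    adj = np.array([[0, 1, 1, 0, 0],
                    [1, 0, 1, 0, 0],
                    [1, 1, 0, 1, 0],
                    [0, 0, 1, 0, 1],
                    [0, 0, 0, 1, 0]], dtype=float)
    w1, w2 = rng.normal(size=(8, 8)), rng.normal(size=(8, 8))
    readout = rng.normal(size=8)
    print("predicted property:", predict_property(x, adj, w1, w2, readout))
    ```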

    “Graph representation learning has emerged as a rich and thriving research area for incorporating inductive bias and structured priors during the machine learning process, with broad applications such as drug design, accelerated scientific discovery, and personalized recommendation systems,” Caceres says. 

    A vibrant community

    Lincoln Laboratory has hosted the GraphEx Symposium annually since 2010, with the exception of last year’s cancellation due to Covid-19. “One key takeaway is that despite the postponement from last year and the need to be virtual, the GraphEx community is as vibrant and active as it’s ever been,” Streilein says. “Network-based analysis continues to expand its reach and is applied to ever-more important areas of science, society, and defense with increasing impact.”

    In addition to those from Lincoln Laboratory, technical committee members and co-chairs of the GraphEx Symposium included researchers from Harvard University, Arizona State University, Stanford University, Smith College, Duke University, the U.S. Department of Defense, and Sandia National Laboratories.

    Contact-aware robot design

    Adequate biomimicry in robotics necessitates a delicate balance between design and control, an integral part of making our machines more like us. Advanced dexterity in humans is wrapped up in a long evolutionary tale of how our fists of fury evolved to accomplish complex tasks. With machines, designing a new robotic manipulator could mean long, manual iteration cycles of designing, fabricating, and evaluating, guided by human intuition.

    Most robotic hands are designed for general purposes, as it’s very tedious to make task-specific hands. Existing methods battle trade-offs between the complexity of designs critical for contact-rich tasks and the practical constraints of manufacturing and contact handling.

    This led researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to create a new method to computationally optimize the shape and control of a robotic manipulator for a specific task. Their system uses software to manipulate the design, simulate the robot doing a task, and then provide an optimization score to assess the design and control. 

    Such task-driven manipulator optimization has potential for a wide range of applications in manufacturing and warehouse robot systems, where each task needs to be performed repeatedly, but different manipulators would be suitable for individual tasks. 

    Video: A new method to represent robotic manipulators helps optimize complex and organic shapes for future machines.

    Seeking to test the functionality of the system, the team first created a single robotic finger design to flip over a box on the ground. The fingertip structure, which looked something like Captain Hook’s left hand, was automatically optimized by an algorithm to hook onto the box’s back surface and flip it. They also developed a model for an assembly task, where a two-finger design put a small cube into a larger, movable mount. Since the fingers were two different lengths, they could reach two objects of different sizes, and the larger and flatter surfaces of the fingers helped stably push the object. 

    Traditionally, this joint optimization process consists of using simple, more primitive shapes to approximate each component of a robot design. A three-segment robotic finger, for example, would likely be approximated by three connected cylinders, with the algorithm optimizing each cylinder’s length and radius to achieve the desired design and shape. While this simplifies the optimization problem, oversimplifying the shape is limiting for more complex designs, and ultimately for more complex tasks.

    To create more involved manipulators, the team’s method used a technique called “cage-based deformation,” which essentially lets the user change or deform the geometry of a shape in real-time.

    Using the software, you’d put something that looks like a cage around the robotic finger, for example. The algorithm can automatically change the cage dimensions to make more sophisticated, natural shapes. The different variations of designs still keep their integrity, so they can be easily fabricated.
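
    In broad strokes, cage-based deformation expresses every point of the shape as a fixed weighted combination of cage vertices, so moving the cage moves the shape smoothly and keeps it fabricable. The sketch below uses a simple 2D rectangular cage with bilinear weights; the paper’s cages and coordinate scheme are more general, so treat this as an assumption-laden illustration rather than the authors’ formulation.

    ```python
    # Simplified cage-based deformation sketch (2D, bilinear weights); the
    # paper's actual cages and coordinates are more general than this example.
    import numpy as np

    def bilinear_weights(p, cage):
        """Weights of point p w.r.t. a 4-vertex rectangular cage
        ordered [bottom-left, bottom-right, top-right, top-left]."""
        (x0, y0), (x1, _), (_, y1) = cage[0], cage[1], cage[2]
        u = (p[0] - x0) / (x1 - x0)
        v = (p[1] - y0) / (y1 - y0)
        return np.array([(1 - u) * (1 - v), u * (1 - v), u * v, (1 - u) * v])

    # Original rectangular cage around a "finger" cross-section, plus shape points.
    cage = np.array([[0., 0.], [2., 0.], [2., 1.], [0., 1.]])
    shape_pts = np.array([[0.5, 0.5], [1.0, 0.25], [1.5, 0.75]])

    # Weights are computed once, against the undeformed cage.
    W = np.array([bilinear_weights(p, cage) for p in shape_pts])

    # The optimizer moves cage vertices; the shape follows automatically.
    deformed_cage = cage + np.array([[0., 0.], [0.3, 0.], [0.5, 0.4], [0., 0.2]])
    deformed_pts = W @ deformed_cage
    print(deformed_pts)
    ```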

    The team also developed a simulator that runs the manipulator design and its controller on a task and then provides a performance score.

    “Using these simulation tools, we don’t need to evaluate the design by manufacturing and testing it in the real world,” says Jie Xu, MIT PhD student and lead author on a new paper about the research. “In contrast to reinforcement learning algorithms that are popular for manipulation, but are data-inefficient, the proposed cage-based representation and the simulator allows for the use of powerful gradient-based methods. We not only find better solutions, but also find them faster. As a result we can quickly score the design, thus significantly shortening the design cycle.”
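
    As a stand-in for that pipeline, the toy loop below shows the overall shape of gradient-based design optimization: a smooth “simulate and score” function of a design vector, improved by gradient ascent. Everything here is a simplified assumption, and finite differences replace the analytic gradients a differentiable simulator would supply.

    ```python
    # Toy stand-in for the design loop (not the paper's simulator): a design
    # vector is "simulated", scored, and improved by gradient ascent. Finite
    # differences replace the gradients a differentiable simulator would give.
    import numpy as np

    def simulate_and_score(design):
        # Hypothetical smooth score for a (length, curvature) pair; higher is better.
        target = np.array([1.2, 0.4])
        return -np.sum((design - target) ** 2)

    def numerical_grad(f, x, eps=1e-5):
        g = np.zeros_like(x)
        for i in range(len(x)):
            d = np.zeros_like(x)
            d[i] = eps
            g[i] = (f(x + d) - f(x - d)) / (2 * eps)
        return g

    design = np.array([0.5, 0.1])                    # initial finger parameters
    for step in range(200):
        design += 0.05 * numerical_grad(simulate_and_score, design)   # ascend score

    print("optimized design:", design, "score:", simulate_and_score(design))
    ```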

    In the future, the team plans to extend the software to optimize the manipulators concurrently for multiple tasks.

    Xu wrote the paper alongside MIT PhD student Tao Chen, MIT graduate student Lara Zlokapa, MIT research scientist Michael Foshey, MIT Professor Wojciech Matusik, Texas A&M University Assistant Professor Shinjiro Sueda, and MIT Professor Pulkit Agrawal. They presented the paper virtually at the 2021 Robotics: Science and Systems conference last week. The work is supported by the Toyota Research Institute.