More stories

  • Driving on the cutting edge of autonomous vehicle tech

    In October, a modified Dallara IL-15 Indy Lights race car programmed by MIT Driverless will hit the famed Indianapolis Motor Speedway at speeds of up to 120 miles per hour. The Indy Autonomous Challenge (IAC) is the world’s first head-to-head, high-speed autonomous race. It offers MIT Driverless a chance to grab a piece of the $1.5 million purse while outmaneuvering fellow university innovators on what is arguably the most iconic racecourse.
    But the IAC has implications beyond the track. Stakeholders for the event include Sebastian Thrun, a former winner of the DARPA Grand Challenge for autonomous vehicles, and Reilly Brennan, a lecturer at Stanford University’s Center for Automotive Research and a partner at Trucks Venture Capital. The hosts are well aware that, much like the DARPA Grand Challenge, the IAC has the potential to catalyze a new wave of innovation in the private sector.
    Formed in 2018 and hosted by the Edgerton Center at MIT, MIT Driverless comprises 50 highly motivated engineers with diverse skill sets. The team is intent on learning by doing, pushing the boundaries of the autonomous driving field. “There is so much strategy involved in multiagent autonomous racing, from reinforcement learning to AI and game theory,” says systems architecture lead and chief engineer Nick Stathas, a graduate student in electrical engineering and computer science (EECS). “What excites us the most is coming up with our own approaches to problems in autonomous driving — we’re looking to define state-of-the-art solutions.”


    In the lead-up to the big day, the team has been testing their algorithms at hackathons and competing in a championship series called RoboRace. The series features 12 races hosted over six events covered by livestream. In this format, MIT Driverless and their competitors program and race a sleek electric vehicle dubbed the DEVBot 2.0. Reminiscent of a Tesla Roadster, the DEVBot was designed specifically to explore the relationship between human and machine.
    The twist is that RoboRace blends the physical world with a virtual world dubbed the Metaverse. Teams must traverse the track while interacting with an augmented reality replete with virtual obstacles that raise lap times and collectibles that lower them. “Think of it as real-life racing meets Mario Kart,” says Yueyang “Kylie” Ying ’19, a graduate student in EECS who works in the Path Planning division at MIT Driverless.
    For this challenge, Ying and her teammates have developed a unique planning algorithm they call Spline Racer, which determines if and when their vehicle needs to deviate from the most expedient course around the track to avoid obstacles or collect rewards. “Spline Racer essentially computes potential paths and then chooses the best one to take based on total time to negotiate the path and total cost or reward from bumping into obstacles or collectibles along that path,” explains Ying.
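    Ying's description suggests a simple cost model: score each candidate path by its traversal time, add penalties for virtual obstacles it hits, subtract rewards for collectibles it gathers, and pick the minimum. Below is a minimal sketch in Python; all names, fields, and numbers are hypothetical illustrations, not the team's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class CandidatePath:
    """A hypothetical candidate racing line (illustrative only)."""
    traversal_time: float  # seconds to negotiate this line
    obstacle_penalties: list = field(default_factory=list)   # time added per obstacle hit
    collectible_bonuses: list = field(default_factory=list)  # time removed per collectible

    def score(self) -> float:
        # Effective lap time: base time, plus obstacle costs, minus collectible rewards
        return (self.traversal_time
                + sum(self.obstacle_penalties)
                - sum(self.collectible_bonuses))

def choose_path(candidates):
    """Pick the candidate with the lowest effective lap time."""
    return min(candidates, key=lambda p: p.score())

# The nominally fastest line clips two obstacles; the detour grabs a collectible.
racing_line = CandidatePath(traversal_time=41.0, obstacle_penalties=[2.0, 2.0])
detour = CandidatePath(traversal_time=42.5, collectible_bonuses=[3.0])
best = choose_path([racing_line, detour])
```

    In this toy example the detour wins: clipping two obstacles costs the racing line four seconds, while the detour's collectible saves three.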
    MIT is home to cutting-edge research that benefits MIT Driverless whenever the checkered flag is waved. Roboticist and Professor Daniela Rus is just one of their trusted advisors. Rus is director of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the associate director of MIT’s Quest for Intelligence Core, and director of the Toyota-CSAIL Joint Research Center, which focuses on the advancement of AI research and its applications to intelligent vehicles.
    Sertac Karaman of the MIT Department of Aeronautics and Astronautics also serves as an advisor to the team. In addition to pioneering research in controls and robotics theory, Karaman is a co-founder of Optimus Ride, the leading self-driving vehicle technology company developing systems for geo-fenced environments.
    “One of the competitive advantages of our team is that by virtue of being at MIT, we have firsthand access to a rich concentration of research expertise that we can apply to our own development,” says team captain Jorge Castillo, a graduate student in the MIT Sloan School of Management.
    Consider the connection between the Han Lab at MIT and MIT Driverless. Assistant professor of electrical engineering and computer science Song Han’s work on efficient computing, particularly his innovative algorithms and hardware systems based on his own deep compression technique for machine learning, is a boon for an autonomous racing team looking to make their algorithms run faster.
    “Dr. Han is a big fan of MIT Driverless, and he’s been extremely helpful,” says Castillo. “We can only put a limited amount of computing in our car,” he explains, “so the faster we can make our algorithms run, the better we will be able to make them and the faster the car will be able to go safely.”
    Think of MIT Driverless as an essential pit stop in the autonomous knowledge pipeline that flows between the Institute and industry. Their mission is to become the hub of applied autonomy at MIT, leveraging the research done on campus to help their engineers develop a broad skill set that is applicable beyond just the specific use case of autonomous driving.
    “There are labs at MIT working to solve some of the most complex problems in the world,” says Castillo. “At MIT Driverless, we believe it’s vital to have a place that functions as a proving ground for this research while training the engineers that will help re-imagine the future of the tech industry when it comes to autonomous systems and robotics.”
    And the MIT Driverless approach to autonomous vehicle racing, particularly as it pertains to architecture and data processing, is similar to the way industry addresses the self-driving problem for streets and highways — which is just one reason why the team has no shortage of industry sponsors who want to get involved. “We have a tight integration between the components that make the car run,” says Stathas. “From a systems perspective, we have well-defined sub-systems that our industry partners appreciate because it aligns with real-world autonomous vehicle development.”
    In addition to gaining access to some of the most brilliant young talent in the world, industry partners can boost brand awareness while participating in the emerging sport of autonomous racing. “We’ve formed tight bonds with industry-leading companies,” says Castillo. “Very often, our sponsors are our biggest fans. They also place their trust in us and want to recruit from us because our engineers are well equipped to perform in the real world.”

  • Healing with hydrogels

    In November, mechanical engineering PhD candidate Hyunwoo Yuk earned the top prize at the Collegiate Inventors Competition hosted by the National Inventors Hall of Fame. Yuk was named the graduate winner for his invention SanaHeal, a bioadhesive tape that can easily bind to tissues or organs. The tape could one day be used in place of sutures to promote healing and minimize complications after surgery.
    Yuk accepted the prize a few weeks prior to successfully defending his doctoral thesis last December. These accomplishments marked the culmination of a personal journey that has its roots in family tragedy.
    As he was completing his bachelor’s degree, Yuk received a call that his brother had been involved in a horrific accident. His brother suffered multiple traumatic injuries and required intensive care. Yuk spent the next two years by his brother’s side as he was ushered in and out of operating rooms and intensive care units.
    “I’m 10 years older than my little brother, so I view him much like a son. Seeing what he went through was pure pain,” says Yuk.
    The years spent with his brother in the hospital gave Yuk insight into the problems and limitations of medical technologies. He started approaching these problems from the perspective of an engineer.
    “I believe that personal problems are the best problems for engineers to solve,” adds Yuk. “If there is a chance that I can do something meaningful as an inventor to solve the problems my brother faced, there is no better motivation.”
    Working with soft materials
    Before his brother’s accident, Yuk was focused primarily on bioinspired robotics during his undergraduate studies at the Korea Advanced Institute of Science and Technology (KAIST). While serving in the Korean military, Yuk was stationed at a local school to help children with special needs. Working with children with limited motor skills inspired Yuk to apply to MIT for graduate school to build robots for use in rehabilitation.
    Shortly after being accepted into MIT’s mechanical engineering graduate program, Yuk received an email from a new faculty member, Xuanhe Zhao, now professor of mechanical engineering and George N. Hatsopoulos (1949) Faculty Fellow. An expert in the field of soft materials, Zhao was looking for a graduate student to take the lead on the understanding and design of soft materials, especially hydrogels.
    Despite being a complete newcomer to the world of soft materials, Yuk quickly became excited by their interesting properties and their unique potential to bridge artificial machines and devices with the human body. Through their research on how humans and machines interface, Zhao and Yuk began exploring the adhesive properties of hydrogels.
    “Adhesion of soft materials, especially hydrogels, greatly attracted our research interests, because the state-of-the-art adhesion of hydrogels was fairly weak and often led to failures,” Zhao says. “In 2015, we proposed the first general strategy to achieve tough adhesion for diverse hydrogels and other materials.”
    Like human tissue, hydrogels are made of polymers and water. The similarities in mechanical and electrical properties make biological tissue and hydrogels compatible, opening up a world of biomedical applications. Now, Zhao and Yuk could further integrate hydrogels within various materials, devices, and even the human body with unprecedented robustness.
    “We gradually started realizing that one major impact of hydrogels and hydrogel adhesion lies in biomedicine, but we needed someone who could help us better understand unmet clinical demands,” explains Yuk.
    That role was filled by Christoph Nabzdyk, a critical care physician, cardiothoracic anesthesiologist, and assistant professor at the Mayo Clinic.
    Understanding clinical needs
    In 2017, the research team was approached by Nabzdyk, who at the time was a fellow in critical care medicine and cardiothoracic anesthesiology at Massachusetts General Hospital and Harvard University. Nabzdyk had research experience in wound healing and novel biomaterials. After seeing Zhao present at a conference, Nabzdyk saw an opportunity to apply Zhao and Yuk’s work on bioadhesive hydrogels to tissue and organ healing.
    “Despite decades of research, the most common modes of sealing tissue defects are sutures and staples — both rather antiquated and traumatic approaches to tissue approximation,” says Nabzdyk. “The bioadhesive work that Hyunwoo and Xuanhe have been developing has broad implications for various scenarios of organ defect repairs, hemostasis, and implantable adhesive electronics.”
    Using Nabzdyk’s connections, Yuk conducted over 30 interviews with surgeons. These interviews with end users reminded Yuk of his two years in hospitals with his brother.
    “I realized that so many of the problems my brother experienced, like uncontrollable bleeding and dozens of scars, could possibly have been solved with these hydrogel materials,” adds Yuk.
    The research team started exploring the mechanics of sealing wet tissues. Together with Ellen Roche, assistant professor of mechanical engineering, and graduate student Claudia Varela, they created a bioadhesive patch that can seal organs in a matter of seconds. This double-sided tape could potentially replace sutures, preventing the leakage of blood and reducing the risk of infection, pain, and scarring.
    “Working on this research is personally very fulfilling. I feel like I have found my purpose as an engineer,” says Yuk.
    He admits that to have true impact, the bioadhesive tape needs to move from the laboratory to the operating room.
    Focus on social impact
    To navigate the uncharted territory of regulations, standards, and investors, Yuk, Zhao, and Nabzdyk formed a team, named SanaHeal, that sought guidance and seed funding from organizations including the MIT Deshpande Center for Technological Innovation and MIT Venture Mentoring Service. These programs helped Yuk polish his ability to communicate about SanaHeal, a skill set that helped him win the graduate award at the Collegiate Inventors Competition.
    In addition to winning at the Collegiate Inventors Competition, Yuk has received a number of accolades including being named one of Forbes 30 Under 30, receiving the Materials Research Society Gold Graduate Student Award, and getting first place at MIT’s de Florez Awards. According to his collaborators, these honors are well-deserved.
    “Driven by the genuine need to improve his younger brother’s health, Hyunwoo has been working long hours over the last six-and-a-half years — co-filing over 10 patent applications, leading or co-leading over 15 papers including two in Nature, and motivating and caring for other team members,” Zhao says. “I cannot wait to see how Hyunwoo and SanaHeal will impact his family, other patients, and the world.”
    “Hyunwoo stands out even amongst his already brilliant peers. I have not worked with anyone who at such an early stage in their career has had such a long list of academic accomplishments and advanced level of scientific thinking,” Nabzdyk adds. “Despite all his hard work and drive, Hyunwoo himself is a sweet and caring person, a gentle and humble soul.”
    With his doctoral studies coming to a close, Yuk is looking to expand SanaHeal so it can have real impact in saving lives or helping those who, like his brother, have suffered traumatic injury.
    “It feels like we are ready to step forward to the real battlefield,” he says. “I’ve grown from the mindset of an academic researcher to someone who is more translationally minded and focused on the social impact our work can have.”

  • Researchers develop speedier network analysis for a range of computer hardware

    Graphs — data structures that show the relationship among objects — are highly versatile. It’s easy to imagine a graph depicting a social media network’s web of connections. But graphs are also used in programs as diverse as content recommendation (what to watch next on Netflix?) and navigation (what’s the quickest route to the beach?). As Ajay Brahmakshatriya summarizes: “graphs are basically everywhere.”
    Brahmakshatriya has developed software to more efficiently run graph applications on a wider range of computer hardware. The software extends GraphIt, a state-of-the-art graph programming language, to run on graphics processing units (GPUs), hardware that processes many data streams in parallel. The advance could accelerate graph analysis, especially for applications that benefit from a GPU’s parallelism, such as recommendation algorithms.
    Brahmakshatriya, a PhD student in MIT’s Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory, will present the work at this month’s International Symposium on Code Generation and Optimization. Co-authors include Brahmakshatriya’s advisor, Professor Saman Amarasinghe, as well as Douglas T. Ross Career Development Assistant Professor of Software Technology Julian Shun, postdoc Changwan Hong, recent MIT PhD student Yunming Zhang PhD ’20 (now with Google), and Adobe Research’s Shoaib Kamil.
    When programmers write code, they don’t talk directly to the computer hardware. The hardware itself operates in binary — 1s and 0s — while the coder writes in a structured, “high-level” language made up of words and symbols. Translating that high-level language into hardware-readable binary requires programs called compilers. “A compiler converts the code to a format that can run on the hardware,” says Brahmakshatriya. One such compiler, specially designed for graph analysis, is GraphIt.
    The researchers developed GraphIt in 2018 to optimize the performance of graph-based algorithms regardless of the size and shape of the graph. GraphIt allows the user not only to input an algorithm, but also to schedule how that algorithm runs on the hardware. “The user can provide different options for the scheduling, until they figure out what works best for them,” says Brahmakshatriya. “GraphIt generates very specialized code tailored for each application to run as efficiently as possible.”
    Startups and established tech firms alike have adopted GraphIt to aid their development of graph applications. But Brahmakshatriya says the first iteration of GraphIt had a shortcoming: It ran only on central processing units, or CPUs, the type of processor in a typical laptop.
    “Some algorithms are massively parallel,” says Brahmakshatriya, “meaning they can better utilize hardware like a GPU that has 10,000 cores for execution.” He notes that some types of graph analysis, including recommendation algorithms, require a high degree of parallelism. So Brahmakshatriya extended GraphIt to enable graph analysis to flourish on GPUs.
    Brahmakshatriya’s team preserved the way GraphIt users input algorithms, but adapted the scheduling component for a wider array of hardware. “Our main design decision in extending GraphIt to GPUs was to keep the algorithm representation exactly the same,” says Brahmakshatriya. “Instead, we added a new scheduling language. So, the user can keep the same algorithms that they had written before [for CPUs], and just change the scheduling input to get the GPU code.”
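    The algorithm/schedule split can be illustrated with a plain-Python analogy. To be clear, this is not GraphIt syntax: the PageRank function and the schedule dictionaries below are invented purely to show the design principle of writing the algorithm once and describing the hardware mapping separately.

```python
# Illustrative sketch (plain Python, NOT GraphIt syntax) of separating the
# *algorithm* from the *schedule* that maps it onto hardware.

def pagerank_step(graph, ranks, damping=0.85):
    """One PageRank iteration: the algorithm, written once, hardware-agnostic."""
    n = len(graph)
    new_ranks = {}
    for node in graph:
        # Sum rank contributions from every node that links to `node`
        incoming = sum(ranks[src] / len(graph[src])
                       for src in graph if node in graph[src])
        new_ranks[node] = (1 - damping) / n + damping * incoming
    return new_ranks

# The schedule is a *separate* input describing how to run the algorithm; in
# GraphIt, swapping a CPU schedule for a GPU one regenerates the code without
# touching the algorithm. (These keys are made up for illustration.)
cpu_schedule = {"target": "cpu", "parallelism": "openmp", "direction": "pull"}
gpu_schedule = {"target": "gpu", "load_balancing": "edge-based"}

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank_step(graph, {"a": 1 / 3, "b": 1 / 3, "c": 1 / 3})
```

    The point of the split is that performance tuning becomes a matter of editing the schedule, not rewriting (and possibly breaking) the algorithm.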
    This new, optimized scheduling for GPUs gives a boost to graph algorithms that require high parallelism — including recommendation algorithms or internet search functions that sift through millions of websites simultaneously. To confirm the efficacy of GraphIt’s new extension, the team ran 90 experiments pitting GraphIt’s runtime against other state-of-the-art graph compilers on GPUs. The experiments included a range of algorithms and graph types, from road networks to social networks. GraphIt ran fastest in 65 of the 90 cases and was close behind the leading algorithm in the rest of the trials, demonstrating both its speed and versatility.
    GraphIt “advances the field by attaining performance and productivity simultaneously,” says Adrian Sampson, a computer scientist at Cornell University who was not involved with the research. “Traditional ways of doing graph analysis have one or the other: Either you can write a simple algorithm with mediocre performance, or you can hire an expert to write an extremely fast implementation — but that kind of performance is rarely accessible to mere mortals. The GraphIt extension is the key to letting ordinary people write high-level, abstract algorithms and nonetheless getting expert-level performance out of GPUs.”
    Sampson adds the advance could be particularly useful in rapidly changing fields: “An exciting domain like that is genomics, where algorithms are evolving so quickly that high-performance expert implementations can’t keep up with the rate of change. I’m excited for bioinformatics practitioners to get their hands on GraphIt to expand the kinds of genomic analyses they’re capable of.”
    Brahmakshatriya says the new GraphIt extension provides a meaningful advance in graph analysis, enabling users to move between CPUs and GPUs with ease while keeping state-of-the-art performance. “The field these days is tooth-and-nail competition. There are new frameworks coming out every day,” he says. But he emphasizes that the payoff for even slight optimization is worth it. “Companies are spending millions of dollars each day to run graph algorithms. Even if you make it run just 5 percent faster, you’re saving many thousands of dollars.”
    This research was funded, in part, by the National Science Foundation, U.S. Department of Energy, the Applications Driving Architectures Center, and the Defense Advanced Research Projects Agency.

  • Stefanie Mueller changes everything: A hands-on class responds to Covid

    When Professor Stefanie Mueller needed to adapt her laboratory class to the Covid-19 pandemic, she was initially overwhelmed by the amount of work that would need to be done. That’s because Mueller’s hands-on building and fabrication class, 6.810 (Engineering Interactive Technologies), is entirely about the ways that humans interact with technology in the physical world. As it turns out, however, technology held some surprises — even for Mueller.
    “At the beginning, I thought it would be so much work to rethink everything, but it was also a really good opportunity to rethink our teaching,” says the assistant professor of electrical engineering and computer science (EECS), who immediately realized that the vast majority of class work — tuning and re-tuning design concepts — would have to be done in isolation. “It’s a building-based class, so normally, the students sit with us in an extended lab section while we prototype together. With Covid, you can’t have people sitting together, so we introduced a bunch of changes. First, we gave everyone a little bag full of electronic components to take home and work with,” says Mueller. Those basic components combine with more specialized products generated in the lab to create a wide variety of interactive technologies, including touchpads and devices giving haptic feedback.
    With the majority of the class’s work shifting to dorm rooms, Mueller needed a better way to capture the free-flowing discussions and casual dynamic of a large group. Pre-pandemic, Mueller had used the familiar MIT tool Piazza to facilitate student discussions outside of class. “Piazza is more like a forum or board where a question is posted, then answered, and then the post goes down,” says Mueller, who wanted to find a better substitute for the organic conversations of a working lab — and found it in office chat tool Slack. “We found that Slack lowers the barrier for students to reach out because it’s much more informal than email,” says Mueller, who assigned every student a private Slack channel on the class’s shared workspace for one-to-one communications with their instructor, setting up additional channels for broad group discussion. “All the labs are now write-ups, with checkpoints where the students are asked to post pictures or videos on Slack to make sure they’re doing it correctly.”

    During Covid, only a few students at a time could convene in the laboratory space to use large fabrication tools like the 3D printer. Deep cleaning followed each group.
    Photo: Juliana Sohn

    Facing the limitations of remote troubleshooting, Mueller also set up a scheduling system for students to get in-person help and use tools too bulky and expensive for individual distribution. “[Students] book a time to come into the lab, get help, laser-cut and 3D print, so they never overcrowd the space and so there is time to sanitize,” she reports. Deep cleaning between each set of distanced visitors to the lab added another layer of complexity. “Covid meant we had to find new ways to offer the same level of teaching quality while keeping everyone safe and following all safety measures necessary in this pandemic,” says Michael Wessely, a co-instructor for 6.810. “I was truly impressed how well the teaching staff, MIT administration, and the staff from IDC [International Design Center], where we hold all of our practical sessions, worked hand-in-hand to provide the best experience possible for the students.”
    The projects made by Mueller’s students are as cutting-edge as the technology used to make them. One such project, a multi-touch pad based on the fundamental principles that underlay modern smartphone screens, was designed by EECS PhD candidate Junyi Zhu. “When Stefanie and I were brainstorming the problem set series for the class, we wanted it to cover more interactive technologies, as well as including the design and prototyping stages of an interactive device, so that the students would have a ‘full-cycle’ experience from digital design to physical fabrication and system building,” says Zhu, who acted as a teaching assistant (TA) for the course. “We also wanted it to be relatively new and raw, so that students would not feel bored. We looked through some projects and publications from recent years’ top HCI [human-computer interaction] conferences, and finally developed the multi-touch pad problem set series, which includes digital parametric design and physical fabrication of the multi-touch pad, electronic prototyping and circuit design, sensing data visualization, application development, and presentation materials creation (e.g., rotoscope drawing, short sequences video).”
    By layering up lessons, each predicated on a successful problem-solve, Zhu, whose current research focuses on object form and electronic function integration in interactive device prototyping, free-form electronics, and health sensing, hopes to give students an accurate sense of the experience of product development. “We believe that this can help students learn not only the technical skills, but also some ‘thinking models’ and fundamentals of interactive device/system design.”
    The projects in 6.810 frequently require advanced problem-solving skills, which students develop through laborious trial and error. One such project, an interactive mug, requires critical thought at every stage, as project designer Wessely explains: “In contrast to software engineering, where there is a wide selection of debugging tools, a fabricated prototype does not have any automatic tool to find errors in the fabrication, for example, of a sprayed thin film layer of a functional material. Students have to be their own ‘debugger’ by deeply understanding how fabrication technologies and materials work and behave.”
    The students aren’t the only ones who’ve developed strong problem-solving skills during the class. The faculty and co-instructors of 6.810 report that rising to the challenges posed by the pandemic has improved their pedagogy and left them with a lasting respect for their students.
    “TAing for a hybrid class during the pandemic was not easy, with extra logistical costs and precautions (e.g., practically self-quarantining all the time outside of the class to make sure to be able to show up in person for class OHs and workshops) across the entire semester,” says Junyi Zhu. “Similar challenges were faced by our students as well, with limited in-person office hours and activities affecting the learning and debugging experience, extra stress from the pandemic, etc. I am very proud of our students and the quality of their final projects.”
    Michael Wessely was left with a similar impression of his students’ strength. “I learned that MIT students are extremely dedicated and resilient, and are prepared to be successful no matter how complex or challenging a task is,” says Wessely.
    As for Mueller, she plans to bring many of the lessons of the pandemic back into her class when normal life resumes. “On Slack you have a history of a student’s project, and a TA can jump in much faster, which was a big plus. I will definitely use that workspace next year,” she notes. “Also, normally I would just have office hours, but I would not have sign-up slots. In retrospect, my TAs were overcrowded and I didn’t know who was there to get help; the whole spreadsheet signup made it easier to plan.”
    With the lessons of this strange time in hand, Mueller is prepared to prototype a new, better 6.810 long into the future.

  • Less-wasteful laser-cutting

    Laser-cutting is an essential part of many industries, from car manufacturing to construction. However, the process isn’t always easy or efficient: Cutting huge sheets of metal requires time and expertise, and even the most careful users can still produce large amounts of leftover material that go to waste. The underlying technologies that use lasers to cut edges aren’t actually all that cutting-edge: Their users are often in the dark about how much of each material they’ve used, or whether a design they have in mind can even be fabricated.
    With this in mind, researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) have created a new tool called Fabricaide that provides live feedback on how different parts of the design should be placed onto their sheets — and can even analyze exactly how much material is used. 

    Video: Fabricaide: A Tool for Less Wasteful Laser-Cutting

    “By giving feedback on the feasibility of a design as it’s being created, Fabricaide allows users to better plan their designs in the context of available materials,” says PhD student Ticha Sethapakdi, who led the development of the system alongside MIT Professor Stefanie Mueller, undergraduate Adrian Reginald Chua Sy, and Carnegie Mellon University PhD student Daniel Anderson.
    Fabricaide has a workflow that the team says significantly shortens the feedback loop between design and fabrication. The tool keeps an archive of what the user has done, tracking how much of each material they have left. It also allows the user to assign multiple materials to different parts of the design to be cut, which simplifies the process so that it’s less of a headache for multi-material designs. 
    Another important element of Fabricaide is a custom 2D packing algorithm that can arrange parts onto sheets in an optimally efficient way, in real time. The team showed that their algorithm was faster than existing open-source tools, while producing comparable quality. (The algorithm can also be turned off, if the user already knows how they want to arrange the materials.)
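    To give a rough sense of what a 2D packing algorithm does, here is a minimal greedy "shelf" packer in Python. It is a textbook stand-in sketched under simplifying assumptions (axis-aligned rectangles, no rotation), not Fabricaide's actual algorithm, and all names are illustrative.

```python
def shelf_pack(parts, sheet_width):
    """Greedy shelf packing sketch: place each rectangle left-to-right on the
    current shelf, opening a new shelf when the row is full. A simple
    stand-in for Fabricaide's custom packer, not its real algorithm."""
    placements = []            # (x, y, w, h) for each placed part
    x = y = shelf_height = 0
    # Sorting by decreasing height keeps each shelf's wasted headroom small
    for w, h in sorted(parts, key=lambda p: -p[1]):
        if x + w > sheet_width:          # row full: start a new shelf
            y += shelf_height
            x = shelf_height = 0
        placements.append((x, y, w, h))
        x += w
        shelf_height = max(shelf_height, h)
    used_height = y + shelf_height       # total sheet height consumed
    return placements, used_height

parts = [(4, 2), (3, 3), (5, 1)]         # (width, height) of each part
placements, used_height = shelf_pack(parts, sheet_width=8)
```

    Real nesting packers also rotate parts and handle irregular outlines, which this sketch ignores; the quality metric is the same, though: minimize the sheet area consumed.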
    “A lot of these materials are very scarce resources, and so a problem that often comes up is that a designer doesn’t realize that they’ve run out of a material until after they’ve already cut the design,” says Sethapakdi. “With Fabricaide, they’d be able to know earlier so that they can proactively determine how to best allocate materials.”
    As the user creates their design, the tool optimizes the placement of parts onto existing sheets and provides warnings if there is insufficient material, with suggestions for material substitutes (for example, using 1-millimeter-thick yellow acrylic instead of 1-millimeter-thick red acrylic). Fabricaide acts as an interface that integrates with existing design tools, and is compatible with both 2D and 3D CAD software like AutoCAD, SolidWorks, and even Adobe Illustrator.
    In the future the team hopes to incorporate more sophisticated properties of materials, like how strong or flexible they need to be. The team says that they could envision Fabricaide being used in shared makerspaces as a way to reduce waste. A user might see that, say, 10 people are trying to use a particular material, and can then switch to a different material for their design in order to conserve resources.
    The project was supported, in part, by the National Science Foundation.

  • Toward a disease-sniffing device that rivals a dog’s nose

    Numerous studies have shown that trained dogs can detect many kinds of disease — including lung, breast, ovarian, bladder, and prostate cancers, and possibly Covid-19 — simply through smell. In some cases involving prostate cancer, for example, the dogs had a 99 percent success rate in detecting the disease by sniffing patients’ urine samples.
    But it takes time to train such dogs, and their availability and time are limited. Scientists have been hunting for ways of automating the amazing olfactory capabilities of the canine nose and brain in a compact device. Now, a team of researchers at MIT and other institutions has come up with a system that can detect the chemical and microbial content of an air sample with even greater sensitivity than a dog’s nose. They coupled this to a machine-learning process that can identify the distinctive characteristics of the disease-bearing samples.
    The findings, which the researchers say could someday lead to an automated odor-detection system small enough to be incorporated into a cellphone, are being published today in the journal PLOS ONE, in a paper by Claire Guest of Medical Detection Dogs in the U.K., Research Scientist Andreas Mershin of MIT, and 18 others at Johns Hopkins University, the Prostate Cancer Foundation, and several other universities and organizations.
     “Dogs, for now 15 years or so, have been shown to be the earliest, most accurate disease detectors for anything that we’ve ever tried,” Mershin says. And their performance in controlled tests has in some cases exceeded that of the best current lab tests, he says. “So far, many different types of cancer have been detected earlier by dogs than any other technology.”
    What’s more, the dogs apparently pick up connections that have so far eluded human researchers: When trained to respond to samples from patients with one type of cancer, some dogs have then identified several other types of cancer — even though the similarities between the samples weren’t evident to humans.
    These dogs can identify “cancers that don’t have any identical biomolecular signatures in common, nothing in the odorants,” Mershin says. Using powerful analytical tools including gas chromatography-mass spectrometry (GC-MS) and microbial profiling, “if you analyze the samples from, let’s say, skin cancer and bladder cancer and breast cancer and lung cancer — all things that the dog has been shown to be able to detect — they have nothing in common.” Yet the dog can somehow generalize from one kind of cancer to be able to identify the others.
    Mershin and the team over the last few years have developed, and continued to improve on, a miniaturized detector system that incorporates mammalian olfactory receptors stabilized to act as sensors, whose data streams can be handled in real-time by a typical smartphone’s capabilities. He envisions a day when every phone will have a scent detector built in, just as cameras are now ubiquitous in phones. Such detectors, equipped with advanced algorithms developed through machine learning, could potentially pick up early signs of disease far sooner than typical screening regimes, he says — and could even warn of smoke or a gas leak as well.
    In the latest tests, the team analyzed 50 urine samples from confirmed cases of prostate cancer and from controls known to be free of the disease, using both dogs trained and handled by Medical Detection Dogs in the U.K. and the miniaturized detection system. They then applied a machine-learning program to tease out any similarities and differences between the samples that could help the sensor-based system identify the disease. In testing the same samples, the artificial system matched the success rate of the dogs, with both methods scoring more than 70 percent.
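    The article does not specify the team's model, but the general shape of the machine-learning step — training a classifier on sensor-derived feature vectors labeled as disease or control — can be sketched with synthetic data and a simple nearest-centroid classifier. Everything below is illustrative, not the study's actual pipeline:

```python
# Illustrative sketch only: classify samples as disease-positive (1) or
# control (0) from sensor-derived feature vectors. The data are synthetic
# and the nearest-centroid classifier is a stand-in; the study's actual
# model is not described in this article.
import random

random.seed(0)
DIM = 8  # number of sensor channels (hypothetical)

def make_sample(label):
    # Positives are shifted in the first three channels; every channel is noisy.
    base = [0.6 if (label == 1 and i < 3) else 0.0 for i in range(DIM)]
    return [b + random.gauss(0, 0.3) for b in base], label

train = [make_sample(i % 2) for i in range(40)]
held_out = [make_sample(i % 2) for i in range(10)]

def centroid(samples):
    vecs = [v for v, _ in samples]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

pos_c = centroid([s for s in train if s[1] == 1])
neg_c = centroid([s for s in train if s[1] == 0])

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(v):
    # Assign the label of the nearer class centroid.
    return 1 if dist2(v, pos_c) < dist2(v, neg_c) else 0

accuracy = sum(predict(v) == y for v, y in held_out) / len(held_out)
print(f"held-out accuracy: {accuracy:.2f}")
```

    The real analysis would use clinically labeled samples and a far more capable model; the point is only that the classifier learns label boundaries the raw chemistry does not make obvious.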
    The miniaturized detection system, Mershin says, is actually 200 times more sensitive than a dog’s nose in terms of being able to detect and identify tiny traces of different molecules, as confirmed through controlled tests mandated by DARPA. But in terms of interpreting those molecules, “it’s 100 percent dumber.” That’s where the machine learning comes in, to try to find the elusive patterns that dogs can infer from the scent, but humans haven’t been able to grasp from a chemical analysis.
     “The dogs don’t know any chemistry,” Mershin says. “They don’t see a list of molecules appear in their head. When you smell a cup of coffee, you don’t see a list of names and concentrations, you feel an integrated sensation. That sensation of scent character is what the dogs can mine.”
    While the physical apparatus for detecting and analyzing the molecules in air has been under development for several years, with much of the focus on reducing its size, until now the analysis was lacking. “We knew that the sensors are already better than what the dogs can do in terms of the limit of detection, but what we haven’t shown before is that we can train an artificial intelligence to mimic the dogs,” he says. “And now we’ve shown that we can do this. We’ve shown that what the dog does can be replicated to a certain extent.”
    This achievement, the researchers say, provides a solid framework for further research to develop the technology to a level suitable for clinical use. Mershin hopes to be able to test a far larger set of samples, perhaps 5,000, to pinpoint in greater detail the significant indicators of disease. But such testing doesn’t come cheap: It costs about $1,000 per sample for clinically tested and certified samples of disease-carrying and disease-free urine to be collected, documented, shipped, and analyzed, he says.
    Reflecting on how he became involved in this research, Mershin recalls a study of bladder cancer detection in which a dog kept misidentifying one member of the control group as positive for the disease, even though he had been specifically selected, based on hospital tests, as disease-free. The patient, who knew about the dog’s test, opted to have further tests, and a few months later was found to have the disease at a very early stage. “Even though it’s just one case, I have to admit that did sway me,” Mershin says.
    The team included researchers at MIT, Johns Hopkins University in Maryland, Medical Detection Dogs in Milton Keynes, U.K., the Cambridge Polymer Group, the Prostate Cancer Foundation, the University of Texas at El Paso, Imagination Engines, and Harvard University. The research was supported by the Prostate Cancer Foundation, the National Cancer Institute, and the National Institutes of Health.

  • in

    New surgery may enable better control of prosthetic limbs

    MIT researchers have invented a new type of amputation surgery that can help amputees to better control their residual muscles and sense where their “phantom limb” is in space. This restored sense of proprioception should translate to better control of prosthetic limbs, as well as a reduction of limb pain, the researchers say.
    In most amputations, muscle pairs that control the affected joints, such as elbows or ankles, are severed. However, the MIT team has found that reconnecting these muscle pairs, allowing them to retain their normal push-pull relationship, offers people much better sensory feedback.
    “Both our study and previous studies show that the better patients can dynamically move their muscles, the more control they’re going to have. The better a person can actuate muscles that move their phantom ankle, for example, the better they’re actually able to use their prostheses,” says Shriya Srinivasan, an MIT postdoc and lead author of the study.
    In a study that will appear this week in the Proceedings of the National Academy of Sciences, 15 patients who received this new type of surgery, known as agonist-antagonist myoneural interface (AMI), could control their muscles more precisely than patients with traditional amputations. The AMI patients also reported feeling more freedom of movement and less pain in their affected limb.
    “Through surgical and regenerative techniques that restore natural agonist-antagonist muscle movements, our study shows that persons with an AMI amputation experience a greater phantom joint range of motion, a reduced level of pain, and an increased fidelity of prosthetic limb controllability,” says Hugh Herr, a professor of media arts and sciences, head of the Biomechatronics group in the Media Lab, and the senior author of the paper.
    Other authors of the paper include Samantha Gutierrez-Arango and Erica Israel, senior research support associates at the Media Lab; Ashley Chia-En Teng, an MIT undergraduate; Hyungeun Song, a graduate student in the Harvard-MIT Program in Health Sciences and Technology; Zachary Bailey, a former visiting researcher at the Media Lab; Matthew Carty, a visiting scientist at the Media Lab; and Lisa Freed, a Media Lab research scientist.
    Restoring sensation
    Most muscles that control limb movement occur in pairs that alternately stretch and contract. One example of these agonist-antagonist pairs is the biceps and triceps. When you bend your elbow, the biceps muscle contracts, causing the triceps to stretch, and that stretch sends sensory information back to the brain.
    During a conventional limb amputation, these muscle movements are restricted, cutting off this sensory feedback and making it much harder for amputees to feel where their prosthetic limbs are in space or to sense forces applied to those limbs.
    “When one muscle contracts, the other one doesn’t have its antagonist activity, so the brain gets confusing signals,” says Srinivasan, a former member of the Biomechatronics group now working at MIT’s Koch Institute for Integrative Cancer Research. “Even with state-of-the-art prostheses, people are constantly visually following the prosthesis to try to calibrate their brains to where the device is moving.”
    A few years ago, the MIT Biomechatronics group invented, and validated in preclinical studies, a new amputation technique that maintains the relationships between those muscle pairs. Instead of severing each muscle, the surgeons connect the two ends of the muscles so that they still dynamically communicate with each other within the residual limb. In a 2017 study of rats, the team showed that when the animals contracted one muscle of the pair, the other muscle would stretch and send sensory information back to the brain.
    Since these preclinical studies, about 25 people have undergone the AMI surgery at Brigham and Women’s Hospital, performed by Carty, who is also a plastic surgeon there. In the new PNAS study, the researchers measured the precision of muscle movements in the ankle and subtalar joints of 15 patients who had AMI amputations performed below the knee. These patients had two sets of muscles reconnected during their amputation: the muscles that control the ankle, and those that control the subtalar joint, which allows the sole of the foot to tilt inward or outward. The study compared these patients to seven people who had traditional amputations below the knee.
    Each patient was evaluated while lying down with their legs propped on a foam pillow, allowing their feet to extend into the air. Patients did not wear prosthetic limbs during the study. The researchers asked them to flex their ankle joints — both the intact one and the “phantom” one — by 25, 50, 75, or 100 percent of their full range of motion. Electrodes attached to each leg allowed the researchers to measure the activity of specific muscles as each movement was performed repeatedly.
    The researchers compared the electrical signals coming from the muscles in the amputated limb with those from the intact limb and found that for AMI patients, they were very similar. They also found that patients with the AMI amputation were able to control the muscles of their amputated limb much more precisely than the patients with traditional amputations. Patients with traditional amputations were more likely to perform the same movement over and over in their amputated limb, regardless of how far they were asked to flex their ankle.
    “The AMI patients’ ability to control these muscles was a lot more intuitive than those with typical amputations, which largely had to do with the way their brain was processing how the phantom limb was moving,” Srinivasan says.
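    As a rough illustration of what “very similar” electrical signals could mean quantitatively, the sketch below computes a Pearson correlation between two hypothetical muscle-activation traces. This is a stand-in measure with made-up numbers, not the analysis reported in the study:

```python
# Illustrative only: quantify similarity between two muscle-activation
# traces with a Pearson correlation coefficient. The envelopes below are
# hypothetical; this is not the study's actual analysis method.
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length signals."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical activation envelopes at 25/50/75/100 percent effort,
# repeated twice, for an intact limb and a residual limb.
intact   = [0.25, 0.50, 0.75, 1.00, 0.25, 0.50, 0.75, 1.00]
residual = [0.22, 0.48, 0.80, 0.95, 0.28, 0.47, 0.78, 0.97]

print(f"similarity r = {pearson_r(intact, residual):.3f}")
```

    A residual-limb trace that tracks the intact limb across effort levels yields a correlation near 1, while a limb stuck repeating the same movement regardless of the requested effort would score much lower.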
    In a paper that recently appeared in Science Translational Medicine, the researchers reported that brain scans of the AMI amputees showed that they were getting more sensory feedback from their residual muscles than patients with traditional amputations. In work that is now ongoing, the researchers are measuring whether this ability translates to better control of a prosthetic leg while walking.
    Freedom of movement
    The researchers also discovered an effect they did not anticipate: AMI patients reported much less pain and a greater sensation of freedom of movement in their amputated limbs.
    “Our study wasn’t specifically designed to achieve this, but it was a sentiment our subjects expressed over and over again. They had a much greater sensation of what their foot actually felt like and how it was moving in space,” Srinivasan says. “It became increasingly apparent that restoring the muscles to their normal physiology had benefits not only for prosthetic control, but also for their day-to-day mental well-being.”
    The research team has also developed a modified version of the surgery that can be performed on people who have already had a traditional amputation. This process, which they call “regenerative AMI,” involves grafting small muscle segments to serve as the agonist and antagonist muscles for an amputated joint. They are also working on developing the AMI procedure for other types of amputations, including above the knee and above and below the elbow.
    “We’re learning that this technique of rewiring the limb, and using spare parts to reconstruct that limb, is working, and it’s applicable to various parts of the body,” Herr says. 
    The research was funded by the MIT Media Lab Consortia, the National Institute of Child Health and Human Development, the National Center for Medical Rehabilitation Research, and the Congressionally Directed Medical Research Programs of the U.S. Department of Defense.

  • in

    Shafi Goldwasser wins L'Oréal-UNESCO Award

    Shafi Goldwasser, the RSA Professor of Computer Science and Engineering at MIT, has been named the laureate for North America in the 2021 L’Oréal-UNESCO For Women in Science International Awards. Goldwasser is a co-leader of the cryptography and information security group and a member of the complexity theory group within the Theory of Computation Group and the Computer Science and Artificial Intelligence Laboratory.
    The award celebrates Goldwasser’s groundbreaking work in cryptography, which has enabled secure communication and verification over the internet and collaborative computation on private data. Goldwasser is also known for her pioneering work on interactive and probabilistic proof verification. In announcing the award, the organizers said that Goldwasser’s research “has a significant impact on our understanding of large classes of problems for which computers cannot efficiently find approximate solutions.”
    Goldwasser’s work has had a similar impact on MIT. “Shafi’s work on zero-knowledge proofs in the 1980s enabled cryptographic protocols, which are used everywhere on the internet for secure communications,” says Arvind, faculty head of CS and the Charles W. and Jennifer C. Johnson Professor in the Department of Electrical Engineering and Computer Science. “The impact of her work will be felt for generations because of the long list of brilliant students she has mentored over the years.”
    Goldwasser has been honored with the A.M. Turing Award in 2013, the Simons Foundation Investigator Award in 2012, the IEEE Emanuel R. Priore Award in 2011, the Franklin Institute Benjamin Franklin Medal in Computer and Cognitive Science in 2010, and many other awards and honors. In addition to her duties at MIT, Goldwasser is director of the Simons Institute for the Theory of Computing, professor in electrical engineering and computer sciences at the University of California at Berkeley, and professor of computer science and applied mathematics at the Weizmann Institute of Science in Israel.
    Founded in 1998, the L’Oréal-UNESCO For Women in Science International Awards annually honor five eminent women scientists, one representing each major region of the world. Focusing on excellent research in the physical sciences, mathematics, and computer science, the program, which has honored 117 laureates since its creation (including Institute Professor Emerita of Physics and Electrical Engineering and Computer Science Mildred Dresselhaus in 2007), seeks to “make visible” the achievements of women in science, heralding their accomplishments and inspiring more women to pursue science as a vocation.