More stories

  • Accelerating AI at the speed of light

    Improved computing power and an exponential increase in data have helped fuel the rapid rise of artificial intelligence. But as AI systems become more sophisticated, they will need even more computational power, and traditional computing hardware most likely won’t be able to keep up. To solve the problem, MIT spinout Lightelligence is developing the next generation of computing hardware.

    The Lightelligence solution makes use of the silicon fabrication platform used for traditional semiconductor chips, but in a novel way. Rather than building chips that use electricity to carry out computations, Lightelligence develops components powered by light that are fast and energy-efficient, and they might just be the hardware needed to power the AI revolution. Compared with traditional architectures, the optical chips made by Lightelligence offer orders-of-magnitude improvements in speed, latency, and power consumption.

    To perform arithmetic operations, electronic chips combine tens, sometimes hundreds, of logic gates, and the transistors in those gates must switch on and off over multiple clock periods. Every time a logic gate transistor switches, it generates heat and consumes power.

    Not so with the chips produced by Lightelligence. In the optical domain, arithmetic computations are done with physics instead of with logic gate transistors that require multiple clock cycles, and more cycles mean a longer wait for a result. “We precisely control how the photons interact with each other inside the chip,” says Yichen Shen PhD ’16, co-founder and CEO of Lightelligence. “It’s just light propagating through the chip, photons interfering with each other. The nature of the interference does the mathematics that we want it to do.”
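
    In code, the general principle (a minimal NumPy sketch, not Lightelligence’s proprietary design) looks like this: any weight matrix can be factored, via the singular value decomposition, into unitary operations that a mesh of interferometers performs by pure interference, plus a per-channel scaling.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical weight matrix of a small neural-network layer.
    M = rng.normal(size=(4, 4))

    # SVD: M = U @ diag(s) @ Vh. On a photonic chip, the unitaries U and Vh
    # map to meshes of interferometers (lossless interference), and diag(s)
    # to simple per-channel attenuation or amplification.
    U, s, Vh = np.linalg.svd(M)

    x = rng.normal(size=4)  # input vector, encoded as optical amplitudes

    # "Propagate the light": interfere (Vh), scale each mode (s), interfere (U).
    y_optical = U @ (s * (Vh @ x))

    # The interference-based result matches the electronic matrix multiply.
    assert np.allclose(y_optical, M @ x)
    ```

    Because the unitary steps are just light propagating through the mesh, the linear algebra consumes no clock cycles; the result appears at the output as fast as the photons can cross the chip.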

    This process of interference generates very little heat, which means Shen’s optical computing chips enable much lower power consumption than their electron-powered counterparts. Shen points out that we’ve made use of fiber optics for long-distance communication for decades. “Think of the optical fibers spread across the bottom of the Pacific Ocean, and the light propagating through thousands of kilometers without losing much power. Lightelligence is bringing this concept for long-distance communication to on-chip compute.”

    With most forecasters projecting an end to Moore’s Law sometime around 2025, Shen believes his optics-driven solution is poised to address many of the computational challenges of the future. “We’re changing the fundamental way computing is done, and I think we’re doing it at the right time in history,” says Shen. “We believe optics is going to be the next computing platform, at least for linear operations like AI.”

    To be clear, Shen does not envision optics replacing the entire electronic computing industry. Rather, Lightelligence aims to accelerate certain linear algebra operations to perform quick, power-efficient tasks like those found in artificial neural networks.

    Much of AI compute happens in the cloud at data centers like the ones supporting Amazon or Microsoft. Because AI algorithms are computationally intensive, AI compute takes up a large percentage of data center capacity. Picture tens of thousands of servers, running continuously, burning millions of dollars’ worth of electricity. Now imagine replacing some of those conventional servers with Lightelligence servers that burn much less power at a fraction of the cost. “Our optical chips would greatly reduce the cost of data centers, or, put another way, greatly increase the computational capability of those data centers for AI applications,” says Shen.

    And what about self-driving vehicles? They rely on cameras and AI computation to make quick decisions. But a conventional digital electronic chip doesn’t “think” quickly enough to make the decisions necessary at high speeds. Faster computational imaging leads to faster decision-making. “Our chip completes these decision-making tasks at a fraction of the time of regular chips, which would enable the AI system within the car to make much quicker decisions and more precise decisions, enabling safer driving,” says Shen.

    Lightelligence boasts an all-MIT founding team, supported by 40 technical experts, including machine learning pioneers, leading photonics researchers, and semiconductor industry veterans intent on revolutionizing computing technology. Shen did his PhD work in the Department of Physics with professors Marin Soljačić and John Joannopoulos, where he developed an interest in the intersection of photonics and AI. “I realized that computation is a key enabler of modern artificial intelligence, and faster computing hardware would be needed to complement the growth of faster, smarter AI algorithms,” he says.

    Lightelligence was founded in 2017 when Shen teamed up with Soljačić and two other MIT alumni. Fellow co-founder Huaiyu Meng SM ’14, PhD ’18 received his doctorate in electrical engineering and now serves as Lightelligence’s vice president of photonics. Rounding out the founding team is Spencer Powers MBA ’16. Powers, who received his MBA from the MIT Sloan School of Management, is also a Lightelligence board member with extensive experience in the startup world.

    Shen and his team are not alone in this new field of optical computing, but they do have key advantages over their competitors. First off, they invented the technology at the Institute. Lightelligence is also the first company to have built a complete optical computing hardware system, which it accomplished in April 2019. Shen is confident in the innovation potential of Lightelligence and what it could mean for the future, regardless of the competition. “There are new stories of teams working in this space, but we’re not only the first, we’re the fastest in terms of execution. I stand by that,” he says.

    But there’s another reason Shen’s not worried about the competition. He likens this stage in the evolution of the technology to the era when transistors were replacing vacuum tubes. Several transistor companies were making the leap, but they weren’t competing with each other so much as they were innovating to compete with the incumbent industry. “Having more competitors doing optical computing is good for us at this stage,” says Shen. “It makes for a louder voice, a bigger community to expand and enhance the whole ecosystem for optical computing.”

    By 2021, Shen anticipates that Lightelligence will have de-risked 80 to 90 percent of the technical challenges necessary for optical computing to become a viable commercial product. In the meantime, Lightelligence is making the most of its status as the newest member of the MIT Startup Exchange accelerator, STEX25, building deep relationships with tier-one customers on several niche applications with a pressing need for high-performance hardware, such as data centers and manufacturing.

  • The potential of artificial intelligence to bring equity in health care

    Health care is at a crossroads, a point where artificial intelligence tools are being introduced across the field. This introduction comes with great expectations: AI has the potential to greatly improve existing technologies, sharpen personalized medicine, and, with an influx of big data, benefit historically underserved populations.

    But in order to do those things, the health care community must ensure that AI tools are trustworthy, and that they don’t end up perpetuating biases that exist in the current system. Researchers at the MIT Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), an initiative to support AI research in health care, call for creating a robust infrastructure that can aid scientists and clinicians in pursuing this mission.

    Fair and equitable AI for health care

    The Jameel Clinic recently hosted the AI for Health Care Equity Conference to assess current state-of-the-art work in this space, including new machine learning techniques that support fairness, personalization, and inclusiveness; identify key areas of impact in health care delivery; and discuss regulatory and policy implications.

    Nearly 1,400 people virtually attended the conference to hear from thought leaders in academia, industry, and government who are working to improve health care equity and further understand the technical challenges in this space and paths forward.

    During the event, Regina Barzilay, the School of Engineering Distinguished Professor of AI and Health and the AI faculty lead for Jameel Clinic, and Bilal Mateen, clinical technology lead at the Wellcome Trust, announced the Wellcome Fund grant conferred to Jameel Clinic to create a community platform supporting equitable AI tools in health care.

    The project’s ultimate goal is not to solve an academic question or reach a specific research benchmark, but to actually improve the lives of patients worldwide. Researchers at Jameel Clinic insist that AI tools should not be designed with a single population in mind, but instead be crafted to be iterative and inclusive, to serve any community or subpopulation. To do this, a given AI tool needs to be studied and validated across many populations, usually in multiple cities and countries. Also on the project wish list is creating open access for the scientific community at large, while honoring patient privacy, to democratize the effort.

    “What became increasingly evident to us as a funder is that the nature of science has fundamentally changed over the last few years, and is substantially more computational by design than it ever was previously,” says Mateen.

    The clinical perspective

    This call to action is a response to health care in 2020. At the conference, Collin Stultz, a professor of electrical engineering and computer science and a cardiologist at Massachusetts General Hospital, spoke on how health care providers typically prescribe treatments and why these treatments are often incorrect.

    In simplistic terms, a doctor collects information on their patient, then uses that information to create a treatment plan. “The decisions providers make can improve the quality of patients’ lives or make them live longer, but this does not happen in a vacuum,” says Stultz.

    Instead, he says, a complex web of forces can influence how a patient receives treatment. These forces range from the hyper-specific to the universal: factors unique to an individual patient, provider biases such as knowledge gleaned from flawed clinical trials, and broad structural problems like uneven access to care.

    Datasets and algorithms

    A central question of the conference revolved around how race is represented in datasets, since it’s a variable that can be fluid, self-reported, and defined in non-specific terms.

    “The inequities we’re trying to address are large, striking, and persistent,” says Sharrelle Barber, an assistant professor of epidemiology and biostatistics at Drexel University. “We have to think about what that variable really is. Really, it’s a marker of structural racism,” says Barber. “It’s not biological, it’s not genetic. We’ve been saying that over and over again.”

    Some aspects of health are purely determined by biology, such as hereditary conditions like cystic fibrosis, but the majority of conditions are not straightforward. According to Massachusetts General Hospital oncologist T. Salewa Oseni, when it comes to patient health and outcomes, research tends to assume biological factors have outsized influence, but socioeconomic factors should be considered just as seriously.

    Even as machine learning researchers detect preexisting biases in the health care system, they must also address weaknesses in algorithms themselves, as highlighted by a series of speakers at the conference. They must grapple with important questions that arise in all stages of development, from the initial framing of what the technology is trying to solve to overseeing deployment in the real world.

    Irene Chen, a PhD student at MIT studying machine learning, examines all steps of the development pipeline through the lens of ethics. As a first-year doctoral student, Chen was alarmed to find an “out-of-the-box” algorithm, which happened to project patient mortality, churning out significantly different predictions based on race. This kind of algorithm can have real impacts, too; it guides how hospitals allocate resources to patients.

    Chen set about understanding why this algorithm produced such uneven results. In later work, she defined three specific sources of bias that could be disentangled from any model. The first is “bias,” but in a statistical sense — maybe the model is not a good fit for the research question. The second is variance, which is controlled by sample size. The last source is noise, which has nothing to do with tweaking the model or increasing the sample size. Instead, it indicates that something has happened during the data collection process, a step that takes place well before model development. Many systemic inequities, such as limited health insurance or a historic mistrust of medicine among certain groups, get “rolled up” into noise.

    “Once you identify which component it is, you can propose a fix,” says Chen.
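
    A minimal sketch of that kind of decomposition, using synthetic data rather than Chen’s clinical datasets, retrains a deliberately mis-specified model on many resampled training sets and splits its test error into the three components she describes:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    NOISE_SD = 0.3                 # irreducible noise from "data collection"

    def make_data(n):
        # Hypothetical task: the true signal is nonlinear in x.
        x = rng.uniform(-2, 2, size=(n, 1))
        y = np.sin(2 * x[:, 0]) + rng.normal(scale=NOISE_SD, size=n)
        return x, y

    x_test = np.linspace(-2, 2, 200).reshape(-1, 1)
    y_clean = np.sin(2 * x_test[:, 0])          # noise-free targets

    # Retrain the same (mis-specified) linear model on many fresh training sets.
    preds = np.array([
        LinearRegression().fit(*make_data(100)).predict(x_test)
        for _ in range(200)
    ])

    bias2 = np.mean((preds.mean(axis=0) - y_clean) ** 2)  # wrong model class
    variance = preds.var(axis=0).mean()                   # shrinks with more data
    noise = NOISE_SD ** 2                                 # fix the data pipeline

    print(f"bias^2={bias2:.3f}  variance={variance:.3f}  noise={noise:.3f}")
    ```

    Each component points to a different fix: a richer model for the bias term, more data for the variance term, and better data collection for the noise term.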

    Marzyeh Ghassemi, an assistant professor at the University of Toronto and an incoming professor at MIT, has studied the trade-off between anonymizing highly personal health data and ensuring that all patients are fairly represented. In approaches like differential privacy, a technique that guarantees the same level of privacy for every data point, individuals who are too “unique” in their cohort begin to lose predictive influence in the model. In health data, where trials often underrepresent certain populations, “minorities are the ones that look unique,” says Ghassemi.

    “We need to create more data, it needs to be diverse data,” she says. “These robust, private, fair, high-quality algorithms we’re trying to train require large-scale data sets for research use.”
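
    The tension Ghassemi describes shows up even in the simplest differentially private operation, a noisy count. In this sketch (illustrative numbers only), the Laplace mechanism adds the same scale of noise to every group’s count, which is a far larger relative distortion for a small, underrepresented group:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical cohort sizes: one majority group, one small minority group.
    counts = {"majority": 9000, "minority": 150}
    epsilon = 0.5  # privacy budget: smaller = stronger privacy, more noise

    for group, true_count in counts.items():
        # Laplace mechanism: a counting query has sensitivity 1, so noise with
        # scale 1/epsilon gives every individual the same privacy guarantee.
        draws = true_count + rng.laplace(scale=1 / epsilon, size=10_000)
        rel_err = np.mean(np.abs(draws - true_count)) / true_count
        print(f"{group:8s} count={true_count:5d} mean relative error={rel_err:.3%}")
    ```

    The same absolute noise protects every individual equally, but it drowns out the signal precisely for the groups that were already underrepresented.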

    Beyond Jameel Clinic, other organizations are recognizing the power of harnessing diverse data to create more equitable health care. Anthony Philippakis, chief data officer at the Broad Institute of MIT and Harvard, presented on the All of Us research program, an unprecedented project from the National Institutes of Health that aims to bridge the gap for historically under-recognized populations by collecting observational and longitudinal health data on over 1 million Americans. The database is meant to uncover how diseases present across different sub-populations.

    One of the largest questions of the conference, and of AI in general, revolves around policy. Kadija Ferryman, a cultural anthropologist and bioethicist at New York University, points out that AI regulation is in its infancy, which can be a good thing. “There’s a lot of opportunities for policy to be created with these ideas around fairness and justice, as opposed to having policies that have been developed, and then working to try to undo some of the policy regulations,” says Ferryman.

    Even before policy comes into play, there are certain best practices for developers to keep in mind. Najat Khan, chief data science officer at Janssen R&D, encourages researchers to be “extremely systematic” when choosing datasets. Even large, common datasets contain inherent bias.

    Even more fundamental is opening the door to a diverse group of future researchers.

    “We have to ensure that we are developing folks, investing in them, and having them work on really important problems that they care about,” says Khan. “You’ll see a fundamental shift in the talent that we have.”

    The AI for Health Care Equity Conference was co-organized by MIT’s Jameel Clinic; Department of Electrical Engineering and Computer Science; Institute for Data, Systems, and Society; Institute for Medical Engineering and Science; and the MIT Schwarzman College of Computing.

  • Artificial intelligence system could help counter the spread of disinformation

    Disinformation campaigns are not new — think of wartime propaganda used to sway public opinion against an enemy. What is new, however, is the use of the internet and social media to spread these campaigns. The spread of disinformation via social media has the power to change elections, strengthen conspiracy theories, and sow discord.

    Steven Smith, a staff member from MIT Lincoln Laboratory’s Artificial Intelligence Software Architectures and Algorithms Group, is part of a team that set out to better understand these campaigns by launching the Reconnaissance of Influence Operations (RIO) program. Their goal was to create a system that would automatically detect disinformation narratives, as well as the individuals spreading those narratives, within social media networks. Earlier this year, the team published a paper on their work in the Proceedings of the National Academy of Sciences, and last fall they received an R&D 100 award.

    The project originated in 2014 when Smith and colleagues were studying how malicious groups could exploit social media. They noticed increased and unusual activity in social media data from accounts that had the appearance of pushing pro-Russian narratives.

    “We were kind of scratching our heads,” Smith says of the data. So the team applied for internal funding through the laboratory’s Technology Office and launched the program in order to study whether similar techniques would be used in the 2017 French elections.

    In the 30 days leading up to the election, the RIO team collected real-time social media data to search for and analyze the spread of disinformation. In total, they compiled 28 million Twitter posts from 1 million accounts. Then, using the RIO system, they were able to detect disinformation accounts with 96 percent precision.

    What makes the RIO system unique is that it combines multiple analytics techniques in order to create a comprehensive view of where and how the disinformation narratives are spreading.

    “If you are trying to answer the question of who is influential on a social network, traditionally, people look at activity counts,” says Edward Kao, who is another member of the research team. On Twitter, for example, analysts would consider the number of tweets and retweets. “What we found is that in many cases this is not sufficient. It doesn’t actually tell you the impact of the accounts on the social network.”

    As part of Kao’s PhD work in the laboratory’s Lincoln Scholars program, a tuition fellowship program, he developed a statistical approach — now used in RIO — to help determine not only whether a social media account is spreading disinformation but also how much the account causes the network as a whole to change and amplify the message.
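
    RIO’s causal estimator is more sophisticated than anything shown here, but a small networkx sketch with hypothetical accounts illustrates Kao’s point that activity counts and network influence can disagree: an account retweeted only twice, by well-connected hubs, can outrank one retweeted five times by throwaway accounts.

    ```python
    import networkx as nx

    # Hypothetical retweet network: an edge (a, b) means account a retweeted b.
    retweets = [
        ("u1", "loud"), ("u2", "loud"), ("u3", "loud"),   # "loud" is retweeted
        ("u4", "loud"), ("u5", "loud"),                   # 5x by throwaways
        ("f1", "hub1"), ("f2", "hub1"), ("f3", "hub1"),
        ("g1", "hub2"), ("g2", "hub2"), ("g3", "hub2"),
        ("hub1", "quiet"), ("hub2", "quiet"),             # "quiet": only 2x,
    ]                                                     # but by real hubs
    G = nx.DiGraph(retweets)

    activity = {n: G.in_degree(n) for n in G}   # naive influence: retweet count
    pagerank = nx.pagerank(G)                   # influence that follows chains

    for n in ("loud", "quiet", "hub1"):
        print(f"{n:5s} retweeted {activity[n]}x, pagerank {pagerank[n]:.3f}")
    ```

    Here “quiet” ends up with the highest PageRank even though “loud” is retweeted more than twice as often.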

    Erika Mackin, another research team member, also applied a new machine learning approach that helps RIO to classify these accounts by looking into data related to behaviors such as whether the account interacts with foreign media and what languages it uses. This approach allows RIO to detect hostile accounts that are active in diverse campaigns, ranging from the 2017 French presidential elections to the spread of Covid-19 disinformation.

    Another unique aspect of RIO is that it can detect and quantify the impact of accounts operated by both bots and humans, whereas most automated systems in use today detect bots only. RIO also has the ability to help those using the system to forecast how different countermeasures might halt the spread of a particular disinformation campaign.

    The team envisions RIO being used by both government and industry, and extending beyond social media into the realm of traditional media such as newspapers and television. Currently, they are working with West Point student Joseph Schlessinger, who is also a graduate student at MIT and a military fellow at Lincoln Laboratory, to understand how narratives spread across European media outlets. A new follow-on program is also underway to dive into the cognitive aspects of influence operations and how individual attitudes and behaviors are affected by disinformation.

    “Defending against disinformation is not only a matter of national security, but also about protecting democracy,” says Kao.

  • New algorithms show accuracy, reliability in gauging unconsciousness under general anesthesia

    Anesthetic drugs act on the brain, but most anesthesiologists rely on heart rate, respiratory rate, and movement to infer whether surgery patients remain unconscious to the desired degree. In a new study, a research team based at MIT and Massachusetts General Hospital shows that a straightforward artificial intelligence approach, attuned to the kind of anesthetic being used, can yield algorithms that assess unconsciousness in patients based on brain activity with high accuracy and reliability.

    “One of the things that is foremost in the minds of anesthesiologists is ‘Do I have somebody who is lying in front of me who may be conscious and I don’t realize it?’ Being able to reliably maintain unconsciousness in a patient during surgery is fundamental to what we do,” says senior author Emery N. Brown, the Edward Hood Taplin Professor in The Picower Institute for Learning and Memory and the Institute for Medical Engineering and Science at MIT, and an anesthesiologist at MGH. “This is an important step forward.”

    More than providing a good readout of unconsciousness, Brown adds, the new algorithms offer the potential to allow anesthesiologists to maintain it at the desired level while using less drug than they might administer when depending on less direct, accurate, and reliable indicators. That, in turn, can improve patients’ post-operative outcomes, for instance by reducing the risk of delirium.

    “We may always have to be a little bit ‘overboard,’” says Brown, who is also a professor at Harvard Medical School. “But can we do it with sufficient accuracy so that we are not dosing people more than is needed?”

    Used to drive an infusion pump, for instance, algorithms could help anesthesiologists precisely throttle drug delivery to optimize a patient’s state and the doses they are receiving.

    Artificial intelligence, real-world testing

    To develop the technology to do so, postdocs John Abel and Marcus Badgeley led the study, published in PLOS ONE, in which they trained machine learning algorithms on a remarkable dataset the lab gathered back in 2013. In that study, 10 healthy volunteers in their 20s underwent anesthesia with the commonly used drug propofol. As the dose was methodically raised using computer-controlled delivery, the volunteers were asked to respond to a simple request until they couldn’t anymore. Then, as the dose was later lessened and they regained consciousness, they became able to respond again. All the while, neural rhythms reflecting their brain activity were recorded with electroencephalogram (EEG) electrodes, providing a direct, real-time link between measured brain activity and exhibited unconsciousness.

    In the new work, Abel, Badgeley, and the team trained versions of their AI algorithms, based on different underlying statistical methods, on more than 33,000 2-second-long snippets of EEG recordings from seven of the volunteers. This way the algorithms could “learn” the difference between EEG readings predictive of consciousness and unconsciousness under propofol. Then the researchers tested the algorithms in three ways.
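
    The study’s exact features and statistical models aren’t reproduced here, but the overall recipe can be sketched with synthetic signals: compute band powers from each 2-second snippet (propofol unconsciousness is marked by strong slow-delta and alpha rhythms) and fit an off-the-shelf classifier.

    ```python
    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    FS = 250                      # sampling rate (Hz); 2-s snippet = 500 samples

    def synth_eeg(conscious, n=2 * FS):
        """Toy EEG: unconsciousness under propofol adds slow-delta (~1 Hz)
        and alpha (~10 Hz) oscillations on top of background activity."""
        t = np.arange(n) / FS
        x = rng.normal(size=n)
        if not conscious:
            x += 3 * np.sin(2 * np.pi * 1 * t + rng.uniform(0, 2 * np.pi))
            x += 2 * np.sin(2 * np.pi * 10 * t + rng.uniform(0, 2 * np.pi))
        return x

    def band_powers(x):
        f, p = welch(x, fs=FS, nperseg=256)
        bands = [(0.5, 4), (4, 8), (8, 12), (12, 25)]  # delta/theta/alpha/beta
        return [np.log(p[(f >= lo) & (f < hi)].mean()) for lo, hi in bands]

    X = np.array([band_powers(synth_eeg(c)) for c in [True] * 300 + [False] * 300])
    y = np.array([1] * 300 + [0] * 300)                # 1 = conscious

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print("held-out accuracy:", clf.score(X_te, y_te))
    ```

    On this toy data the classifier separates the two states almost perfectly; the real study’s challenge was showing that the same recipe held up on noisy operating-room recordings.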

    First, they checked whether their three most promising algorithms accurately predicted unconsciousness when applied to EEG activity recorded from the other three volunteers of the 2013 study. They did.

    Then they used the algorithms to analyze EEG recorded from 27 real surgery patients who received propofol for general anesthesia. Even though the algorithms were now being applied to data gathered in a “noisier” real-world surgical setting, where the rhythms were also being measured with different equipment, they still distinguished unconsciousness with higher accuracy than other studies have shown. The authors even highlight one case in which the algorithms detected a patient’s decreasing level of unconsciousness several minutes before the attending anesthesiologist did, meaning that if the system had been in use during the surgery itself, it could have provided an accurate and helpful early warning.

    As a third test, the team applied the algorithms to EEG recordings from 17 surgery patients who were anesthetized with sevoflurane. Though sevoflurane is different from propofol and is inhaled rather than infused, it works in a similar manner, by binding to the same GABA-A receptors on the same key types of brain cells. The team’s algorithms again performed with high, though somewhat-reduced accuracy, suggesting that their ability to classify unconsciousness carried over reliably to another anesthetic drug that works in a similar way.

    The ability to predict unconsciousness across different drugs with the same mechanism of action is key, the authors said. One of the main flaws with current EEG-based systems for monitoring consciousness, they said, is that they don’t distinguish among drug classes, even though different categories of anesthesia drugs work in very different ways, producing distinct EEG patterns. They also don’t adequately account for known age differences in brain response to anesthesia. These limitations on their accuracy have also limited their clinical use.

    In the new study, while the algorithms trained on 20-somethings applied well to cohorts of surgery patients whose average age skewed significantly older and varied more widely, the authors acknowledge that they want to train algorithms distinctly for use with children or seniors. They can also train new algorithms to apply specifically for other kinds of drugs with different mechanisms of action. Altogether, a suite of well-trained and attuned algorithms could provide high accuracy that accounts for patient age and the drug in use.

    Abel says the team’s approach of framing the problem as a matter of predicting consciousness via EEG for a specific class of drugs made the machine learning approach very simple to implement and extend.

    “This is a proof of concept showing that now we can go and say let’s look at an older population or let’s look at a different kind of drug,” he says. “Doing this is simple if you set it up the right way.”

    The resulting algorithms aren’t even computationally demanding. The authors noted that for a given 2 seconds of EEG data, the algorithms could make an accurate prediction of consciousness in less than a tenth of a second running on just a standard MacBook Pro computer.

    The lab is already building on the findings to refine the algorithms further, Brown says. He says he also wants to expand testing to hundreds more cases to further confirm their performance, and also to determine whether wider distinctions may begin to emerge among the different underlying statistical models the team employed.

    In addition to Brown, Abel and Badgeley, the paper’s other authors are Benyamin Meschede-Krasa, Gabriel Schamberg, Indie Garwood, Kimaya Lecamwasam, Sourish Chakravarty, David Zhou, Matthew Keating, and Patrick Purdon.

    Funding for the study came from the National Institutes of Health, The JPB Foundation, a Guggenheim Fellowship for Applied Mathematics, and Massachusetts General Hospital.

  • Slender robotic finger senses buried items

    Over the years, robots have gotten quite good at identifying objects — as long as they’re out in the open.

    Discerning buried items in granular material like sand is a taller order. To do that, a robot would need fingers that were slender enough to penetrate the sand, mobile enough to wriggle free when sand grains jam, and sensitive enough to feel the detailed shape of the buried object.

    MIT researchers have now designed a sharp-tipped robot finger equipped with tactile sensing to meet the challenge of identifying buried objects. In experiments, the aptly named Digger Finger was able to dig through granular media such as sand and rice, and it correctly sensed the shapes of submerged items it encountered. The researchers say the robot might one day perform various subterranean duties, such as finding buried cables or disarming buried bombs.

    The research will be presented at the next International Symposium on Experimental Robotics. The study’s lead author is Radhen Patel, a postdoc in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Co-authors include CSAIL PhD student Branden Romero, Harvard University PhD student Nancy Ouyang, and Edward Adelson, the John and Dorothy Wilson Professor of Vision Science in CSAIL and the Department of Brain and Cognitive Sciences.

    Seeking to identify objects buried in granular material — sand, gravel, and other types of loosely packed particles — isn’t a brand-new quest. Previously, researchers have used technologies that sense the subterranean from above, such as ground-penetrating radar or ultrasonic vibrations. But these techniques provide only a hazy view of submerged objects. They might struggle to differentiate rock from bone, for example.

    “So, the idea is to make a finger that has a good sense of touch and can distinguish between the various things it’s feeling,” says Adelson. “That would be helpful if you’re trying to find and disable buried bombs, for example.” Making that idea a reality meant clearing a number of hurdles.

    The team’s first challenge was a matter of form: The robotic finger had to be slender and sharp-tipped.

    In prior work, the researchers had used a tactile sensor called GelSight. The sensor consisted of a clear gel covered with a reflective membrane that deformed when objects pressed against it. Behind the membrane were three colors of LED lights and a camera. The lights shone through the gel and onto the membrane, while the camera collected the membrane’s pattern of reflection. Computer vision algorithms then extracted the 3D shape of the contact area where the soft finger touched the object. The contraption provided an excellent sense of artificial touch, but it was inconveniently bulky.
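
    The optics behind GelSight, photometric stereo, is simple enough to sketch: with three known light directions and a roughly Lambertian membrane, the three intensities measured at each pixel are a linear function of the surface normal, so inverting one 3x3 matrix recovers the contact geometry. The numbers below are illustrative, not the sensor’s actual calibration.

    ```python
    import numpy as np

    # Unit directions of three illuminants (hypothetical calibration).
    L = np.array([
        [ 0.8,  0.0, 0.6],
        [-0.4,  0.7, 0.6],
        [-0.4, -0.7, 0.6],
    ])

    # A gentle bump pressed into the gel membrane.
    h, w = 64, 64
    yy, xx = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    depth = 0.05 * np.exp(-4 * (xx**2 + yy**2))

    # Surface normals from the depth map's gradients.
    gy, gx = np.gradient(depth)
    normals = np.dstack([-gx, -gy, np.ones((h, w))])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)

    # Simulate the three images the camera sees (Lambertian: I = max(n . L, 0)).
    images = np.clip(normals @ L.T, 0.0, None)        # shape (h, w, 3)

    # Invert the 3x3 lighting system per pixel to recover the normals; the
    # contact shape then follows by integrating the recovered normal field.
    recovered = images @ np.linalg.inv(L).T
    recovered /= np.linalg.norm(recovered, axis=2, keepdims=True)
    print("max normal error:", np.abs(recovered - normals).max())
    ```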

    For the Digger Finger, the researchers slimmed down their GelSight sensor in two main ways. First, they changed the shape to be a slender cylinder with a beveled tip. Next, they ditched two-thirds of the LED lights, using a combination of blue LEDs and colored fluorescent paint. “That saved a lot of complexity and space,” says Ouyang. “That’s how we were able to get it into such a compact form.” The final product featured a device whose tactile sensing membrane was about 2 square centimeters, similar to the tip of a finger.

    With size sorted out, the researchers turned their attention to motion, mounting the finger on a robot arm and digging through fine-grained sand and coarse-grained rice. Granular media have a tendency to jam when numerous particles become locked in place. That makes it difficult to penetrate. So, the team added vibration to the Digger Finger’s capabilities and put it through a battery of tests.

    “We wanted to see how mechanical vibrations aid in digging deeper and getting through jams,” says Patel. “We ran the vibrating motor at different operating voltages, which changes the amplitude and frequency of the vibrations.” They found that rapid vibrations helped “fluidize” the media, clearing jams and allowing for deeper burrowing — though this fluidizing effect was harder to achieve in sand than in rice.

    They also tested various twisting motions in both the rice and sand. Sometimes, grains of each type of media would get stuck between the Digger Finger’s tactile membrane and the buried object it was trying to sense. When this happened with rice, the trapped grains were large enough to completely obscure the shape of the object, though the occlusion could usually be cleared with a little robotic wiggling. Trapped sand was harder to clear, though the grains’ small size meant the Digger Finger could still sense the general contours of the target object.

    Patel says that operators will have to adjust the Digger Finger’s motion pattern for different settings “depending on the type of media and on the size and shape of the grains.” The team plans to keep exploring new motions to optimize the Digger Finger’s ability to navigate various media.

    Adelson says the Digger Finger is part of a program extending the domains in which robotic touch can be used. Humans use their fingers amidst complex environments, whether fishing for a key in a pants pocket or feeling for a tumor during surgery. “As we get better at artificial touch, we want to be able to use it in situations when you’re surrounded by all kinds of distracting information,” says Adelson. “We want to be able to distinguish between the stuff that’s important and the stuff that’s not.”

    Funding for this research was provided, in part, by the Toyota Research Institute through the Toyota-CSAIL Joint Research Center; the Office of Naval Research; and the Norwegian Research Council.

  • Twelve from MIT awarded 2021 Fulbright Fellowships

    Twelve MIT student affiliates have won fellowships for the Fulbright 2021-22 grant year. Their host country destinations include Brazil, Iceland, India, the Netherlands, New Zealand, Norway, South Korea, Spain, and Taiwan, where they will conduct research, earn a graduate degree, or teach English.

    Sponsored by the U.S. Department of State, the Fulbright U.S. Student Program offers opportunities for American student scholars in over 160 countries. Last fall, Fulbright received a record number of applications, making this the most competitive cycle in the 75-year history of the program.

    Jenny Chan is a senior studying mechanical engineering. Growing up in Philadelphia as the child of Vietnamese and Cambodian immigrants gave her an appreciation for how education could be used to uplift others. This led to her joining many activities that would continue to ignite her passion for education, including CodeIt, Global Teaching Labs, Full STEAM Ahead, and DynaMIT. At MIT, Chan also enjoys holding Friday night events with SaveTFP, sailing on the Charles River, and dancing as a member of DanceTroupe. Her Fulbright grant will take her to Taiwan, where she will serve as an English teaching assistant.

    Gretchen Eggers ’20 graduated with double majors in brain and cognitive sciences and computer science. As a Fulbright student in Brazil, Eggers will head to the Arts and Artificial Intelligence group at the University of São Paulo to research graffiti, street art, and the design of creative artificial intelligence. With a lifelong passion for painting and the arts, Eggers is excited to spend time with and learn about mural painting from local artists in São Paulo. Upon completing her Fulbright, Eggers plans to pursue a PhD in human-computer interaction.

    Miki Hansen is a senior majoring in mechanical engineering. As the winner of the Delft University of Technology’s Industrial Design Engineering Award, she will pursue an MS in integrated product design at TU Delft in the Netherlands. In tandem with her studies, she hopes to conduct research into sustainable product design for a circular economy. At MIT, Hansen was involved in Design for America, Pi Tau Sigma (MechE Honor Society), DanceTroupe, the MissBehavior dance team, and Alpha Chi Omega. After completing Fulbright, Hansen plans on working as a product designer focused on sustainable materials and packaging.

    Olivia Wynne Houck is a doctoral student in the History, Theory, and Criticism of Architecture program. She focuses on urban planning in the 20th century, with an interest in the intersections of transportation, economic, and diplomatic policies in Iceland, the United States, and Sweden. She also conducts research on infrastructure in the Arctic. As a Fulbright National Science Foundation Arctic Research Award recipient, Houck will be hosted by the political science department at the University of Iceland, where she will pursue archival research on Route 1, the ring road that encircles Iceland. Houck has also received a fellowship from the American-Scandinavian Foundation. 

    Laura Huang is a senior majoring in mechanical engineering. At the National Taiwan University of Science and Technology, Huang will combine engineering and art to develop an assistive calligraphy robot to better understand human-computer interaction. At MIT, she has done research with the Human Computer Interaction Engineering group in the Computer Science and Artificial Intelligence Laboratory, and has helped run assistive technology workshops in India and Saudi Arabia. Outside of research, Huang creates art, plays with the women’s volleyball club, and leads STEM educational outreach through MIT CodeIt and Global Teaching Labs. While in Taiwan, she hopes to continue STEM outreach, explore the culinary scene, and learn calligraphy.

    Teis Jorgensen graduates in June with an MS from the Integrated Design and Management program. He is a designer, researcher, and behavioral scientist with seven years’ experience designing products and services with a social mission. His passion is designing games that inspire and challenge players to be the best version of themselves. For his Fulbright research grant in Kerala, India, Teis will interview women about their challenges balancing home and professional responsibilities. His goal is to use these interviews as the inspiration for the design of a board game that shares their stories and ultimately helps remove barriers to female employment.

    Meghana Kamineni will graduate this spring with a major in computer science and engineering and a minor in biology. At the University of Oslo in Norway, Kamineni will implement statistical models to understand and predict the impact of vaccinations and other interventions on the spread of Covid-19. At MIT, she pursued interests in computational research for health care through work on the bacterial infection C. difficile in the laboratory of Professor John Guttag. Outside of research, she has been involved with STEM educational outreach for middle school students through dynaMIT and MIT CodeIt, and hopes to continue outreach in Norway. After Fulbright, Kamineni plans to attend medical school.

    Andrea Shinyoung Kim will graduate in June with an MS in comparative media studies. Her master’s thesis, advised by D. Fox Harrell, looks at the relation between digital avatars and personhood in social virtual reality. Her Fulbright research in South Korea will investigate how virtual reality can facilitate cross-cultural learning and live performance art. Kim will observe Korean mask dances and their craft to better inform the design of online virtual worlds. She will collaborate with her hosts at the Seoul Arts Institute and CultureHub. After Fulbright, she plans to pursue a PhD to further explore her interdisciplinary interests and arts praxis.

    Kevin Lujan Lee is a PhD candidate in the Department of Urban Studies and Planning. In Aotearoa/New Zealand, he will study the transnational processes shaping how low-wage Pacific Islander workers navigate the institutions of labor market regulation. This will comprise one-half of his broader dissertation project — a comparative study of Indigenous Pacific Islanders and low-wage work in 21st-century empires. His research is only made possible by activists in the U.S. immigrant labor movement and global LANDBACK movement, who envision a world beyond labor precarity and Indigenous dispossession. Lee hopes to pursue an academic career to support the work of these movements.

    Anjali Nambrath is a senior double majoring in physics and mathematics. She has worked on projects related to nuclear structure, neutrino physics, and dark matter detection at MIT and at two national labs. At MIT, she was president of the Society for Physics Students, a member of the MIT Shakespeare Ensemble, an organizer of HackMIT, and a teacher for the MIT Educational Studies Program. For her Fulbright grant to India, Nambrath will be based at the Tata Institute for Fundamental Research in Mumbai, where she will work on models of neutrino production and interaction in supernovae. After Fulbright, Nambrath will begin graduate school in physics at the University of California at Berkeley. 

    Abby Stein will graduate in June with a double major in physics and electrical engineering. At MIT, she researched communication theory in the Research Laboratory of Electronics, and optical network hardware at Lincoln Laboratory. Stein discovered an interest in international research and education through her MISTI experience in Chile, where she studied optics for astronomy, and through teaching engineering workshops in Israel with MIT’s Global Teaching Labs. For her Fulbright, Stein will conduct research on quantum optical satellite networks at the Institute of Photonic Sciences in Barcelona, Spain. After completing Fulbright, Stein will head to Stanford University to pursue a PhD in applied physics. 

    Tony Terrasa is a senior majoring in mechanical engineering and music. As a Fulbright English teaching assistant in Spain, he will be teaching in Galicia. Previously, Terrasa taught English, math, and physics to secondary school students in Lübeck, Germany, as part of the MIT Global Teaching Labs program. He also taught for three years in the English as a Second Language Program for MIT Facilities Department employees. An MIT Emerson Fellow in jazz saxophone, he looks forward to listening to and learning about Galician music traditions while sharing some of his own.

    MIT students and recent alumni interested in applying to the Fulbright U.S. Student Program should contact Julia Mongo in the Office of Distinguished Fellowships at MIT Career Advising and Professional Development. Students are also supported in the process by the Presidential Committee on Distinguished Fellowships.

  • There’s a symphony in the antibody protein the body makes to neutralize the coronavirus

    The pandemic reached a new milestone this spring with the rollout of Covid-19 vaccines. MIT Professor Markus Buehler marked the occasion by writing “Protein Antibody in E Minor,” an orchestral piece performed last month by South Korea’s Lindenbaum Festival Orchestra. The room was empty, but the message was clear.

    “It’s a hopeful piece as we enter this new phase in the pandemic,” says Buehler, the McAfee Professor of Engineering at MIT, and also a composer of experimental music.

    “This is the beginning of a musical healing project,” adds Hyung Joon Won, a Seoul-based violinist who initiated the collaboration.

    “Protein Antibody in E Minor” is the sequel to “Viral Counterpoint of the Spike Protein,” a piece Buehler wrote last spring during the first wave of coronavirus infections. Picked up by the media, “Viral Counterpoint” went global, like the virus itself, reaching Won, who at the time was performing for patients hospitalized with Covid-19. Won became the first in a series of artists to approach Buehler about collaborating.

    At Won’s request, Buehler adapted “Viral Counterpoint” for the violin. This spring, the two musicians teamed up again, with Buehler translating the coronavirus-attacking antibody protein into a score for a 10-piece orchestra.

    The two pieces are as different as the proteins they are based on. “Protein Antibody” is harmonious and playful; “Viral Counterpoint” is foreboding, even sinister. “Protein Antibody,” which is based on the part of the protein that attaches to SARS-CoV-2, runs for five minutes; “Viral Counterpoint,” which represents the virus’s entire spike protein, meanders for 50.

    The antibody protein’s straightforward shape lent itself to a classical composition, says Buehler. The intricate folds of the spike protein, by contrast, required a more complex representation.

    Both pieces use a method that Buehler devised for translating protein structures into musical scores. Both proteins, antibody and pathogen, are built from the same 20 amino acids, and each amino acid is assigned a unique vibrational tone. Proteins, like other molecules, vibrate at different frequencies, a phenomenon Buehler has used to “see” the virus and its variants, capturing their complex entanglements in a musical score.
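
    A toy version of the mapping (hypothetical pitch assignments, far simpler than Buehler’s vibration-derived tones) assigns each of the 20 amino acids its own note and reads a sequence as a melody:

    ```python
    # 20 standard amino acids, one-letter codes, mapped to 20 consecutive pitches.
    AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
    BASE_MIDI = 48                                 # C3

    def residue_to_midi(residue: str) -> int:
        return BASE_MIDI + AMINO_ACIDS.index(residue)

    def midi_to_hz(note: int) -> float:
        return 440.0 * 2 ** ((note - 69) / 12)     # equal temperament

    # A short, made-up peptide fragment standing in for an antibody sequence.
    fragment = "EVQLVESGGGLVQPGG"
    for aa in fragment:
        note = residue_to_midi(aa)
        print(f"{aa} -> MIDI {note} ({midi_to_hz(note):.1f} Hz)")
    ```

    Buehler’s actual scores are derived from the proteins’ vibrational spectra and folded structure rather than from a fixed lookup table like this one.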

    In work with the MIT-IBM Watson AI Lab and PhD student Yiwen Hu, Buehler discovered that the proteins that stud SARS-CoV-2 vibrate less frequently and less intensely than those of its more lethal cousins, SARS and MERS. He hypothesizes that the viruses use vibrations to jimmy their way into cells; the more energetic the protein, the deadlier the virus or mutation.

    Video: “The molecular mechanics of the pandemic: MERS, SARS and COVID-19”

    “As the coronavirus continues to mutate, this method gives us another way of studying the variants and the threat they pose,” says Buehler. “It also shows the importance of considering proteins as vibrating objects in their biological context.”

    Translating proteins into music is part of Buehler’s larger work designing new proteins by borrowing ideas from nature and harnessing the power of AI. He has trained deep-learning algorithms to both translate the structure of existing proteins into their vibrational patterns and run the operation in reverse to infer structure from vibrational patterns. With these tools, he hopes to take existing proteins and create entirely new ones targeted for specific technological or medical needs.

    The process of turning science into art is like finding another “microscope” to observe nature, says Buehler. It has also opened his work to a broader audience. More than a year after “Viral Counterpoint’s” debut, the piece has racked up more than a million downloads on SoundCloud. Some listeners were so moved they asked Buehler for permission to create their own interpretation of his work. In addition to Won, the violinist in South Korea, the piece was picked up by a ballet company in South Africa, a glass artist in Oregon, and a dance professor in Michigan, among others.

    A “suite” of homespun ballets

    The Joburg Ballet shut down last spring with the rest of South Africa. But amid the lockdown, “Viral Counterpoint” reached Iain MacDonald, artistic director of Joburg Ballet. Then, as now, the company’s dancers were quarantined at home. Putting on a traditional ballet was impossible, so MacDonald improvised; he assigned each dancer a fragment of Buehler’s music and asked them to choreograph a response. They performed from home as friends and family filmed from their cellphones. Stitched together, the segments became “The Corona Suite,” a six-minute piece that aired on YouTube last July.

    In it, the dancers twirl and pirouette on a set of unlikely stages: in the stairwell of an apartment building, on a ladder in a garden, and beside a glimmering swimming pool. With no access to costumes, the dancers made do with their own leotards, tights, and even boxer briefs, in whatever shade of red they could find. “Red became the socially-distant cohesive thread that tied the company together,” says MacDonald.

    MacDonald says the piece was intended as a public service announcement, to encourage people to stay home. It was also meant to inspire hope: that the company’s dancers would return to the stage, stay mentally and physically fit, and that everyone would pull through. “We all hoped that the virus would not cause harm to our loved ones,” he says. “And that we, as a people, could come out of this stronger and united than ever before.” 

    A Covid “sonnet” cast in glass

    Jerri Bartholomew, a microbiologist at Oregon State University, was supposed to spend her sabbatical last year at a lab in Spain. When Covid intervened, she retreated to the glass studio in her backyard. There, she focused on her other passion: making art from her research on fish parasites. She had previously worked with musicians to translate her own data into music; when she heard “Viral Counterpoint” she was moved to reinterpret Buehler’s music as glass art. 

    She found his pre-print paper describing the sonification process, digitized the figures, and transferred them to silkscreen. She then printed them on a sheet of glass, fusing and casting the images to create a series of increasingly abstract representations. After, she spent hours polishing each glass work. “It’s a lot of grinding,” she says. Her favorite piece, Covid Sonnet, shows the spike protein flowing into Buehler’s musical score. “His musical composition is an abstraction,” she says. “I hope people will be curious about why it looks and sounds the way it does. It makes the science more interesting.”

    Translating a lethal virus into movement

    Months into the pandemic, Covid’s impact on immigrants in the United States was becoming clear; Rosely Conz, a choreographer and native of Brazil, wanted to channel her anxiety into art. When she heard “Viral Counterpoint,” she knew she had a score for her ballet. She would make the virus visible, she decided, in the same way Buehler had made it audible. “I looked for aspects of the virus that could be applied to movement — its machine-like characteristics, its transfer from one performer to another, its protein spike that makes it so infectious,” she says.

    “Virus” debuted this spring at Alma College, a liberal arts school in rural Michigan where Conz teaches. On a dark stage shimmering with red light, her students leaped and glided in black pointe shoes and face masks. Their elbows and legs jabbed at the air, almost robotically, as if to channel the ugliness of the virus. Those gestures were juxtaposed with “melting movements” that Conz says embody the humanity of the dancer. The piece is literally about the virus, but also about the constraints of making art in a crisis; the dancers maintained six feet of distance throughout. “I always tell my students that in choreography we should use limitation as possibility, and that is what I tried to do,” she says.

    Back at MIT, Buehler is planning several more “Protein Antibody” performances with Won this year. In the lab, he and Hu, his PhD student, are expanding their study of the molecular vibrations of proteins to see if they might have therapeutic value. “It’s the next step in our quest to better understand the molecular mechanics of life,” he says.

  • Jeremy Kepner named SIAM Fellow

    Jeremy Kepner, a Lincoln Laboratory Fellow in the Cyber Security and Information Sciences Division and a research affiliate of the MIT Department of Mathematics, was named to the 2021 class of fellows of the Society for Industrial and Applied Mathematics (SIAM). The fellow designation honors SIAM members who have made outstanding contributions to the 17 mathematics-related research areas that SIAM promotes through its publications, conferences, and community of scientists. Kepner was recognized for “contributions to interactive parallel computing, matrix-based graph algorithms, green supercomputing, and big data.”

    Since joining Lincoln Laboratory in 1998, Kepner has worked to expand the capabilities of computing at the laboratory and throughout the computing community. He has published broadly, served on technical committees of national conferences, and contributed to regional efforts to provide access to supercomputing.

    “Jeremy has had two decades of contributing to the important field of high performance computing, including both supercomputers and embedded systems. He has also made a seminal impact on supercomputer system research. He invented a unique way to do signal processing on sparse data, critically important for parsing through social networks and leading to more efficient use of parallel computing environments,” says David Martinez, now a Lincoln Laboratory fellow and previously a division head who hired and then worked with Kepner for many years.

    At Lincoln Laboratory, Kepner originally led the U.S. Department of Defense (DoD) High Performance Embedded Computing Software Initiative that created the Vector, Signal and Image Processing Library standard that many DoD sensor systems have utilized. In 1999, he invented the MatlabMPI software and in 2001 was the architect of pMatlab (Parallel Matlab Toolbox) that has been used by thousands of Lincoln Laboratory staff and scientists and engineers worldwide. In 2011, the Parallel Vector Tile Optimizing Library (PVTOL), developed under Kepner’s direction, won an R&D 100 Award.

    “Jeremy has been a world leader in moving the state of high performance computing forward for the past two decades,” says Stephen Rejto, head of Lincoln Laboratory’s Cyber Security and Information Sciences Division. “His vision and drive have been invaluable to the laboratory’s mission.”

    Kepner led a consortium to pioneer the Massachusetts Green High Performance Computing Center, the world’s largest and, because of its use of hydropower, “greenest” open research data center, which is enabling a dramatic increase in MIT’s computing capabilities while reducing its CO2 footprint. He led the establishment of the current Lincoln Laboratory Supercomputing Center, which boasts New England’s most powerful supercomputer. In 2019, he helped found the U.S. Air Force-MIT AI Accelerator, which leverages the expertise and resources of MIT and the Air Force to advance research in artificial intelligence.

    “These individual honors are a recognition of the achievements of our entire Lincoln team to whom I am eternally indebted,” Kepner says.

    Kepner’s recent work has been in graph analytics and big data. He created a novel database management language and schema (Dynamic Distributed Dimensional Data Model, or D4M), which is widely used in both Lincoln Laboratory and government big data systems.

    His publications range across many fields — data mining, databases, high performance computing, graph algorithms, cybersecurity, visualization, cloud computing, random matrix theory, abstract algebra, and bioinformatics. Among his works are two SIAM bestselling books, “Parallel MATLAB for Multicore and Multinode Computers” and “Graph Algorithms in the Language of Linear Algebra.” In 2018, he and coauthor Hayden Jananthan published “Mathematics of Big Data” as one of the books in the MIT Lincoln Laboratory series put out by MIT Press.

    Kepner, who joined SIAM during his graduate days at Princeton University, has not only published books and articles through SIAM but also been involved with the SIAM community’s activities. He has served as vice chair of the SIAM International Conference on Data Mining; advises a SIAM student section; and enlisted SIAM’s affiliation with the High Performance Extreme (originally Embedded) Computing (HPEC) conference, in which he has had “an instrumental role in bringing together the high performance embedded computing community and which under his leadership became an IEEE conference in 2012,” according to Martinez, who founded the Lincoln Laboratory-hosted HPEC conference in 1997.

    Kepner is the first Lincoln Laboratory researcher to attain the rank of SIAM Fellow and the ninth from MIT.