More stories

  • Examining racial attitudes in virtual spaces through gaming

    The national dialogue on race has progressed powerfully and painfully in the past year, and issues of racial bias in the news have become ubiquitous. However, for over a decade, researchers from MIT’s Imagination, Computation, and Expression Laboratory (ICE Lab) have been developing systems to model, simulate, and analyze such issues of identity. 
    In recent years, video games and virtual reality (VR) experiences that address racial issues for educational or training purposes, such as “Walk a Mile in Digital Shoes” and “1000 Cut Journey,” have risen in popularity, coinciding with the rapid development of the academic field of serious or “impact” games.
    Now researchers from the ICE Lab, part of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the MIT Center for Advanced Virtuality, have updated a 2019 computational model to better understand our behavioral choices, by way of a video game simulation of a discriminatory racial encounter between a Black student and her white teacher. 
    A paper on the game will be presented this week at the 2020 Foundations of Digital Games conference.
    The system, which was informed by the social science research of collaborators at the University of Michigan’s Engaging, Managing, and Bonding through Race (EMBRace) lab, is grounded in Racial Encounter Coping Appraisal and Socialization Theory (RECAST). RECAST provides a way of understanding how racial socialization, or the way one has been taught to think about race, buffers the relationship between racial stress and coping.
    The game, called “Passage Home,” is used to help understand the attitudes of preK-12 educators, with the eventual goal of providing an innovative tool for clinicians to better understand the behavioral choices adolescents make when confronted with racial injustice.
    Following user studies conducted with the original version of Passage Home in 2019, the team worked with Riana Elyse Anderson, assistant professor in the Department of Health Behavior and Health Education at the University of Michigan’s School of Public Health, and Nkemka Anyiwo, vice provost and National Science Foundation postdoctoral fellow in the Graduate School of Education at the University of Pennsylvania, to iterate on the original prototype and bring it into closer alignment with RECAST theory. After creating the latest version of “Passage Home” VR, they sought to understand the opportunities and challenges of using it as a tool for capturing insights about how individuals perceive and respond to racialized encounters.
    Experiments from “Passage Home” revealed that players’ existing colorblind racial attitudes and their ethnic identity development hindered their ability to accurately interpret racist subtexts.
    The interactive game puts the player into the first-person perspective of “Tiffany,” a Black student who is falsely accused of plagiarism by her white female English teacher, “Mrs. Smith.” In the game, Mrs. Smith bases her accusation on the inherently racist belief that Black students are incapable of producing high-quality work.
    “There has been much focus on understanding the efficacy of these systems as interventions to reduce racial bias, but there’s been less attention on how individuals’ prior physical-world racial attitudes influence their experiences of such games about racial issues,” says MIT CSAIL PhD student Danielle Olson, lead author on the paper being presented this week.
    “Danielle Olson is at the forefront of computational modeling of social phenomena, including race and racialized experiences,” says her thesis supervisor D. Fox Harrell, professor of digital media and AI in CSAIL and director of the ICE Lab and MIT Center for Advanced Virtuality. “What is crucial about her dissertation research and system ‘Passage Home’ is that it does not only model race as a physical experience; rather, it simulates how people are socialized to think about race, which often has a more profound impact on their racial biases regarding others and themselves than merely what they look like.”
    Many mainstream strategies for portraying race in VR experiences are often rooted in negative racial stereotypes, and the questions are often focused on “right” and “wrong” actions. In contrast, with “Passage Home,” the researchers aimed to take into account the nuance and complexity of how people think about race, which involves systemic social structures, history, lived experiences, interpersonal interactions, and discourse.
    In the game, prior to the discriminatory interaction, the player is provided with a note that they (Tiffany) are academically high-achieving and did not commit plagiarism. The player is prompted to make a series of choices to capture their thoughts, feelings, and desired actions in response to the allegation.
    The player then chooses which internal thoughts are most closely aligned with their own, and the verbal responses, body language, or gesture they want to express. These combinations contribute to how the narrative unfolds. 
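    To give a rough sense of how such choice combinations can steer a branching narrative, here is a minimal sketch in Python. The scene names, choice structure, and branching table are hypothetical illustrations, not the ICE Lab’s actual implementation.

        # Minimal sketch of a choice-driven encounter (hypothetical names and
        # branches; not the actual "Passage Home" implementation). Each turn pairs
        # an internal appraisal with an outward expression, and the pair selects
        # the next scene.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Choice:
            appraisal: str   # the internal thought the player identifies with
            expression: str  # the verbal response, body language, or gesture chosen

        # Hypothetical branching table: (scene, appraisal, expression) -> next scene
        BRANCHES = {
            ("accusation", "this is unfair", "calmly explain"): "teacher_doubles_down",
            ("accusation", "this is unfair", "stay silent"): "sent_to_principal",
            ("accusation", "maybe I was careless", "apologize"): "grade_reduced",
        }

        def next_scene(current: str, choice: Choice, default: str = "encounter_ends") -> str:
            """Look up the scene that follows a given appraisal/expression pair."""
            return BRANCHES.get((current, choice.appraisal, choice.expression), default)

        if __name__ == "__main__":
            picked = Choice(appraisal="this is unfair", expression="calmly explain")
            print(next_scene("accusation", picked))  # -> teacher_doubles_down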
    One educator, for example, expressed that, “This situation could have happened to any student of any race, but the way [the student] was raised, she took it as being treated unfairly.” 
    The game makes it clear that the student did not cheat, and the student never complains of unfairness, so in this case the educator’s prior racial attitude results not only in misreading the situation, but in imputing to the student an attitude that was never there. (The team notes that many participants failed to recognize the racist nature of the comments because limited racial literacy inhibited them from decoding anti-Black subtexts.)
    The results of the game demonstrated statistically significant relationships within the following categories:
    • Competence (players’ feelings of skillfulness and success in the game): positively associated with unawareness of racial privilege
    • Negative affect (players’ feelings of boredom and monotony in the game): positively associated with unawareness of blatant racial issues
    • Empathy (players’ feelings of empathy towards Mrs. Smith, who is racially biased towards Tiffany): negatively associated with ethnic identity search, and positively associated with unawareness of racial privilege, blatant racial issues, and institutional discrimination
    • Perceived competence of Tiffany, the student: how well the player thought she handled the situation
    • Perceived unfairness of Mrs. Smith, the teacher: whether the player judged Mrs. Smith to have treated Tiffany unfairly

    “Even if developers create these games to attempt to encourage white educators to understand how racism negatively impacts their Black students, their prior worldviews may cause them to identify with the teacher who is the perpetrator of racial violence, not the student who is the target,” says Olson. “These results can aid developers in avoiding assumptions about players’ racial literacy by creating systems informed by evidence-based research on racial socialization and coping.” 
    While this work demonstrates a promising tool, the team notes that because racism exists at individual, cultural, institutional, and systemic levels, there are limits to which of those levels emergent technologies such as VR can reach and how much impact they can make.
    Future games could be personalized to attend to differences in players’ racial socialization and attitudes, rather than assuming players will interpret racialized content in a similar way. The hope is that improving players’ in-game experiences will increase the possibility of transformative learning for educators and aid in the pursuit of racial equity for students.
    This material is based upon work supported by the following grant programs: National Science Foundation Graduate Research Fellowship Program, the Ford Foundation Predoctoral Fellowship Program, the MIT Abdul Latif Jameel World Education Lab pK-12 Education Innovation Grant, and the International Chapter of the P.E.O. Scholar Award.

  • Helping robots avoid collisions

    George Konidaris still remembers his disheartening introduction to robotics.
    “When you’re a young student and you want to program a robot, the first thing that hits you is this immense disappointment at how much you can’t do with that robot,” he says.
    Most new roboticists want to program their robots to solve interesting, complex tasks — but it turns out that just moving them through space without colliding with objects is more difficult than it sounds.
    Fortunately, Konidaris is hopeful that future roboticists will have a more exciting start in the field. That’s because roughly four years ago, he co-founded Realtime Robotics, a startup that’s solving the “motion planning problem” for robots.
    The company has invented a solution that gives robots the ability to quickly adjust their path to avoid objects as they move to a target. The Realtime controller is a box that can be connected to a variety of robots and deployed in dynamic environments.
    “Our box simply runs the robot according to the customer’s program,” explains Konidaris, who currently serves as Realtime’s chief roboticist. “It takes care of the movement, the speed of the robot, detecting obstacles, collision detection. All [our customers] need to say is, ‘I want this robot to move here.’”
    Realtime’s key enabling technology is a unique circuit design that, when combined with proprietary software, has the effect of a plug-in motor cortex for robots. In addition to helping to fulfill the expectations of starry-eyed roboticists, the technology also represents a fundamental advance toward robots that can work effectively in changing environments.
    Helping robots get around
    Konidaris was not the first person to get discouraged about the motion planning problem in robotics. Researchers in the field have been working on it for 40 years. During a four-year postdoc at MIT, Konidaris worked with School of Engineering Professor in Teaching Excellence Tomas Lozano-Perez, a pioneer in the field who was publishing papers on motion planning before Konidaris was born.
    Humans take collision avoidance for granted. Konidaris points out that the simple act of grabbing a beer from the fridge actually requires a series of tasks such as opening the fridge, positioning your body to reach in, avoiding other objects in the fridge, and deciding where to grab the beer can.
    “You actually need to compute more than one plan,” Konidaris says. “You might need to compute hundreds of plans to get the action you want. … It’s weird how the simplest things humans do hundreds of times a day actually require immense computation.”
    In robotics, the motion planning problem revolves around the computational power required to carry out frequent tests as robots move through space. At each stage of a planned path, the tests help determine if various tiny movements will make the robot collide with objects around it. Such tests have inspired researchers to think up ever more complicated algorithms in recent years, but Konidaris believes that’s the wrong approach.
    “People were trying to make algorithms smarter and more complex, but usually that’s a sign that you’re going down the wrong path,” Konidaris says. “It’s actually not that common that super technically sophisticated techniques solve problems like that.”
    Konidaris left MIT in 2014 to join the faculty at Duke University, but he continued to collaborate with researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Duke is also where Konidaris met Realtime co-founders Sean Murray, Dan Sorin, and Will Floyd-Jones. In 2015, the co-founders collaborated to make a new type of computer chip with circuits specifically designed to perform the frequent collision tests required to move a robot safely through space. The custom circuits could perform operations in parallel to more efficiently test short motion collisions.
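    As a rough illustration of the kind of test that has to run constantly, the Python sketch below checks many interpolated poses along a candidate motion against circular obstacles. The geometry, names, and numbers are simplified assumptions, and Realtime’s hardware approach is far more sophisticated, but the per-pose checks are independent, which is exactly the sort of workload a dedicated parallel circuit can accelerate.

        # Simplified 2D sketch of edge collision checking (illustrative only; not
        # Realtime's method). A candidate motion is sampled into intermediate poses,
        # and each pose is tested against every obstacle. The per-pose tests are
        # independent, so they parallelize naturally.

        import numpy as np

        def poses_along_edge(start, goal, steps=100):
            """Linearly interpolate intermediate positions between two configurations."""
            t = np.linspace(0.0, 1.0, steps)[:, None]
            return (1 - t) * np.asarray(start, float) + t * np.asarray(goal, float)

        def edge_is_collision_free(start, goal, obstacles, robot_radius=0.1):
            """Return True if no interpolated pose overlaps any circular obstacle."""
            poses = poses_along_edge(start, goal)
            for center, radius in obstacles:
                dists = np.linalg.norm(poses - np.asarray(center, float), axis=1)
                if np.any(dists < radius + robot_radius):
                    return False
            return True

        if __name__ == "__main__":
            obstacles = [((0.5, 0.5), 0.2)]  # one circular obstacle at (0.5, 0.5)
            print(edge_is_collision_free((0, 0), (1, 1), obstacles))  # False: path crosses it
            print(edge_is_collision_free((0, 0), (1, 0), obstacles))  # True: path avoids it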
    “When I left MIT for Duke, one thing bugging me was this motion planning thing should really be solved by now,” Konidaris says. “It really did come directly out of a lot of experiences at MIT. I wouldn’t have been able to write a single paper on motion planning before I got to MIT.”
    The researchers founded Realtime in 2016 and quickly brought on robotics industry veteran Peter Howard MBA ’87, who currently serves as Realtime’s CEO and is also considered a co-founder.
    “I wanted to start the company in Boston because I knew MIT and a lot of robotics work was happening there,” says Konidaris, who moved to Brown University in 2016. “Boston is a hub for robotics. There’s a ton of local talent, and I think a lot of that is because MIT is here — PhDs from MIT became faculty at local schools, and those people started robotics programs. That network effect is very strong.”
    Removing robot restraints
    Today the majority of Realtime’s customers are in the automotive, manufacturing, and logistics industries. The robots using Realtime’s solution are doing everything from spot welding to making inspections to picking items from bins.
    After customers purchase Realtime’s control box, they load in a file describing the configuration of the robot’s work cell, information about the robot such as its end-of-arm tool, and the task the robot is completing. Realtime can also help optimally place the robot and its accompanying sensors around a work area. Konidaris says Realtime can shorten the process of deploying robots from an average of 15 weeks to one week.
    Once the robot is up and running, Realtime’s box controls its movement, giving it instant collision-avoidance capabilities.
    “You can use it for any robot,” Konidaris says. “You tell it where it needs to go and we’ll handle the rest.”
    Realtime is part of MIT’s Industrial Liaison Program (ILP), which helps companies make connections with larger industrial partners, and it recently joined ILP’s STEX25 startup accelerator.
    With a few large rollouts planned for the coming months, the Realtime team’s excitement is driven by the belief that solving a problem as fundamental as motion planning unlocks a slew of new applications for the robotics field.
    “What I find most exciting about Realtime is that we are a true technology company,” says Konidaris. “The vast majority of startups are aimed at finding a new application for existing technology; often, there’s no real pushing of the technical boundaries with a new app or website, or even a new robotics ‘vertical.’ But we really did invent something new, and that edge and that energy is what drives us. All of that feels very MIT to me.”

  • Monitoring sleep positions for a healthy rest

    MIT researchers have developed a wireless, private way to monitor a person’s sleep postures — whether snoozing on their back, stomach, or sides — using reflected radio signals from a small device mounted on a bedroom wall.
    The device, called BodyCompass, is the first home-ready, radio-frequency-based system to provide accurate sleep data without cameras or sensors attached to the body, according to Shichao Yue, who will introduce the system in a presentation at the UbiComp 2020 conference on Sept. 15. The PhD student has used wireless sensing to study sleep stages and insomnia for several years.
    “We thought sleep posture could be another impactful application of our system” for medical monitoring, says Yue, who worked on the project under the supervision of Professor Dina Katabi in the MIT Computer Science and Artificial Intelligence Laboratory. Studies show that stomach sleeping increases the risk of sudden death in people with epilepsy, he notes, and sleep posture could also be used to measure the progression of Parkinson’s disease as the condition robs a person of the ability to turn over in bed.
    In the future, people might also use BodyCompass to keep track of their own sleep habits or to monitor infant sleeping, Yue says: “It can be either a medical device or a consumer product, depending on needs.”
    Other authors on the conference paper, published in the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, include graduate students Yuzhe Yang and Hao Wang, and Katabi Lab affiliate Hariharan Rahul. Katabi is the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT.
    Restful reflections
    BodyCompass works by analyzing the reflection of radio signals as they bounce off objects in a room, including the human body. Similar to a Wi-Fi router attached to the bedroom wall, the device sends and collects these signals as they return through multiple paths. The researchers then map the paths of these signals, working backward from the reflections to determine the body’s posture.
    For this to work, however, the scientists needed a way to figure out which of the signals were bouncing off the sleeper’s body, and not bouncing off the mattress or a nightstand or an overhead fan. Yue and his colleagues realized that their past work in deciphering breathing patterns from radio signals could solve the problem.
    Signals that bounce off a person’s chest and belly are uniquely modulated by breathing, they concluded. Once that breathing signal was identified as a way to “tag” reflections coming from the body, the researchers could analyze those reflections compared to the position of the device to determine how the person was lying in bed. (If a person was lying on her back, for instance, strong radio waves bouncing off her chest would be directed at the ceiling and then to the device on the wall.) “Identifying breathing as coding helped us to separate signals from the body from environmental reflections, allowing us to track where informative reflections are,” Yue says.
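    A toy Python sketch of that tagging idea appears below; the signals, sampling rate, and threshold are made up for illustration, and the actual BodyCompass processing is considerably more involved.

        # Toy sketch of "tagging" body reflections by their breathing modulation
        # (synthetic signals; the real BodyCompass pipeline is more involved). A
        # reflection whose amplitude oscillates at a plausible breathing rate
        # (~0.1-0.5 Hz) is treated as coming from the sleeper; the rest is clutter.

        import numpy as np

        def breathing_power(signal, fs, low=0.1, high=0.5):
            """Fraction of non-DC spectral power that falls in the breathing band."""
            sig = signal - signal.mean()
            spectrum = np.abs(np.fft.rfft(sig)) ** 2
            freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
            band = (freqs >= low) & (freqs <= high)
            total = spectrum[1:].sum()
            return spectrum[band].sum() / total if total > 0 else 0.0

        if __name__ == "__main__":
            fs = 10.0                                          # samples per second
            t = np.arange(0, 60, 1 / fs)                       # one minute of data
            chest = 1.0 + 0.2 * np.sin(2 * np.pi * 0.25 * t)   # ~15 breaths/min modulation
            wall = 1.0 + 0.01 * np.random.randn(len(t))        # static clutter plus noise
            print(breathing_power(chest, fs) > 0.5)   # True: tagged as a body reflection
            print(breathing_power(wall, fs) > 0.5)    # False: treated as clutter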
    Reflections from the body are then analyzed by a customized neural network to infer how the body is angled in sleep. Because the neural network defines sleep postures according to angles, the device can distinguish a sleeper lying on the right side from one who has merely tilted slightly to the right. This kind of fine-grained analysis would be especially important for epilepsy patients, for whom sleeping in a prone position is correlated with sudden unexpected death, Yue says.
    BodyCompass has some advantages over other ways of monitoring sleep posture, such as installing cameras in a person’s bedroom or attaching sensors directly to the person or their bed. Sensors can be uncomfortable to sleep with, and cameras reduce a person’s privacy, Yue notes. “Since we will only record essential information for detecting sleep posture, such as a person’s breathing signal during sleep,” he says, “it is nearly impossible for someone to infer other activities of the user from this data.”
    An accurate compass
    The research team tested BodyCompass’ accuracy over 200 hours of sleep data from 26 healthy people sleeping in their own bedrooms. At the start of the study, the subjects wore two accelerometers (sensors that detect movement) taped to their chest and stomach, to train the device’s neural network with “ground truth” data on their sleeping postures.
    BodyCompass was most accurate — predicting the correct body posture 94 percent of the time — when the device was trained on a week’s worth of data. One night’s worth of training data yielded accurate results 87 percent of the time. BodyCompass could achieve 84 percent accuracy with just 16 minutes’ worth of data collected, when sleepers were asked to hold a few usual sleeping postures in front of the wireless sensor.
    Along with epilepsy and Parkinson’s disease, BodyCompass could prove useful in treating patients vulnerable to bedsores and sleep apnea, since both conditions can be alleviated by changes in sleeping posture. Yue has his own interest as well: He suffers from migraines that seem to be affected by how he sleeps. “I sleep on my right side to avoid headache the next day,” he says, “but I’m not sure if there really is any correlation between sleep posture and migraines. Maybe this can help me find out if there is any relationship.”
    For now, BodyCompass is a monitoring tool, but it may be paired someday with an alert that can prod sleepers to change their posture. “Researchers are working on mattresses that can slowly turn a patient to avoid dangerous sleep positions,” Yue says. “Future work may combine our sleep posture detector with such mattresses to move an epilepsy patient to a safer position if needed.”

  • Helping companies prioritize their cybersecurity investments

    One reason that cyberattacks have continued to grow in recent years is that we never actually learn all that much about how they happen. Companies fear that reporting attacks will tarnish their public image, and even those who do report them don’t share many details because they worry that their competitors will gain insight into their security practices. 
    “It’s really a nice gift that we’ve given to cyber-criminals,” says Taylor Reynolds, technology policy director at MIT’s Internet Policy Research Initiative (IPRI). “In an ideal world, these attacks wouldn’t happen over and over again, because companies would be able to use data from attacks to develop quantitative measurements of the security risk so that we could prevent such incidents in the future.”
    In an economy where most industries are tightening their belts, many organizations don’t know which types of attacks lead to the largest financial losses, and therefore how to best deploy scarce security resources. 
    But a new platform from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to change that, quantifying companies’ security risk without requiring them to disclose sensitive data about their systems to the research team, much less their competitors.
    Developed by Reynolds alongside economist Andrew Lo and cryptographer Vinod Vaikuntanathan, the platform helps companies do multiple things:
    quantify how secure they are;
    understand how their security compares to peers; and
    evaluate whether they’re spending the right amount of money on security, and if and how they should change their particular security priorities.
    The team received internal data from seven large companies that averaged 50,000 employees and annual revenues of $24 billion. By securely aggregating 50 different security incidents that took place at the companies, the researchers were able to analyze which specific steps were not taken that could have prevented them. (Their analysis used a well-established set of nearly 200 security actions referred to as the Center for Internet Security Sub-Controls.) 
    “We were able to paint a really thorough picture in terms of which security failures were costing companies the most money,” says Reynolds, who co-authored a related paper with professors Lo and Vaikuntanathan, MIT graduate student Leo de Castro, Principal Research Scientist Daniel J. Weitzner, PhD student Fransisca Susan, and graduate student Nicolas Zhang. “If you’re a chief information security officer at one of these organizations, it can be an overwhelming task to try to defend absolutely everything. They need to know where they should direct their attention.”
    The team calls their platform “SCRAM,” for “Secure Cyber Risk Aggregation and Measurement.” Among other findings, they determined that the following three security vulnerabilities had the largest total losses, each in excess of $1 million:
    Failures in preventing malware attacks
    Malware attacks, like the one last month that reportedly forced the wearables company Garmin to pay a $10 million ransom, are still a tried-and-true method of gaining control of valuable consumer data. Reynolds says that companies continue to struggle to prevent such attacks, relying on regularly backing up their data and reminding their employees not to click on suspicious emails. 
    Communication over unauthorized ports 
    Curiously, the team found that every firm in their study said they had, in fact, implemented the security measure of blocking access to unauthorized ports — the digital equivalent of companies locking all their doors. Even so, attacks that involved gaining access to these ports accounted for a large number of high-cost losses. 
    “Losses can arise even when there are defenses that are well-developed and understood,” says Weitzner, who also serves as director of MIT IPRI. “It’s important to recognize that improving common existing defenses should not be neglected in favor of expanding into new areas of defense.”
    Failures in log management for security incidents 
    Every day companies amass detailed “logs” denoting activity within their systems. Senior security officers often turn to these logs after an attack to audit the incident and see what happened. Reynolds says that there are many ways that companies could be using machine learning and artificial intelligence more efficiently to help understand what’s happening — including, crucially, during or even before a security attack. 
    Two other key areas that warrant further analysis include taking inventory of hardware so that only authorized devices are given access, as well as boundary defenses like firewalls and proxies that aim to control the flow of traffic through network borders. 
    The team developed their data aggregation platform in conjunction with MIT cryptography experts, using an existing method called multi-party computation (MPC) that allows them to perform calculations on data without themselves being able to read or unlock it. After computing its anonymized findings, the SCRAM system then asks each contributing company to help it unlock only the answer using their own secret cryptographic key.
    “The power of this platform is that it allows firms to contribute locked data that would otherwise be too sensitive or risky to share with a third party,” says Reynolds.
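    The sketch below conveys the flavor of that idea with simple additive secret sharing in Python; it is a deliberately minimal stand-in with invented numbers, not SCRAM’s actual protocol, which uses more sophisticated cryptography. Each firm splits its private loss figure into random shares, any single share looks random, and only the sum across all firms is ever reconstructed.

        # Minimal additive-secret-sharing sketch (illustrative only; SCRAM's
        # multi-party computation protocol is considerably more sophisticated).
        # Each firm splits its private loss figure into random shares, one per
        # party. Any single share reveals nothing, but summing every share across
        # all firms recovers only the aggregate total.

        import secrets

        MODULUS = 2 ** 61 - 1  # arithmetic is done modulo a large prime

        def share(value, n_parties):
            """Split `value` into n additive shares modulo MODULUS."""
            shares = [secrets.randbelow(MODULUS) for _ in range(n_parties - 1)]
            shares.append((value - sum(shares)) % MODULUS)
            return shares

        def aggregate(all_shares):
            """Sum each party's column of shares, then combine to get the total."""
            column_sums = [sum(col) % MODULUS for col in zip(*all_shares)]
            return sum(column_sums) % MODULUS

        if __name__ == "__main__":
            private_losses = [1_200_000, 350_000, 2_750_000]  # per-firm losses (dollars)
            shared = [share(loss, n_parties=3) for loss in private_losses]
            print(aggregate(shared))    # 4300000: only the total is revealed
            print(sum(private_losses))  # same value, computed in the clear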
    As a next step, the researchers plan to expand the pool of participating companies, with representation from a range of different sectors that include electricity, finance, and biotech. Reynolds says that if the team can gather data from upwards of 70 or 80 companies, they’ll be able to do something unprecedented: put an actual dollar figure on the risk of particular defenses failing.
    The project was a cross-campus effort involving affiliates at IPRI, CSAIL’s Theory of Computation group, and the MIT Sloan School of Management. It was funded by the Hewlett Foundation and CSAIL’s Financial Technology industry initiative (“FinTech@CSAIL”).

  • MIT hosts seven distinguished MLK Professors and Scholars for 2020-21

    In light of the Covid-19 pandemic, MIT has been charged with reimagining its campus, classes, and programs, including the Dr. Martin Luther King, Jr. (MLK) Visiting Professors and Scholars Program (VPSP).
    Founded in 1990, MLK VPSP honors the life and legacy of Martin Luther King, Jr. by increasing the presence of and recognizing the contributions of scholars from underrepresented groups at MIT. MLK Visiting Professors and Scholars enhance their scholarship through intellectual engagement with the MIT community and enrich the cultural, academic, and professional experience of students. The program hosts between four and eight scholars each year. But what does a virtual year mean for a visiting scholar?
    Even with the challenge of remote learning and limited in-person contact, MLK VPSP faculty hosts have articulated innovative ways to engage with the MIT community. Moya Bailey, for instance, will be a content contributor for the Program in Women’s and Gender Studies’ website and social media accounts. Charles Senteio will continue to collaborate with the Office of Minority Education on curriculum development that reflects a diverse student population with a focus on health and well-being, and he will also explore remote learning and its impact on curriculum.
    With Provost Martin Schmidt’s steadfast institutional support, and with active oversight from Institute Community and Equity Officer John Dozier and Associate Provost Tim Jamison, the MLK VPSP continues to honor King’s legacy and be an institutional priority on campus and online. For Academic Year 2020-2021, MIT is hosting seven accomplished scholars representing different areas of interest from all over the United States and Canada.
    2020-2021 MLK Visiting Professors and Scholars
    Moya Bailey is an assistant professor at Northeastern University in the Department of Cultures, Societies, and Global Studies and in the program in Women’s, Gender, and Sexuality Studies. In 2010, Bailey coined the term “misogynoir,” widely adopted by scholars, which describes the anti-Black racist misogyny that Black women experience. In the spring, she will teach a course in the MIT Program in Women’s and Gender Studies called Black Feminist Health Science Studies. In April 2021, she will organize and host a daylong Black Feminist Health Science symposium.
    Jamie Macbeth joins the program for another year in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) as a valuable member of the Genesis group, a research team mainly focused on building computer systems and computational models of human intelligence based on humans’ capability for understanding natural language. One of Macbeth’s research collaborations involves using computer systems in understanding natural language to detect aggressive language on social media with the eventual goal of violence prevention. He will continue to mentor and collaborate with women and underrepresented groups at the undergraduate, MS, and PhD levels.
    Ben McDonald is returning for a second year as a postdoc in the Department of Chemistry. His research focuses on developing designer polymers for chemical warfare-responsive membranes and surfactants to control the function of dynamic, complex soft colloids. His role as a mentor will expand to include both undergraduate and graduate students in the Swager Lab. McDonald will continue to collaborate with Chemistry Alliance for Diversity and Inclusion at MIT to organize and host virtual seminars showcasing the work of underrepresented scholars of color in the fields of chemistry and chemical engineering.
    Luis Gilberto Murillo-Urrutia, a research fellow hosted by the Environmental Solutions Initiative (ESI), joins us from the Center for Latin America and Latino Studies at American University. His research focuses on the intersection of peace and security with environmental conservation, particularly in Afro-Colombian territories. During his visit, Murillo-Urrutia will hold mentorship sessions at ESI for students conducting research on environmental planning and policy or with a minor in environment and sustainability.
    Thomas Searles, recently promoted to associate professor with tenure, is visiting from the Department of Physics at Howard University. While at MIT, he will pursue numerical studies of topological materials for photonic and quantum technological applications. He will mentor students from his lab, the Black Students Union, National Society of Black Engineers, and the Black Graduate Student Association. Searles plans to meet with the MIT physics graduate admissions committee to formulate recruitment strategies with his home and other historically Black colleges and universities.
    Charles Senteio joins the program from Rutgers University School of Communication and Information, where he is an assistant professor in library and information science. As a visiting scholar at the MIT Sloan School of Management, he will collaborate with the Operations Management Group to expand on his community health informatics research and investigate health equity barriers. He recently facilitated a workshop, “Healthcare, Technology, and Social Justice Converge — Applied Equity Research and Why It Matters to All of Us” at the MIT Day of Dialogue event in August.
    Patricia Saulis is Wolastoqey (Maliseet) from Wolastoq Negotkuk (Tobique First Nation in New Brunswick, Canada). As an MLK Visiting Scholar, Saulis will collaborate with her faculty host, Professor James Paradis from Comparative Media Studies/Writing, on a course titled, “Transmedia Art, Extraction and Environmental Justice” and engage with MIT Center for Environmental Health Sciences on their EPA Superfund-related work in the Northeastern United States. She will work closely with the American Indian Science and Engineering Society (AISES) and the Native American Students Association in raising awareness of the challenges impacting our Indigenous students. Through dialogue and presentations, she will help promote the understanding of Indigenous Peoples’ culture and help identify strategies to create a more inclusive campus for our Indigenous community. 
    Community engagement
    This year’s scholars are eager to join our community and embark on a mutually rewarding journey of learning and engagement — wherever in the world we may be.  
    MIT community members are invited to join the Institute Community and Equity Office in engaging the MLK Professors and Scholars through a signature monthly speaker series, where each scholar will present their research and hold discussions via Zoom. The first welcome event will be held on Sept. 16 from 12 to 1 p.m. Contact Rachel Ornitz rornitz@mit.edu for event details.
    For more information about this year’s and previous scholars and the program, visit the newly redesigned MLK Visiting Professors and Scholars website.

  • Making health care more personal

    The health care system today largely focuses on helping people after they have problems. When they do receive treatment, it’s based on what has worked best on average across a huge, diverse group of patients.
    Now the company Health at Scale is making health care more proactive and personalized — and, true to its name, it’s doing so for millions of people.
    Health at Scale uses a new approach for making care recommendations based on new classes of machine-learning models that work even when only small amounts of data on individual patients, providers, and treatments are available.
    The company is already working with health plans, insurers, and employers to match patients with doctors. It’s also helping to identify people at rising risk of visiting the emergency department or being hospitalized in the future, and to predict the progression of chronic diseases. Recently, Health at Scale showed its models can identify people at risk of severe respiratory infections like influenza or pneumonia, or, potentially, Covid-19.
    “From the beginning, we decided all of our predictions would be related to achieving better outcomes for patients,” says John Guttag, chief technology officer of Health at Scale and the Dugald C. Jackson Professor of Computer Science and Electrical Engineering at MIT. “We’re trying to predict what treatment or physician or intervention would lead to better outcomes for people.”
    A new approach to improving health
    Health at Scale co-founder and CEO Zeeshan Syed met Guttag while studying electrical engineering and computer science at MIT. Guttag served as Syed’s advisor for his bachelor’s and master’s degrees. When Syed decided to pursue his PhD, he only applied to one school, and his advisor was easy to choose.
    Syed did his PhD through the Harvard-MIT Program in Health Sciences and Technology (HST). During that time, he looked at how patients who’d had heart attacks could be better managed. The work was personal for Syed: His father had recently suffered a serious heart attack.
    Through the work, Syed met Mohammed Saeed SM ’97, PhD ’07, who was also in the HST program. Syed, Guttag, and Saeed founded Health at Scale in 2015 along with David Guttag ’05, focusing on using core advances in machine learning to solve some of health care’s hardest problems.
    “It started with the burning itch to address real challenges in health care about personalization and prediction,” Syed says.
    From the beginning, the founders knew their solutions needed to work with widely available data like health care claims, which include information on diagnoses, tests, prescriptions, and more. They also sought to build tools for cleaning up and processing raw data sets, so that their models would be part of what Guttag refers to as a “full machine-learning stack for health care.”
    Finally, to deliver effective, personalized solutions, the founders knew their models needed to work with small numbers of encounters for individual physicians, clinics, and patients, which posed severe challenges for conventional AI and machine learning.
    “The large companies getting into [the health care AI] space had it wrong in that they viewed it as a big data problem,” Guttag says. “They thought, ‘We’re the experts. No one’s better at crunching large amounts of data than us.’ We thought if you want to make the right decision for individuals, the problem was a small data problem: Each patient is different, and we didn’t want to recommend to patients what was best on average. We wanted what was best for each individual.”
    The company’s first models helped recommend skilled nursing facilities for post-acute care patients. Many such patients experience further health problems and return to the hospital. Health at Scale’s models showed that some facilities were better at helping specific kinds of people with specific health problems. For example, a 64-year-old man with a history of cardiovascular disease may fare better at one facility compared to another.
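    One way to picture the small-data challenge is the toy Python sketch below, which shrinks each facility’s noisy, small-sample outcome rate toward the overall average; the numbers and the method are invented for illustration and are not Health at Scale’s models.

        # Toy illustration of the small-data problem (invented data and method;
        # not Health at Scale's actual models). With only a few comparable
        # patients per facility, raw readmission rates are noisy, so each
        # estimate is shrunk toward the overall rate in proportion to how little
        # data supports it.

        def shrunken_rate(events, n, overall_rate, prior_strength=10):
            """Blend a facility's observed rate with the overall rate.

            prior_strength acts like a count of pseudo-patients backing the prior.
            """
            return (events + prior_strength * overall_rate) / (n + prior_strength)

        if __name__ == "__main__":
            overall = 0.20  # overall readmission rate across all facilities
            # (facility, readmissions among similar patients, number of similar patients)
            facilities = [("A", 1, 3), ("B", 4, 40), ("C", 0, 2)]
            for name, events, n in facilities:
                raw = events / n
                adj = shrunken_rate(events, n, overall)
                print(f"{name}: raw={raw:.2f} shrunk={adj:.2f} (n={n})")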
    Today the company’s recommendations help guide patients to the primary care physicians, surgeons, and specialists that are best suited for them. Guttag even used the service when he got his hip replaced last year.
    Health at Scale also helps organizations identify people at rising risk of specific adverse health events, like heart attacks, in the future.
    “We’ve gone beyond the notion of identifying people who have frequently visited emergency departments or hospitals in the past, to get to the much more actionable problem of finding those people at an inflection point, where they are likely to experience worse outcomes and higher costs,” Syed says.
    The company’s other solutions help determine the best treatment options for patients and help reduce health care fraud, waste, and abuse. Each use case is designed to improve patient health outcomes by giving health care organizations decision-support for action.
    “Broadly speaking, we are interested in building models that can be used to help avoid problems, rather than simply predict them,” says Guttag. “For example, identifying those individuals at highest risk for serious complications of a respiratory infection [enables care providers] to target them for interventions that reduce their chance of developing such an infection.”
    Impact at scale
    Earlier this year, as the scope of the Covid-19 pandemic was becoming clear, Health at Scale began considering ways its models could help.
    “The lack of data in the beginning of the pandemic motivated us to look at the experiences we have gained from combatting other respiratory infections like influenza and pneumonia,” says Saeed, who serves as Health at Scale’s chief medical officer.
    The idea led to a peer-reviewed paper where researchers affiliated with the company, the University of Michigan, and MIT showed Health at Scale’s models could accurately predict hospitalizations and visits to the emergency department related to respiratory infections.
    “We did the work on the paper using the tech we’d already built,” Guttag says. “We had interception products deployed for predicting patients at risk of emergent hospitalizations for a variety of causes, and we saw that we could extend that approach. We had customers that we gave the solution to for free.”
    The paper proved out another use case for a technology that is already being used by some of the largest health plans in the U.S. That’s an impressive customer base for a five-year-old company of only 20 people — about half of whom have MIT affiliations.
    “The culture MIT creates to solve problems that are worth solving, to go after impact, I think that’s been reflected in the way the company got together and has operated,” Syed says. “I’m deeply proud that we’ve maintained that MIT spirit.”
    And, Syed believes, there’s much more to come.
    “We set out with the goal of driving impact,” Syed says. “We currently run some of the largest production deployments of machine learning at scale, affecting millions, if not tens of millions, of patients, and we are only just getting started.”

  • Toward a machine learning model that can reason about everyday actions

    The ability to reason abstractly about events as they unfold is a defining feature of human intelligence. We know instinctively that crying and writing are means of communicating, and that a panda falling from a tree and a plane landing are variations on descending. 
    Organizing the world into abstract categories does not come easily to computers, but in recent years researchers have inched closer by training machine learning models on words and images infused with structural information about the world, and how objects, animals, and actions relate. In a new study at the European Conference on Computer Vision this month, researchers unveiled a hybrid language-vision model that can compare and contrast a set of dynamic events captured on video to tease out the high-level concepts connecting them. 
    Their model did as well as or better than humans at two types of visual reasoning tasks — picking the video that conceptually best completes the set, and picking the video that doesn’t fit. Shown videos of a dog barking and a man howling beside his dog, for example, the model completed the set by picking the crying baby from a set of five videos. Researchers replicated their results on two datasets for training AI systems in action recognition: MIT’s Multi-Moments in Time and DeepMind’s Kinetics.
    “We show that you can build abstraction into an AI system to perform ordinary visual reasoning tasks close to a human level,” says the study’s senior author Aude Oliva, a senior research scientist at MIT, co-director of the MIT Quest for Intelligence, and MIT director of the MIT-IBM Watson AI Lab. “A model that can recognize abstract events will give more accurate, logical predictions and be more useful for decision-making.”
    As deep neural networks become expert at recognizing objects and actions in photos and video, researchers have set their sights on the next milestone: abstraction, and training models to reason about what they see. In one approach, researchers have merged the pattern-matching power of deep nets with the logic of symbolic programs to teach a model to interpret complex object relationships in a scene. Here, in another approach, researchers capitalize on the relationships embedded in the meanings of words to give their model visual reasoning power.
    “Language representations allow us to integrate contextual information learned from text databases into our visual models,” says study co-author Mathew Monfort, a research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “Words like ‘running,’ ‘lifting,’ and ‘boxing’ share some common characteristics that make them more closely related to the concept ‘exercising,’ for example, than ‘driving.’ ”
    Using WordNet, a database of word meanings, the researchers mapped the relation of each action-class label in Moments and Kinetics to the other labels in both datasets. Words like “sculpting,” “carving,” and “cutting,” for example, were connected to higher-level concepts like “crafting,” “making art,” and “cooking.” Now when the model recognizes an activity like sculpting, it can pick out conceptually similar activities in the dataset. 
    This relational graph of abstract classes is used to train the model to perform two basic tasks. Given a set of videos, the model creates a numerical representation for each video that aligns with the word representations of the actions shown in the video. An abstraction module then combines the representations generated for each video in the set to create a new set representation that is used to identify the abstraction shared by all the videos in the set.
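    A minimal sketch of the label-mapping step is shown below using NLTK’s WordNet interface; it assumes the nltk package and its WordNet corpus are available, and the study’s actual graph construction may differ. Two verb labels are linked through their lowest common hypernyms, the kind of shared higher-level concept the relational graph encodes.

        # Minimal sketch of mapping action labels to shared higher-level concepts
        # with WordNet (assumes `pip install nltk`; the study's actual graph
        # construction may differ).

        import nltk
        from nltk.corpus import wordnet as wn

        nltk.download("wordnet", quiet=True)  # fetch the corpus on first use

        def shared_concepts(label_a, label_b):
            """Return lowest common hypernyms linking two verb labels, if any."""
            found = set()
            for a in wn.synsets(label_a, pos=wn.VERB):
                for b in wn.synsets(label_b, pos=wn.VERB):
                    found.update(a.lowest_common_hypernyms(b))
            return found

        if __name__ == "__main__":
            # "sculpt" and "carve" may meet at a more abstract making/shaping sense
            for concept in shared_concepts("sculpt", "carve"):
                print(concept.name(), "-", concept.definition())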
    To see how the model would do compared to humans, the researchers asked human subjects to perform the same set of visual reasoning tasks online. To their surprise, the model performed as well as humans in many scenarios, sometimes with unexpected results. In a variation on the set completion task, after watching a video of someone wrapping a gift and covering an item in tape, the model suggested a video of someone at the beach burying someone else in the sand. 
    “It’s effectively ‘covering,’ but very different from the visual features of the other clips,” says Camilo Fosco, a PhD student at MIT who is co-first author of the study with PhD student Alex Andonian. “Conceptually it fits, but I had to think about it.”
    Limitations of the model include a tendency to overemphasize some features. In one case, it suggested completing a set of sports videos with a video of a baby and a ball, apparently associating balls with exercise and competition.
    A deep learning model that can be trained to “think” more abstractly may be capable of learning with fewer data, say researchers. Abstraction also paves the way toward higher-level, more human-like reasoning.
    “One hallmark of human cognition is our ability to describe something in relation to something else — to compare and to contrast,” says Oliva. “It’s a rich and efficient way to learn that could eventually lead to machine learning models that can understand analogies and are that much closer to communicating intelligently with us.”
    Other authors of the study are Allen Lee from MIT, Rogerio Feris from IBM, and Carl Vondrick from Columbia University.

  • Robot takes contact-free measurements of patients’ vital signs

    The research described in this article has been published on a preprint server but has not yet been peer-reviewed by scientific or medical experts.
    During the current coronavirus pandemic, one of the riskiest parts of a health care worker’s job is assessing people who have symptoms of Covid-19. Researchers from MIT and Brigham and Women’s Hospital hope to reduce that risk by using robots to remotely measure patients’ vital signs.
    The robots, which are controlled by a handheld device, can also carry a tablet that allows doctors to ask patients about their symptoms without being in the same room.
    “In robotics, one of our goals is to use automation and robotic technology to remove people from dangerous jobs,” says Henwei Huang, an MIT postdoc. “We thought it should be possible for us to use a robot to remove the health care worker from the risk of directly exposing themselves to the patient.”
    Using four cameras mounted on a dog-like robot developed by Boston Dynamics, the researchers have shown that they can measure skin temperature, breathing rate, pulse rate, and blood oxygen saturation in healthy patients, from a distance of 2 meters. They are now making plans to test it in patients with Covid-19 symptoms.
    “We are thrilled to have forged this industry-academia partnership in which scientists with engineering and robotics expertise worked with clinical teams at the hospital to bring sophisticated technologies to the bedside,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.
    The researchers have posted a paper on their system on the preprint server techRxiv, and have submitted it to a peer-reviewed journal. Huang is one of the lead authors of the study, along with Peter Chai, an assistant professor of emergency medicine at Brigham and Women’s Hospital, and Claas Ehmke, a visiting scholar from ETH Zurich.

    Measuring vital signs
    When Covid-19 cases began surging in Boston in March, many hospitals, including Brigham and Women’s, set up triage tents outside their emergency departments to evaluate people with Covid-19 symptoms. One major component of this initial evaluation is measuring vital signs, including body temperature.
    The MIT and BWH researchers came up with the idea to use robotics to enable contactless monitoring of vital signs, to allow health care workers to minimize their exposure to potentially infectious patients. They decided to use existing computer vision technologies that can measure temperature, breathing rate, pulse, and blood oxygen saturation, and worked to make them mobile.
    To achieve that, they used a robot known as Spot, which can walk on four legs, similarly to a dog. Health care workers can maneuver the robot to wherever patients are sitting, using a handheld controller. The researchers mounted four different cameras onto the robot — an infrared camera plus three monochrome cameras that filter different wavelengths of light.
    The researchers developed algorithms that allow them to use the infrared camera to measure both elevated skin temperature and breathing rate. For body temperature, the camera measures skin temperature on the face, and the algorithm correlates that temperature with core body temperature. The algorithm also takes into account the ambient temperature and the distance between the camera and the patient, so that measurements can be taken from different distances, under different weather conditions, and still be accurate.
    Measurements from the infrared camera can also be used to calculate the patient’s breathing rate. As the patient breathes in and out, wearing a mask, their breath changes the temperature of the mask. Measuring this temperature change allows the researchers to calculate how rapidly the patient is breathing.
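    The breathing-rate idea can be illustrated with a toy calculation in Python: find the dominant frequency of the mask-temperature trace and convert it to breaths per minute. The data are synthetic and the processing is simplified; this is not the team’s actual algorithm.

        # Toy illustration of estimating breathing rate from a mask-temperature
        # signal (synthetic data, simplified processing; not the team's actual
        # algorithm). Exhaled air warms the mask and inhalation cools it, so the
        # dominant frequency of the temperature trace tracks the breathing rate.

        import numpy as np

        def breaths_per_minute(temps, fs, low=0.1, high=1.0):
            """Dominant frequency (converted to breaths/min) in a plausible band."""
            sig = temps - np.mean(temps)
            spectrum = np.abs(np.fft.rfft(sig))
            freqs = np.fft.rfftfreq(len(sig), d=1.0 / fs)
            band = (freqs >= low) & (freqs <= high)
            peak = freqs[band][np.argmax(spectrum[band])]
            return 60.0 * peak

        if __name__ == "__main__":
            fs = 8.0                      # thermal camera frames per second
            t = np.arange(0, 30, 1 / fs)  # 30 seconds of observation
            mask_temp = 33.0 + 0.3 * np.sin(2 * np.pi * 0.3 * t) \
                        + 0.05 * np.random.randn(len(t))
            print(f"{breaths_per_minute(mask_temp, fs):.1f} breaths per minute")  # ~18.0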
    The three monochrome cameras each filter a different wavelength of light — 670, 810, and 880 nanometers. These wavelengths allow the researchers to measure the slight color changes that result when hemoglobin in blood cells binds to oxygen and flows through blood vessels. The researchers’ algorithm uses these measurements to calculate both pulse rate and blood oxygen saturation.
    “We didn’t really develop new technology to do the measurements,” Huang says. “What we did is integrate them together very specifically for the Covid application, to analyze different vital signs at the same time.”
    Continuous monitoring
    In this study, the researchers performed the measurements on healthy volunteers, and they are now making plans to test their robotic approach in people who are showing symptoms of Covid-19, in a hospital emergency department.
    While in the near term, the researchers plan to focus on triage applications, in the longer term, they envision that the robots could be deployed in patients’ hospital rooms. This would allow the robots to continuously monitor patients and also allow doctors to check on them, via tablet, without having to enter the room. Both applications would require approval from the U.S. Food and Drug Administration.
    The research was funded by the MIT Department of Mechanical Engineering and the Karl van Tassel (1925) Career Development Professorship.