More stories

  • How an archeological approach can help leverage biased data in AI to improve medicine

    The classic computer science adage “garbage in, garbage out” lacks nuance when it comes to understanding biased medical data, argue computer science and bioethics professors from MIT, Johns Hopkins University, and the Alan Turing Institute in a new opinion piece published in a recent edition of the New England Journal of Medicine (NEJM). The rising popularity of artificial intelligence has brought increased scrutiny to the matter of biased AI models resulting in algorithmic discrimination, which the White House Office of Science and Technology Policy identified as a key issue in its recent Blueprint for an AI Bill of Rights.

    When encountering biased data, particularly for AI models used in medical settings, the typical response is to either collect more data from underrepresented groups or generate synthetic data to fill the gaps, so that the model performs equally well across an array of patient populations. But the authors argue that this technical approach should be augmented with a sociotechnical perspective that takes both historical and current social factors into account. By doing so, researchers can be more effective in addressing bias in public health.
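
    For concreteness, here is a minimal sketch of the purely technical response described above: rebalancing a dataset by oversampling the underrepresented group (toy data; the column names are hypothetical). The authors’ point is that a step like this, on its own, leaves the social and historical origins of the imbalance unexamined.

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(0)

    # Toy patient table in which group "B" is underrepresented.
    df = pd.DataFrame({
        "group": rng.choice(["A", "B"], size=1000, p=[0.9, 0.1]),
        "feature": rng.normal(size=1000),
        "outcome": rng.integers(0, 2, size=1000),
    })

    # The standard technical fix: oversample the minority group until sizes match.
    counts = df["group"].value_counts()
    minority = counts.idxmin()
    extra = df[df["group"] == minority].sample(
        n=counts.max() - counts.min(), replace=True, random_state=0
    )
    balanced = pd.concat([df, extra], ignore_index=True)
    print(balanced["group"].value_counts())  # now roughly equal
    ```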

    “The three of us had been discussing the ways in which we often treat issues with data from a machine learning perspective as irritations that need to be managed with a technical solution,” recalls co-author Marzyeh Ghassemi, an assistant professor in electrical engineering and computer science and an affiliate of the Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic), the Computer Science and Artificial Intelligence Laboratory (CSAIL), and the Institute for Medical Engineering and Science (IMES). “We had used analogies of data as an artifact that gives a partial view of past practices, or a cracked mirror holding up a reflection. In both cases the information is perhaps not entirely accurate or favorable: Maybe we think that we behave in certain ways as a society — but when you actually look at the data, it tells a different story. We might not like what that story is, but once you unearth an understanding of the past you can move forward and take steps to address poor practices.”

    Data as artifact 

    In the paper, titled “Considering Biased Data as Informative Artifacts in AI-Assisted Health Care,” Ghassemi, Kadija Ferryman, and Maxine Mackintosh make the case for viewing biased clinical data as “artifacts” in the same way anthropologists or archeologists would view physical objects: pieces of civilization-revealing practices, belief systems, and cultural values — in the case of the paper, specifically those that have led to existing inequities in the health care system. 

    For example, a 2019 study showed that an algorithm widely considered to be an industry standard used health-care expenditures as an indicator of need, leading to the erroneous conclusion that sicker Black patients require the same level of care as healthier white patients. What researchers found was algorithmic discrimination that failed to account for unequal access to care.
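
    The mechanism is straightforward to reproduce: if one group incurs lower costs for the same underlying illness because of unequal access to care, a model trained to predict cost will systematically under-select that group for extra care. A toy simulation of this dynamic (all numbers invented for illustration):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000
    group = rng.integers(0, 2, n)        # 1 = group with reduced access to care
    illness = rng.gamma(2.0, 1.0, n)     # true health need, identical across groups

    # Unequal access: the underserved group spends less at the same illness level.
    access = np.where(group == 1, 0.6, 1.0)
    cost = illness * access + rng.normal(0, 0.1, n)

    # "Need" proxied by spending: enroll the top 10% by cost.
    enrolled = cost >= np.quantile(cost, 0.9)
    for g in (0, 1):
        print(f"group {g}: enrollment rate = {enrolled[group == g].mean():.1%}, "
              f"mean illness of enrollees = {illness[enrolled & (group == g)].mean():.2f}")
    ```

    Despite identical illness distributions, the lower-spending group is enrolled far less often, and only its sickest members make the cut, which is the pattern the 2019 study documented.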

    In this instance, rather than viewing biased datasets or lack of data as problems that only require disposal or fixing, Ghassemi and her colleagues recommend the “artifacts” approach as a way to raise awareness around social and historical elements influencing how data are collected and alternative approaches to clinical AI development. 

    “If the goal of your model is deployment in a clinical setting, you should engage a bioethicist or a clinician with appropriate training reasonably early on in problem formulation,” says Ghassemi. “As computer scientists, we often don’t have a complete picture of the different social and historical factors that have gone into creating data that we’ll be using. We need expertise in discerning when models generalized from existing data may not work well for specific subgroups.” 

    When more data can actually harm performance 

    The authors acknowledge that one of the more challenging aspects of implementing an artifact-based approach is being able to assess whether data have been racially corrected: i.e., using white, male bodies as the conventional standard that other bodies are measured against. The opinion piece cites an example from the Chronic Kidney Disease Epidemiology Collaboration in 2021, which developed a new equation to measure kidney function because the old equation had previously been “corrected” under the blanket assumption that Black people have higher muscle mass. Ghassemi says that researchers should be prepared to investigate race-based correction as part of the research process.
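
    To make the kidney-function example concrete, the sketch below contrasts the 2009 CKD-EPI creatinine equation, which multiplied the estimate by 1.159 for Black patients, with the race-free 2021 refit (coefficients as published by the CKD-EPI investigators; double-check them against the original papers before any real use):

    ```python
    def egfr_2009(scr, age, female, black):
        # 2009 CKD-EPI creatinine equation (contains the race coefficient).
        kappa = 0.7 if female else 0.9
        alpha = -0.329 if female else -0.411
        egfr = (141 * min(scr / kappa, 1) ** alpha
                * max(scr / kappa, 1) ** -1.209 * 0.993 ** age)
        if female:
            egfr *= 1.018
        if black:
            egfr *= 1.159  # the blanket "correction" removed in 2021
        return egfr

    def egfr_2021(scr, age, female):
        # 2021 CKD-EPI refit: same functional form, no race term.
        kappa = 0.7 if female else 0.9
        alpha = -0.241 if female else -0.302
        egfr = (142 * min(scr / kappa, 1) ** alpha
                * max(scr / kappa, 1) ** -1.200 * 0.9938 ** age)
        if female:
            egfr *= 1.012
        return egfr

    # Same hypothetical patient: serum creatinine 1.2 mg/dL, age 50, male.
    print(egfr_2009(1.2, 50, female=False, black=True))    # higher estimate with race term
    print(egfr_2009(1.2, 50, female=False, black=False))
    print(egfr_2021(1.2, 50, female=False))                # race-free refit
    ```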

    In another recent paper accepted to this year’s International Conference on Machine Learning, co-authored by Ghassemi’s PhD student Vinith Suriyakumar and University of California at San Diego Assistant Professor Berk Ustun, the researchers found that assuming the inclusion of personalized attributes like self-reported race improves the performance of ML models can actually lead to worse risk scores, models, and metrics for minority and minoritized populations.

    “There’s no single right solution for whether or not to include self-reported race in a clinical risk score. Self-reported race is a social construct that is both a proxy for other information, and deeply proxied itself in other medical data. The solution needs to fit the evidence,” explains Ghassemi. 
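
    One way to probe this empirically is to train the same model with and without the self-reported attribute and compare performance for each group separately, since aggregate metrics can mask a degradation for smaller groups. A generic evaluation harness in scikit-learn (synthetic data; this shows the shape of the check, not a reproduction of the paper’s experiments):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 20_000
    race = rng.integers(0, 2, n)                        # self-reported attribute
    x = rng.normal(size=(n, 5)) + 0.3 * race[:, None]   # clinical features, correlated with it
    y = (x[:, 0] + 0.5 * x[:, 1] + rng.normal(0, 1, n) > 0).astype(int)

    for include_attr in (False, True):
        feats = np.column_stack([x, race]) if include_attr else x
        xtr, xte, ytr, yte, rtr, rte = train_test_split(
            feats, y, race, test_size=0.5, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(xtr, ytr)
        scores = model.predict_proba(xte)[:, 1]
        per_group = {g: round(roc_auc_score(yte[rte == g], scores[rte == g]), 3)
                     for g in (0, 1)}
        print(f"attribute included={include_attr}: per-group AUC = {per_group}")
    ```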

    How to move forward 

    This is not to say that biased datasets should be enshrined, or biased algorithms don’t require fixing — quality training data is still key to developing safe, high-performance clinical AI models, and the NEJM piece highlights the role of the National Institutes of Health (NIH) in driving ethical practices.  

    “Generating high-quality, ethically sourced datasets is crucial for enabling the use of next-generation AI technologies that transform how we do research,” NIH acting director Lawrence Tabak stated in a press release when the NIH announced its $130 million Bridge2AI Program last year. Ghassemi agrees, pointing out that the NIH has “prioritized data collection in ethical ways that cover information we have not previously emphasized the value of in human health — such as environmental factors and social determinants. I’m very excited about their prioritization of, and strong investments towards, achieving meaningful health outcomes.” 

    Elaine Nsoesie, an associate professor at the Boston University School of Public Health, believes there are many potential benefits to treating biased datasets as artifacts rather than garbage, starting with the focus on context. “Biases present in a dataset collected for lung cancer patients in a hospital in Uganda might be different from a dataset collected in the U.S. for the same patient population,” she explains. “In considering local context, we can train algorithms to better serve specific populations.” Nsoesie says that understanding the historical and contemporary factors shaping a dataset can make it easier to identify discriminatory practices that might be coded in algorithms or systems in ways that are not immediately obvious. She also notes that an artifact-based approach could lead to the development of new policies and structures ensuring that the root causes of bias in a particular dataset are eliminated.

    “People often tell me that they are very afraid of AI, especially in health. They’ll say, ‘I’m really scared of an AI misdiagnosing me,’ or ‘I’m concerned it will treat me poorly,’” Ghassemi says. “I tell them, you shouldn’t be scared of some hypothetical AI in health tomorrow, you should be scared of what health is right now. If we take a narrow technical view of the data we extract from systems, we could naively replicate poor practices. That’s not the only option — realizing there is a problem is our first step towards a larger opportunity.”

  • Understanding viral justice

    In the wake of the Covid-19 pandemic, the word “viral” has a new resonance, and it’s not necessarily positive. Ruha Benjamin, a scholar who investigates the social dimensions of science, medicine, and technology, advocates a shift in perspective. She thinks justice can also be contagious. That’s the premise of Benjamin’s award-winning book “Viral Justice: How We Grow the World We Want,” as she shared with MIT Libraries staff on a June 14 visit. 

    “If this pandemic has taught us anything, it’s that something almost undetectable can be deadly, and that we can transmit it without even knowing,” said Benjamin, professor of African American studies at Princeton University. “Doesn’t this imply that small things, seemingly minor actions, decisions, or habits, could have exponential effects in the other direction, tipping the scales towards justice?” 

    To seek a more just world, Benjamin exhorted library staff to notice the ways exclusion is built into our daily lives, showing examples of park benches with armrests at regular intervals. On the surface they appear welcoming, but they also make lying down — or sleeping — impossible. This idea is taken to the extreme with “Pay and Sit,” an art installation by Fabian Brunsing in the form of a bench that deploys sharp spikes on the seat if the user doesn’t pay a meter. It serves as a powerful metaphor for discriminatory design. 

    “Dr. Benjamin’s keynote was seriously mind-blowing,” said Cherry Ibrahim, human resources generalist in the MIT Libraries. “One part that really grabbed my attention was when she talked about benches purposely designed to prevent unhoused people from sleeping on them. There are these hidden spikes in our community that we might not even realize because they don’t directly impact us.” 

    Benjamin urged the audience to look for those “spikes,” which new technologies can make even more insidious — gender and racial bias in facial recognition, the use of racial data in software used to predict student success, algorithmic bias in health care — often in the guise of progress. She coined the term “the New Jim Code” to describe the combination of coded bias and the imagined objectivity we ascribe to technology. 

    “At the MIT Libraries, we’re deeply concerned with combating inequities through our work, whether it’s democratizing access to data or investigating ways disparate communities can participate in scholarship with minimal bias or barriers,” says Director of Libraries Chris Bourg. “It’s our mission to remove the ‘spikes’ in the systems through which we create, use, and share knowledge.”

    Calling out the harms encoded into our digital world is critical, argues Benjamin, but we must also create alternatives. This is where the collective power of individuals can be transformative. Benjamin shared examples of those who are “re-imagining the default settings of technology and society,” citing initiatives like the Data for Black Lives movement and the Detroit Community Technology Project. “I’m interested in the way that everyday people are changing the digital ecosystem and demanding different kinds of rights and responsibilities and protections,” she said.

    In 2020, Benjamin founded the Ida B. Wells Just Data Lab with a goal of bringing together students, educators, activists, and artists to develop a critical and creative approach to data conception, production, and circulation. Its projects have examined different aspects of data and racial inequality: assessing the impact of Covid-19 on student learning; providing resources that confront the experience of Black mourning, grief, and mental health; and developing a playbook for Black maternal mental health. Through the lab’s student-led projects, Benjamin sees the next generation re-imagining technology in ways that respond to the needs of marginalized people.

    “If inequity is woven into the very fabric of our society — we see it from policing to education to health care to work — then each twist, coil, and code is a chance for us to weave new patterns, practices, and politics,” she said. “The vastness of the problems that we’re up against will be their undoing.”

  • Day of AI curriculum meets the moment

    MIT Responsible AI for Social Empowerment and Education (RAISE) recently celebrated the second annual Day of AI with two flagship local events. The Edward M. Kennedy Institute for the U.S. Senate in Boston hosted a human rights and data policy-focused event that was streamed worldwide. Dearborn STEM Academy in Roxbury, Massachusetts, hosted a student workshop in collaboration with Amazon Future Engineer. With over 8,000 registrations across all 50 U.S. states and 108 countries in 2023, participation in Day of AI has more than doubled since its inaugural year.

    Day of AI is a free curriculum of lessons and hands-on activities, developed by researchers at MIT RAISE, that teaches kids of all ages and backgrounds the basics and responsible use of artificial intelligence. This year, resources were available for educators to run at any time and in any increments they chose. The curriculum included five new modules to address timely topics like ChatGPT in School, Teachable Machines, AI and Social Media, Data Science and Me, and more. A collaboration with the International Society for Technology in Education also introduced modules for early elementary students. Educators across the world shared photos, videos, and stories of their students’ engagement, expressing excitement and even relief over the accessible lessons.

    Professor Cynthia Breazeal, director of RAISE, dean for digital learning at MIT, and head of the MIT Media Lab’s Personal Robots research group, said, “It’s been a year of extraordinary advancements in AI, and with that comes necessary conversations and concerns about who and what this technology is for. With our Day of AI events, we want to celebrate the teachers and students who are putting in the work to make sure that AI is for everyone.”

    Reflecting community values and protecting digital citizens

    On May 18, 2023, MIT RAISE hosted a global Day of AI celebration featuring a flagship local event focused on human rights and data policy at the Edward M. Kennedy Institute for the U.S. Senate. Students from the Warren Prescott Middle School and New Mission High School heard from speakers from the City of Boston, Liberty Mutual, and MIT about the many benefits and challenges of artificial intelligence education. Video: MIT Open Learning

    MIT President Sally Kornbluth welcomed students from Warren Prescott Middle School and New Mission High School to the Day of AI program at the Edward M. Kennedy Institute. Kornbluth reflected on the exciting potential of AI, along with the ethical considerations society needs to be responsible for.

    “AI has the potential to do all kinds of fantastic things, including driving a car, helping us with the climate crisis, improving health care, and designing apps that we can’t even imagine yet. But what we have to make sure it doesn’t do is cause harm to individuals, to communities, to us — society as a whole,” she said.

    This theme resonated with each of the event speakers, whose jobs spanned the sectors of education, government, and business. Yo Deshpande, technologist for the public realm, and Michael Lawrence Evans, program director of the Boston Mayor’s Office of New Urban Mechanics, shared how Boston thinks about using AI to improve city life in ways that are “equitable, accessible, and delightful.” Deshpande said, “We have the opportunity to explore not only how AI works, but how using AI can line up with our values, the way we want to be in the world, and the way we want to be in our community.”

    Adam L’Italien, chief innovation officer at Liberty Mutual Insurance (one of Day of AI’s founding sponsors), compared our present moment with AI technologies to the early days of personal computers and internet connection. “Exposure to emerging technologies can accelerate progress in the world and in your own lives,” L’Italien said, while recognizing that the AI development process needs to be inclusive and mitigate biases.

    Human policies for artificial intelligence

    So how does society address these human rights concerns about AI? Marc Aidinoff ’21, former White House Office of Science and Technology Policy chief of staff, led a discussion on how government policy can influence the parameters of how technology is developed and used, like the Blueprint for an AI Bill of Rights. Aidinoff said, “The work of building the world you want to see is far harder than building the technical AI system … How do you work with other people and create a collective vision for what we want to do?” Warren Prescott Middle School students described how AI could be used to solve problems that humans couldn’t. But they also shared their concerns that AI could affect data privacy, learning deficits, social media addiction, job displacement, and propaganda.

    In a mock U.S. Senate trial activity designed by Daniella DiPaola, PhD student at the MIT Media Lab, the middle schoolers investigated what rights might be undermined by AI in schools, hospitals, law enforcement, and corporations. Meanwhile, New Mission High School students workshopped the ideas behind bill S.2314, the Social Media Addiction Reduction Technology (SMART) Act, in an activity designed by Raechel Walker, graduate research assistant in the Personal Robots Group, and Matt Taylor, research assistant at the Media Lab. They discussed what level of control could or should be introduced at the parental, educational, and governmental levels to reduce the risks of internet addiction.

    “Alexa, how do I program AI?”

    The 2023 Day of AI celebration featured a flagship local event at the Dearborn STEM Academy in Roxbury in collaboration with Amazon Future Engineer. Students participated in a hands-on activity using MIT App Inventor as part of Day of AI’s Alexa lesson. Video: MIT Open Learning

    At Dearborn STEM Academy, Amazon Future Engineer helped students work through the Intro to Voice AI curriculum module in real-time. Students used MIT App Inventor to code basic commands for Alexa. In an interview with WCVB, Principal Darlene Marcano said, “It’s important that we expose our students to as many different experiences as possible. The students that are participating are on track to be future computer scientists and engineers.”

    Breazeal told Dearborn students, “We want you to have an informed voice about how you want AI to be used in society. We want you to feel empowered that you can shape the world. You can make things with AI to help make a better world and a better community.”

    Rohit Prasad ’08, senior vice president and head scientist for Alexa at Amazon, and Victor Reinoso ’97, global director of philanthropic education initiatives at Amazon, also joined the event. “Amazon and MIT share a commitment to helping students discover a world of possibilities through STEM and AI education,” said Reinoso. “There’s a lot of current excitement around the technological revolution with generative AI and large language models, so we’re excited to help students explore careers of the future and navigate the pathways available to them.” To highlight their continued investment in the local community and the school program, Amazon donated a $25,000 Innovation and Early College Pathways Program Grant to the Boston Public School system.

    Day of AI down under

    Not only was the Day of AI program widely adopted across the globe, it also inspired Australian educators to develop their own regionally specific curriculum. An estimated 161,000 AI professionals will be needed in Australia by 2030, according to the National Artificial Intelligence Center in the Commonwealth Scientific and Industrial Research Organization (CSIRO), an Australian government agency and Day of AI Australia project partner. CSIRO worked with the University of New South Wales to develop supplementary educational resources on AI ethics and machine learning. Day of AI Australia reached 85,000 students at 400-plus secondary schools this year, sparking curiosity in the next generation of AI experts.

    The interest in AI is accelerating as fast as the technology is being developed. Day of AI offers a unique opportunity for K-12 students to shape our world’s digital future and their own.

    “I hope that some of you will decide to be part of this bigger effort to help us figure out the best possible answers to questions that are raised by AI,” Kornbluth told students at the Edward M. Kennedy Institute. “We’re counting on you, the next generation, to learn how AI works and help make sure it’s for everyone.”

  • Bringing the social and ethical responsibilities of computing to the forefront

    There has been a remarkable surge in the use of algorithms and artificial intelligence to address a wide range of problems and challenges. While their adoption, particularly with the rise of AI, is reshaping nearly every industry sector, discipline, and area of research, such innovations often expose unexpected consequences that involve new norms, new expectations, and new rules and laws.

    To facilitate deeper understanding, the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative in the MIT Schwarzman College of Computing, recently brought together social scientists and humanists with computer scientists, engineers, and other computing faculty for an exploration of the ways in which the broad applicability of algorithms and AI has presented both opportunities and challenges in many aspects of society.

    “The very nature of our reality is changing. AI has the ability to do things that until recently were solely the realm of human intelligence — things that can challenge our understanding of what it means to be human,” remarked Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, in his opening address at the inaugural SERC Symposium. “This poses philosophical, conceptual, and practical questions on a scale not experienced since the start of the Enlightenment. In the face of such profound change, we need new conceptual maps for navigating the change.”

    The symposium offered a glimpse into the vision and activities of SERC in both research and education. “We believe our responsibility with SERC is to educate and equip our students and enable our faculty to contribute to responsible technology development and deployment,” said Georgia Perakis, the William F. Pounds Professor of Management in the MIT Sloan School of Management, co-associate dean of SERC, and the lead organizer of the symposium. “We’re drawing from the many strengths and diversity of disciplines across MIT and beyond and bringing them together to gain multiple viewpoints.”

    Through a succession of panels and sessions, the symposium delved into a variety of topics related to the societal and ethical dimensions of computing. In addition, 37 undergraduate and graduate students from a range of majors, including urban studies and planning, political science, mathematics, biology, electrical engineering and computer science, and brain and cognitive sciences, participated in a poster session to exhibit their research in this space, covering such topics as quantum ethics, AI collusion in storage markets, computing waste, and empowering users on social platforms for better content credibility.

    Showcasing a diversity of work

    In three sessions devoted to themes of beneficent and fair computing, equitable and personalized health, and algorithms and humans, the SERC Symposium showcased work by 12 faculty members across these domains.

    One such project from a multidisciplinary team of archaeologists, architects, digital artists, and computational social scientists aimed to preserve endangered heritage sites in Afghanistan with digital twins. The project team produced highly detailed interrogable 3D models of the heritage sites, in addition to extended reality and virtual reality experiences, as learning resources for audiences that cannot access these sites.

    In a project for the United Network for Organ Sharing, researchers showed how they used applied analytics to optimize various facets of an organ allocation system in the United States that is currently undergoing a major overhaul in order to make it more efficient, equitable, and inclusive for different racial, age, and gender groups, among others.

    Another talk discussed an area that has not yet received adequate public attention: the broader implications for equity that biased sensor data holds for the next generation of models in computing and health care.

    A talk on bias in algorithms considered both human bias and algorithmic bias, and the potential for improving results by taking into account differences in the nature of the two kinds of bias.

    Other highlighted research included the interaction between online platforms and human psychology; a study on whether decision-makers make systematic prediction mistakes based on the available information; and an illustration of how advanced analytics and computation can be leveraged to inform supply chain management, operations, and regulatory work in the food and pharmaceutical industries.

    Improving the algorithms of tomorrow

    “Algorithms are, without question, impacting every aspect of our lives,” said Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science, in kicking off a panel she moderated on the implications of data and algorithms.

    “Whether it’s in the context of social media, online commerce, automated tasks, and now a much wider range of creative interactions with the advent of generative AI tools and large language models, there’s little doubt that much more is to come,” Ozdaglar said. “While the promise is evident to all of us, there’s a lot to be concerned about as well. This is very much the time for imaginative thinking and careful deliberation to improve the algorithms of tomorrow.”

    Turning to the panel, Ozdaglar asked experts from computing, social science, and data science for insights on how to understand what is to come and shape it to enrich outcomes for the majority of humanity.

    Sarah Williams, associate professor of technology and urban planning at MIT, emphasized the critical importance of comprehending the process of how datasets are assembled, as data are the foundation for all models. She also stressed the need for research to address the potential implications of biases in algorithms that often find their way in through their creators and the data used in their development. “It’s up to us to think about our own ethical solutions to these problems,” she said. “Just as it’s important to progress with the technology, we need to start the field of looking at these questions of what biases are in the algorithms? What biases are in the data, or in that data’s journey?”

    Shifting focus to generative models and whether the development and use of these technologies should be regulated, the panelists — which also included MIT’s Srini Devadas, professor of electrical engineering and computer science, John Horton, professor of information technology, and Simon Johnson, professor of entrepreneurship — all concurred that regulating open-source algorithms, which are publicly accessible, would be difficult given that regulators are still catching up and struggling to even set guardrails for technology that is now 20 years old.

    Returning to the question of how to effectively regulate the use of these technologies, Johnson proposed a progressive corporate tax system as a potential solution. He recommends basing companies’ tax payments on their profits, especially for large corporations whose massive earnings go largely untaxed due to offshore banking. Johnson said this approach could serve as a regulatory mechanism that discourages companies from trying to “own the entire world” by imposing disincentives.

    The role of ethics in computing education

    As computing continues to advance with no signs of slowing down, it is critical to educate students to be intentional about the social impact of the technologies they will be developing and deploying into the world. But can one actually be taught such things? If so, how?

    Caspar Hare, professor of philosophy at MIT and co-associate dean of SERC, posed this looming question to faculty on a panel he moderated on the role of ethics in computing education. All experienced in teaching ethics and thinking about the social implications of computing, each panelist shared their perspective and approach.

    A strong advocate for the importance of learning from history, Eden Medina, associate professor of science, technology, and society at MIT, said that “often the way we frame computing is that everything is new. One of the things that I do in my teaching is look at how people have confronted these issues in the past and try to draw from them as a way to think about possible ways forward.” Medina regularly uses case studies in her classes and referred to a paper by Yale University science historian Joanna Radin on the Pima Indian Diabetes Dataset, which raised ethical issues about the history of that particular data collection that many don’t consider, as an example of how decisions around technology and data can grow out of very specific contexts.

    Milo Phillips-Brown, associate professor of philosophy at Oxford University, talked about the Ethical Computing Protocol that he co-created while he was a SERC postdoc at MIT. The protocol, a four-step approach to building technology responsibly, is designed to train computer science students to think in a better and more accurate way about the social implications of technology by breaking the process down into more manageable steps. “The basic approach that we take very much draws on the fields of value-sensitive design, responsible research and innovation, participatory design as guiding insights, and then is also fundamentally interdisciplinary,” he said.

    Fields such as biomedicine and law have an ethics ecosystem that distributes the function of ethical reasoning in these areas. Oversight and regulation are provided to guide front-line stakeholders and decision-makers when issues arise, as are training programs and access to interdisciplinary expertise that they can draw from. “In this space, we have none of that,” said John Basl, associate professor of philosophy at Northeastern University. “For current generations of computer scientists and other decision-makers, we’re actually making them do the ethical reasoning on their own.” Basl commented further that teaching core ethical reasoning skills across the curriculum, not just in philosophy classes, is essential, and that the goal shouldn’t be for every computer scientist to be a professional ethicist, but for them to know enough of the landscape to be able to ask the right questions and seek out the relevant expertise and resources that exist.

    After the final session, interdisciplinary groups of faculty, students, and researchers engaged in animated discussions related to the issues covered throughout the day during a reception that marked the conclusion of the symposium.

  • Joining the battle against health care bias

    Medical researchers are awash in a tsunami of clinical data. But we need major changes in how we gather, share, and apply this data to bring its benefits to all, says Leo Anthony Celi, principal research scientist at the MIT Laboratory for Computational Physiology (LCP). 

    One key change is to make clinical data of all kinds openly available, with the proper privacy safeguards, says Celi, a practicing intensive care unit (ICU) physician at the Beth Israel Deaconess Medical Center (BIDMC) in Boston. Another key is to fully exploit these open data with multidisciplinary collaborations among clinicians, academic investigators, and industry. A third key is to focus on the varying needs of populations across every country, and to empower the experts there to drive advances in treatment, says Celi, who is also an associate professor at Harvard Medical School. 

    In all of this work, researchers must actively seek to overcome the perennial problem of bias in understanding and applying medical knowledge. This deeply damaging problem is only heightened with the massive onslaught of machine learning and other artificial intelligence technologies. “Computers will pick up all our unconscious, implicit biases when we make decisions,” Celi warns.

    Sharing medical data 

    Founded by the LCP, the MIT Critical Data consortium builds communities across disciplines to leverage the data that are routinely collected in the process of ICU care to understand health and disease better. “We connect people and align incentives,” Celi says. “In order to advance, hospitals need to work with universities, who need to work with industry partners, who need access to clinicians and data.” 

    The consortium’s flagship project is the MIMIC (Medical Information Mart for Intensive Care) ICU database built at BIDMC. With about 35,000 users around the world, the MIMIC cohort is the most widely analyzed in critical care medicine.
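
    Access to the full database runs through PhysioNet and a data use agreement, but the openly available MIMIC-IV demo can be explored with ordinary tools. A sketch (file paths and column names follow the MIMIC-IV demo release and should be verified against the published schema):

    ```python
    import pandas as pd

    # Assumes the MIMIC-IV Clinical Database Demo has been downloaded locally.
    patients = pd.read_csv("mimic-iv-demo/hosp/patients.csv.gz")
    icustays = pd.read_csv("mimic-iv-demo/icu/icustays.csv.gz")

    # Join ICU stays to demographics and summarize length of stay (days) by sex.
    cohort = icustays.merge(patients, on="subject_id", how="left")
    print(cohort.groupby("gender")["los"].describe())
    ```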

    International collaborations such as MIMIC highlight one of the biggest obstacles in health care: most clinical research is performed in rich countries, typically with most clinical trial participants being white males. “The findings of these trials are translated into treatment recommendations for every patient around the world,” says Celi. “We think that this is a major contributor to the sub-optimal outcomes that we see in the treatment of all sorts of diseases in Africa, in Asia, in Latin America.” 

    To fix this problem, “groups who are disproportionately burdened by disease should be setting the research agenda,” Celi says. 

    That’s the rule in the “datathons” (health hackathons) that MIT Critical Data has organized in more than two dozen countries, which apply the latest data science techniques to real-world health data. At the datathons, MIT students and faculty both learn from local experts and share their own skill sets. Many of these several-day events are sponsored by the MIT Industrial Liaison Program, the MIT International Science and Technology Initiatives program, or the MIT Sloan Latin America Office. 

    Datathons are typically held in the host country’s national language or dialect, rather than English, with representation from academia, industry, government, and other stakeholders. Doctors, nurses, pharmacists, and social workers join up with computer science, engineering, and humanities students to brainstorm and analyze potential solutions. “They need each other’s expertise to fully leverage and discover and validate the knowledge that is encrypted in the data, and that will be translated into the way they deliver care,” says Celi.

    “Everywhere we go, there is incredible talent that is completely capable of designing solutions to their health-care problems,” he emphasizes. The datathons aim to further empower the professionals and students in the host countries to drive medical research, innovation, and entrepreneurship.

    Fighting built-in bias 

    Applying machine learning and other advanced data science techniques to medical data reveals that “bias exists in the data in unimaginable ways” in every type of health product, Celi says. Often this bias is rooted in the clinical trials required to approve medical devices and therapies. 

    One dramatic example comes from pulse oximeters, which provide readouts on oxygen levels in a patient’s blood. It turns out that these devices overestimate oxygen levels for people of color. “We have been under-treating individuals of color because the nurses and the doctors have been falsely assured that their patients have adequate oxygenation,” he says. “We think that we have harmed, if not killed, a lot of individuals in the past, especially during Covid, as a result of a technology that was not designed with inclusive test subjects.” 
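
    Where paired readings exist, the bias Celi describes can be audited directly: compare the pulse oximeter value (SpO2) with the gold-standard arterial measurement (SaO2) and count, per group, how often the device looks reassuring while the blood gas shows hypoxemia. A sketch with hypothetical column names, using thresholds similar to those in published occult-hypoxemia studies:

    ```python
    import pandas as pd

    # Hypothetical table of pulse-oximeter and arterial readings taken minutes apart.
    df = pd.read_csv("paired_oximetry.csv")   # assumed columns: race, spo2, sao2

    # "Hidden hypoxemia": the device reads >= 92% while arterial saturation is < 88%.
    df["hidden_hypoxemia"] = (df["spo2"] >= 92) & (df["sao2"] < 88)

    # An unbiased device would show roughly equal rates across groups.
    print(df.groupby("race")["hidden_hypoxemia"].mean().sort_values())
    ```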

    Such dangers only increase as the universe of medical data expands. “The data that we have available now for research is maybe two or three levels of magnitude more than what we had even 10 years ago,” Celi says. MIMIC, for example, now includes terabytes of X-ray, echocardiogram, and electrocardiogram data, all linked with related health records. Such enormous sets of data allow investigators to detect health patterns that were previously invisible. 

    “But there is a caveat,” Celi says. “It is trivial for computers to learn sensitive attributes that are not very obvious to human experts.” In a study released last year, for instance, he and his colleagues showed that algorithms can tell if a chest X-ray image belongs to a white patient or person of color, even without looking at any other clinical data. 

    “More concerningly, groups including ours have demonstrated that computers can learn easily if you’re rich or poor, just from your imaging alone,” Celi says. “We were able to train a computer to predict if you are on Medicaid, or if you have private insurance, if you feed them with chest X-rays without any abnormality. So again, computers are catching features that are not visible to the human eye.” And these features may lead algorithms to advise against therapies for people who are Black or poor, he says. 
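
    A standard way to quantify this kind of leakage is a probing test: train a simple classifier on the imaging model’s features and measure how well it predicts the sensitive attribute, where chance-level AUC means no signal and anything well above it means the features encode the attribute. A generic sketch (synthetic vectors standing in for real chest X-ray embeddings, with leakage planted deliberately):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(1)
    n, d = 5_000, 128
    insurance = rng.integers(0, 2, n)       # sensitive attribute (e.g., Medicaid vs. private)
    embeddings = rng.normal(size=(n, d))
    embeddings[:, 0] += 0.8 * insurance     # planted signal for illustration

    xtr, xte, ytr, yte = train_test_split(embeddings, insurance,
                                          test_size=0.3, random_state=1)
    probe = LogisticRegression(max_iter=1000).fit(xtr, ytr)
    auc = roc_auc_score(yte, probe.predict_proba(xte)[:, 1])
    print(f"probe AUC = {auc:.3f}  (0.5 would mean no leakage)")
    ```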

    Opening up industry opportunities 

    Every stakeholder stands to benefit when pharmaceutical firms and other health-care corporations better understand societal needs and can target their treatments appropriately, Celi says. 

    “We need to bring to the table the vendors of electronic health records and the medical device manufacturers, as well as the pharmaceutical companies,” he explains. “They need to be more aware of the disparities in the way that they perform their research. They need to have more investigators representing underrepresented groups of people, to provide that lens to come up with better designs of health products.” 

    Corporations could benefit by sharing results from their clinical trials, and could immediately see these potential benefits by participating in datathons, Celi says. “They could really witness the magic that happens when that data is curated and analyzed by students and clinicians with different backgrounds from different countries. So we’re calling out our partners in the pharmaceutical industry to organize these events with us!”

  • Simulating discrimination in virtual reality

    Have you ever been advised to “walk a mile in someone else’s shoes?” Considering another person’s perspective can be a challenging endeavor — but recognizing our errors and biases is key to building understanding across communities. By challenging our preconceptions, we confront prejudice, such as racism and xenophobia, and potentially develop a more inclusive perspective about others.

    To assist with perspective-taking, MIT researchers have developed “On the Plane,” a virtual reality role-playing game (VR RPG) that simulates discrimination. In this case, the game portrays xenophobia directed against a Malaysian American woman, but the approach can be generalized. Situated on an airplane, players can take on the role of characters from different backgrounds, engaging in dialogue with others while making in-game choices in response to a series of prompts. In turn, players’ decisions control the outcome of a tense conversation between the characters about cultural differences.

    As a VR RPG, “On the Plane” encourages players to take on new roles that may be outside of their personal experiences in the first person, allowing them to confront in-group/out-group bias by incorporating new perspectives into their understanding of different cultures. Players engage with three characters: Sarah, a first-generation Muslim American of Malaysian ancestry who wears a hijab; Marianne, a white woman from the Midwest with little exposure to other cultures and customs; or a flight attendant. Sarah represents the out group, Marianne is a member of the in group, and the flight staffer is a bystander witnessing an exchange between the two passengers.

    “This project is part of our efforts to harness the power of virtual reality and artificial intelligence to address social ills, such as discrimination and xenophobia,” says Caglar Yildirim, an MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) research scientist who is a co-author and co-game designer on the project. “Through the exchange between the two passengers, players experience how one passenger’s xenophobia manifests itself and how it affects the other passenger. The simulation engages players in critical reflection and seeks to foster empathy for the passenger who was ‘othered’ due to her outfit being not so ‘prototypical’ of what an American should look like.”

    Yildirim worked alongside the project’s principal investigator, D. Fox Harrell, MIT professor of digital media and AI at CSAIL, the Program in Comparative Media Studies/Writing (CMS), and the Institute for Data, Systems, and Society (IDSS) and founding director of the MIT Center for Advanced Virtuality. “It is not possible for a simulation to give someone the life experiences of another person, but while you cannot ‘walk in someone else’s shoes’ in that sense, a system like this can help people recognize and understand the social patterns at work when it comes to issues like bias,” says Harrell, who is also co-author and designer on this project. “An engaging, immersive, interactive narrative can also impact people emotionally, opening the door for users’ perspectives to be transformed and broadened.”

    This simulation also utilizes an interactive narrative engine that creates several options for responses to in-game interactions based on a model of how people are categorized socially. The tool grants players a chance to alter their standing in the simulation through their reply choices to each prompt, affecting their affinity toward the other two characters. For example, if you play as the flight attendant, you can react to Marianne’s xenophobic expressions and attitudes toward Sarah, changing your affinities. The engine will then provide you with a different set of narrative events based on your changes in standing with others.
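
    The article does not publish the engine’s internals, but the behavior described, with reply choices shifting the player’s affinity toward each character and affinity gating which narrative events come next, can be sketched in a few lines (all names and numbers invented):

    ```python
    # Minimal sketch of an affinity-gated narrative branch (hypothetical values).
    affinity = {"Sarah": 0.0, "Marianne": 0.0}

    CHOICE_EFFECTS = {
        "intervene":   {"Sarah": +0.4, "Marianne": -0.2},
        "stay_silent": {"Sarah": -0.3, "Marianne": +0.1},
    }

    def choose(option):
        for character, delta in CHOICE_EFFECTS[option].items():
            affinity[character] += delta

    def next_event():
        # The engine serves different narrative events depending on standing.
        if affinity["Sarah"] > 0.3:
            return "sarah_opens_up"
        if affinity["Marianne"] > 0.3:
            return "marianne_doubles_down"
        return "tense_silence"

    choose("intervene")
    print(affinity, "->", next_event())
    ```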

    To animate each avatar, “On the Plane” incorporates artificial intelligence knowledge representation techniques controlled by probabilistic finite state machines, a tool commonly used in machine learning systems for pattern recognition. With the help of these machines, characters’ body language and gestures are customizable: if you play as Marianne, the game will customize her mannerisms toward Sarah based on user inputs, impacting how comfortable she appears in front of a member of a perceived out group. Similarly, players can do the same from Sarah or the flight attendant’s point of view (a minimal sketch of this mechanism follows below).

    In a 2018 paper based on work done in a collaboration between MIT CSAIL and the Qatar Computing Research Institute, Harrell and co-author Sercan Şengün advocated for virtual system designers to be more inclusive of Middle Eastern identities and customs. They claimed that if designers allowed users to customize virtual avatars more representative of their background, it might empower players to engage in a more supportive experience. Four years later, “On the Plane” accomplishes a similar goal, incorporating a Muslim’s perspective into an immersive environment.

    “Many virtual identity systems, such as avatars, accounts, profiles, and player characters, are not designed to serve the needs of people across diverse cultures. We have used statistical and AI methods in conjunction with qualitative approaches to learn where the gaps are,” they note. “Our project helps engender perspective transformation so that people will treat each other with respect and enhanced understanding across diverse cultural avatar representations.”
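
    Returning to the animation machinery mentioned above: a probabilistic finite state machine maps each gesture state to a distribution over next states, and modulating those probabilities with a comfort or affinity value is enough to shift an avatar’s body language. A minimal sketch (states and numbers invented, not the game’s actual parameters):

    ```python
    import random

    # Hypothetical gesture states and base transition probabilities.
    TRANSITIONS = {
        "neutral": {"neutral": 0.6, "open": 0.2, "closed": 0.2},
        "open":    {"neutral": 0.3, "open": 0.6, "closed": 0.1},
        "closed":  {"neutral": 0.3, "open": 0.1, "closed": 0.6},
    }

    def step(state, comfort):
        # Bias transitions toward "open" body language when comfort is high,
        # toward "closed" when it is low (comfort in [-1, 1]).
        weights = dict(TRANSITIONS[state])
        weights["open"] *= 1 + comfort
        weights["closed"] *= 1 - comfort
        states, probs = zip(*weights.items())
        total = sum(probs)
        return random.choices(states, [p / total for p in probs])[0]

    state = "neutral"
    for _ in range(5):                  # an uneasy character keeps closing off
        state = step(state, comfort=-0.6)
        print(state, end=" ")
    ```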

    Harrell and Yildirim’s work is part of the MIT IDSS’s Initiative on Combatting Systemic Racism (ICSR). Harrell is on the initiative’s steering committee and is the leader of the newly forming Antiracism, Games, and Immersive Media vertical, which studies behavior, cognition, social phenomena, and computational systems related to race and racism in video games and immersive experiences.

    The researchers’ latest project is part of the ICSR’s broader goal to launch and coordinate cross-disciplinary research that addresses racially discriminatory processes across American institutions. Using big data, members of the research initiative develop and employ computing tools that drive racial equity. Yildirim and Harrell accomplish this goal by depicting a frequent, problematic scenario that illustrates how bias creeps into our everyday lives.

    “In a post-9/11 world, Muslims often experience ethnic profiling in American airports. ‘On the Plane’ builds off of that type of in-group favoritism, a well-established finding in psychology,” says MIT Professor Fotini Christia, director of the Sociotechnical Systems Research Center (SSRC) and associate director of IDSS. “This game also takes a novel approach to analyzing hardwired bias by utilizing VR instead of field experiments to simulate prejudice. Excitingly, this research demonstrates that VR can be used as a tool to help us better measure bias, combating systemic racism and other forms of discrimination.”

    “On the Plane” was developed on the Unity game engine using the XR Interaction Toolkit and Harrell’s Chimeria platform for authoring interactive narratives that involve social categorization. The game will be deployed for research studies later this year on both desktop computers and the standalone, wireless Meta Quest headsets. A paper on the work was presented in December at the 2022 IEEE International Conference on Artificial Intelligence and Virtual Reality.

  • Q&A: Global challenges surrounding the deployment of AI

    The AI Policy Forum (AIPF) is an initiative of the MIT Schwarzman College of Computing to move the global conversation about the impact of artificial intelligence from principles to practical policy implementation. Formed in late 2020, AIPF brings together leaders in government, business, and academia to develop approaches to address the societal challenges posed by the rapid advances and increasing applicability of AI.

    The co-chairs of the AI Policy Forum are Aleksander Madry, the Cadence Design Systems Professor; Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science; and Luis Videgaray, senior lecturer at MIT Sloan School of Management and director of MIT AI Policy for the World Project. Here, they discuss some of the key issues facing the AI policy landscape today and the challenges surrounding the deployment of AI. The three are co-organizers of the upcoming AI Policy Forum Summit on Sept. 28, which will further explore the issues discussed here.

    Q: Can you talk about the ongoing work of the AI Policy Forum and the AI policy landscape generally?

    Ozdaglar: There is no shortage of discussion about AI at different venues, but conversations are often high-level, focused on questions of ethics and principles, or on policy problems alone. The approach the AIPF takes to its work is to target specific questions with actionable policy solutions and engage with the stakeholders working directly in these areas. We work “behind the scenes” with smaller focus groups to tackle these challenges and aim to bring visibility to some potential solutions alongside the players working directly on them through larger gatherings.

    Q: AI impacts many sectors, which makes us naturally worry about its trustworthiness. Are there any emerging best practices for development and deployment of trustworthy AI?

    Madry: The most important thing to understand regarding deploying trustworthy AI is that AI technology isn’t some natural, preordained phenomenon. It is something built by people. People who are making certain design decisions.

    We thus need to advance research that can guide these decisions as well as provide more desirable solutions. But we also need to be deliberate and think carefully about the incentives that drive these decisions. 

    Now, these incentives stem largely from business considerations, but not exclusively so. That is, we should also recognize that proper laws and regulations, as well as thoughtful industry standards, have a big role to play here too.

    Indeed, governments can put in place rules that prioritize the value of deploying AI while being keenly aware of the corresponding downsides, pitfalls, and impossibilities. The design of such rules will be an ongoing and evolving process as the technology continues to improve and change, and we need to adapt to socio-political realities as well.

    Q: Perhaps one of the most rapidly evolving domains in AI deployment is in the financial sector. From a policy perspective, how should governments, regulators, and lawmakers make AI work best for consumers in finance?

    Videgaray: The financial sector is seeing a number of trends that present policy challenges at the intersection of AI systems. For one, there is the issue of explainability. By law (in the U.S. and in many other countries), lenders need to provide explanations to customers when they take actions that are in any way deleterious to a customer’s interest, like denying a loan. However, as financial services increasingly rely on automated systems and machine learning models, the capacity of banks to unpack the “black box” of machine learning to provide that level of mandated explanation becomes tenuous. So how should the finance industry and its regulators adapt to this advance in technology? Perhaps we need new standards and expectations, as well as tools to meet these legal requirements.
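
    For an interpretable model, the mandated explanation can fall out of the model directly: each feature’s signed contribution to an applicant’s score identifies the principal reasons for a denial. A sketch of that idea with logistic regression (toy features; real adverse-action notices follow regulatory formats, and black-box models would need post-hoc attribution methods instead):

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(7)
    features = ["income", "debt_ratio", "credit_history_len"]
    X = rng.normal(size=(2000, 3))
    y = ((X @ np.array([1.0, -1.5, 0.8]) + rng.normal(0, 1, 2000)) > 0).astype(int)

    model = LogisticRegression().fit(X, y)

    def denial_reasons(x, top_k=2):
        # Rank features by how strongly they pushed this applicant's score down.
        contributions = model.coef_[0] * x
        worst = np.argsort(contributions)[:top_k]
        return [features[i] for i in worst]

    applicant = np.array([-1.0, 2.0, 0.1])   # low income, high debt ratio
    if model.predict([applicant])[0] == 0:
        print("Denied; principal reasons:", denial_reasons(applicant))
    ```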

    Meanwhile, economies of scale and data network effects are leading to a proliferation of AI outsourcing, and more broadly, AI-as-a-service is becoming increasingly common in the finance industry. In particular, we are seeing fintech companies provide the tools for underwriting to other financial institutions — be it large banks or small, local credit unions. What does this segmentation of the supply chain mean for the industry? Who is accountable for the potential problems in AI systems deployed through several layers of outsourcing? How can regulators adapt to guarantee their mandates of financial stability, fairness, and other societal standards?

    Q: Social media is one of the most controversial sectors of the economy, resulting in many societal shifts and disruptions around the world. What policies or reforms might be needed to best ensure social media is a force for public good and not public harm?

    Ozdaglar: The role of social media in society is of growing concern to many, but the nature of these concerns can vary quite a bit — with some seeing social media as not doing enough to prevent, for example, misinformation and extremism, and others seeing it as unduly silencing certain viewpoints. This lack of a unified view on what the problem is impacts the capacity to enact any change. All of that is additionally coupled with the complexities of the legal framework in the U.S. spanning the First Amendment, Section 230 of the Communications Decency Act, and trade laws.

    However, these difficulties in regulating social media do not mean that there is nothing to be done. Indeed, regulators have begun to tighten their control over social media companies, both in the United States and abroad, be it through antitrust procedures or other means. In particular, Ofcom in the U.K. and the European Union are already introducing new layers of oversight to platforms. Additionally, some have proposed taxes on online advertising to address the negative externalities caused by the current social media business model. So, the policy tools are there, if the political will and proper guidance exist to implement them.

  • Caspar Hare, Georgia Perakis named associate deans of Social and Ethical Responsibilities of Computing

    Caspar Hare and Georgia Perakis have been appointed the new associate deans of the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative in the MIT Stephen A. Schwarzman College of Computing. Their new roles will take effect on Sept. 1.

    “Infusing social and ethical aspects of computing in academic research and education is a critical component of the college mission,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I look forward to working with Caspar and Georgia on continuing to develop and advance SERC and its reach across MIT. Their complementary backgrounds and their broad connections across MIT will be invaluable to this next chapter of SERC.”

    Caspar Hare

    Hare is a professor of philosophy in the Department of Linguistics and Philosophy. A member of the MIT faculty since 2003, his main interests are in ethics, metaphysics, and epistemology. The general theme of his recent work has been to bring ideas about practical rationality and metaphysics to bear on issues in normative ethics and epistemology. He is the author of two books: “On Myself, and Other, Less Important Subjects” (Princeton University Press, 2009), about the metaphysics of perspective, and “The Limits of Kindness” (Oxford University Press, 2013), about normative ethics.

    Georgia Perakis

    Perakis is the William F. Pounds Professor of Management and professor of operations research, statistics, and operations management at the MIT Sloan School of Management, where she has been a faculty member since 1998. She investigates the theory and practice of analytics and its role in operations problems and is particularly interested in how to solve complex and practical problems in pricing, revenue management, supply chains, health care, transportation, and energy applications, among other areas. Since 2019, she has been the co-director of the Operations Research Center, an interdepartmental PhD program that jointly reports to MIT Sloan and the MIT Schwarzman College of Computing, a role in which she will remain. Perakis will also assume an associate dean role at MIT Sloan in recognition of her leadership.

    Hare and Perakis succeed David Kaiser, the Germeshausen Professor of the History of Science and professor of physics, and Julie Shah, the H.N. Slater Professor of Aeronautics and Astronautics, who will be stepping down from their roles at the conclusion of their three-year term on Aug. 31.

    “My deepest thanks to Dave and Julie for their tremendous leadership of SERC and contributions to the college as associate deans,” says Huttenlocher.

    SERC impact

    As the inaugural associate deans of SERC, Kaiser and Shah have been responsible for advancing a mission to incorporate humanist, social science, social responsibility, and civic perspectives into MIT’s teaching, research, and implementation of computing. In doing so, they have engaged dozens of faculty members and thousands of students from across MIT during these first three years of the initiative.

    They have brought together people from a broad array of disciplines to collaborate on crafting original materials such as active learning projects, homework assignments, and in-class demonstrations. A collection of these materials was recently published and is now freely available to the world via MIT OpenCourseWare.

    In February 2021, they launched the MIT Case Studies in Social and Ethical Responsibilities of Computing for undergraduate instruction across a range of classes and fields of study. The specially commissioned and peer-reviewed cases are based on original research and are brief by design. Three issues have been published to date and a fourth will be released later this summer. Kaiser will continue to oversee the successful new series as editor.

    Last year, 60 undergraduates, graduate students, and postdocs joined a community of SERC Scholars to help advance SERC efforts in the college. The scholars participate in unique opportunities throughout, such as the summer Experiential Ethics program. A multidisciplinary team of graduate students last winter worked with the instructors and teaching assistants of class 6.036 (Introduction to Machine Learning), MIT’s largest machine learning course, to infuse weekly labs with material covering ethical computing, data and model bias, and fairness in machine learning through SERC.

    Through efforts such as these, SERC has had a substantial impact at MIT and beyond. Over the course of their tenure, Kaiser and Shah have engaged about 80 faculty members, and more than 2,100 students took courses that included new SERC content in the last year alone. SERC’s reach extended well beyond engineering students, with about 500 students exposed to SERC content through courses offered in the School of Humanities, Arts, and Social Sciences, the MIT Sloan School of Management, and the School of Architecture and Planning.