More stories

  • 3 Questions: Renaud Fournier on transforming MIT’s digital landscape

    Renaud Fournier SM ’95 joined the Institute in September 2023 in the newly established role of chief officer for business and digital transformation and is leading a team focused on simplifying business operations and systems for the MIT community. Fournier has extensive experience implementing systems and solving data challenges, both in higher education and the private sector — most recently, leading the digital transformation effort at New York University. Here, Fournier speaks about how he and his team will work closely with members of the MIT community to chart a course for MIT’s digital evolution.

    Q: What are MIT’s enterprise systems and how are they challenging for our community?

    A: The MIT community relies on our enterprise systems for a range of activities — everything from hiring and evaluating employees to managing research grants and facilities projects to maintaining student information. SAP is our current enterprise resource planning system for human resources, finance, and facilities management, and it’s integrated with other systems that provide additional business functionality. Some of these systems are purchased, like Coupa, while others are partially or fully homegrown, like Kuali Coeus and NIMBUS. Along with SAP, our other core systems — for example, Advance and MITSIS — feed data into a central data warehouse to support reporting.

    MIT’s enterprise systems and data landscape have evolved organically over the past 30 years. The Institute has grown considerably more complex in that time, and these systems no longer reflect the best practices or technology available in the IT market.

    Q: What digital transformation projects are you most focused on?

    A: Our primary goal is to free up our community’s time so that they can achieve their greatest impact. The vision is to create easy-to-use and well-integrated systems, along with comprehensible and accessible data for reporting and analysis. To accomplish this, we will be taking a series of actions. These include modernizing our enterprise systems and data architecture to take advantage of better technology and functionality, within a cohesive and well-integrated landscape, and simplifying our business processes. To make our data accessible and actionable, we will implement more robust data governance, assigning clear ownership and accountability. And we will offer IT support that enables our community to accomplish its objectives. We need to address systems, processes, data, and support holistically, while engaging and assisting our community every step of the way.    

    Q: What are your next steps?

    A: Over the next few months, I will be building a team to guide the community on this journey, in partnership with IS&T [Information Systems and Technology], other central units, and our academic areas. Together, we will be developing a thoughtful and actionable multi-year roadmap of digital transformation projects, which will help us to produce a steady stream of improvements for our community. We have not selected any systems yet or determined the order in which they will be implemented. Engagement with stakeholders from central, academic, and research areas will inform how we prioritize projects over the next few years. Once we have created the roadmap to guide us, we look forward to the next phase — getting started on the work itself.

  • A new dataset of Arctic images will spur artificial intelligence research

    As the U.S. Coast Guard (USCG) icebreaker Healy takes part in a voyage across the North Pole this summer, it is capturing images of the Arctic to further the study of this rapidly changing region. Lincoln Laboratory researchers installed a camera system aboard the Healy while the ship was in port in Seattle, before it embarked on a three-month science mission on July 11. The resulting dataset, which will be one of the first of its kind, will be used to develop artificial intelligence tools that can analyze Arctic imagery.

    “This dataset not only can help mariners navigate more safely and operate more efficiently, but also help protect our nation by providing critical maritime domain awareness and an improved understanding of how AI analysis can be brought to bear in this challenging and unique environment,” says Jo Kurucar, a researcher in Lincoln Laboratory’s AI Software Architectures and Algorithms Group, which led this project.

    As the planet warms and sea ice melts, Arctic passages are opening up to more traffic, from military vessels to ships conducting illegal fishing. These movements may pose national security challenges for the United States. The opening Arctic also raises questions about how the region’s climate, wildlife, and geography are changing.

    Today, very few imagery datasets of the Arctic exist to study these changes. Overhead images from satellites or aircraft can only provide limited information about the environment. An outward-looking camera attached to a ship can capture more details of the setting and different angles of objects, such as other ships, in the scene. These types of images can then be used to train AI computer-vision tools, which can help the USCG plan naval missions and automate analysis. According to Kurucar, USCG assets in the Arctic are spread thin and can benefit greatly from AI tools, which can act as a force multiplier.

    The Healy is the USCG’s largest and most technologically advanced icebreaker. Given its current mission, it was a fitting candidate to be equipped with a new sensor to gather this dataset. The laboratory research team collaborated with the USCG Research and Development Center to determine the sensor requirements. Together, they developed the Cold Region Imaging and Surveillance Platform (CRISP).

    “Lincoln Laboratory has an excellent relationship with the Coast Guard, especially with the Research and Development Center. Over a decade, we’ve established ties that enabled the deployment of the CRISP system,” says Amna Greaves, the CRISP project lead and an assistant leader in the AI Software Architectures and Algorithms Group. “We have strong ties not only because of the USCG veterans working at the laboratory and in our group, but also because our technology missions are complementary. Today it was deploying infrared sensing in the Arctic; tomorrow it could be operating quadruped robot dogs on a fast-response cutter.”

    The CRISP system comprises a long-wave infrared camera, manufactured by Teledyne FLIR (for forward-looking infrared), that is designed for harsh maritime environments. The camera can stabilize itself during rough seas and image in complete darkness, fog, and glare. It is paired with a GPS-enabled time-synchronized clock and a network video recorder to record both video and still imagery along with GPS-positional data.  

    The camera is mounted at the front of the ship’s fly bridge, and the electronics are housed in a ruggedized rack on the bridge. The system can be operated manually from the bridge or be placed into an autonomous surveillance mode, in which it slowly pans back and forth, recording 15 minutes of video every three hours and a still image once every 15 seconds.
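
    That capture cadence is simple to express as a timed loop. Below is a minimal sketch in Python of the autonomous schedule described above; the two capture functions are hypothetical placeholders, since the actual CRISP recorder interface is not public.

    ```python
    import time

    STILL_INTERVAL_S = 15            # one still image every 15 seconds
    VIDEO_INTERVAL_S = 3 * 60 * 60   # one video clip every three hours
    VIDEO_DURATION_S = 15 * 60       # each clip runs 15 minutes

    def capture_still():
        # Placeholder: the real system writes a still image with GPS-tagged time.
        print("still image captured")

    def record_video(duration_s):
        # Placeholder: the real system records to the network video recorder.
        print(f"recorded {duration_s} seconds of video")

    def autonomous_surveillance():
        """Approximate the autonomous surveillance mode described in the article."""
        last_video = time.monotonic() - VIDEO_INTERVAL_S  # record a clip at startup
        while True:
            if time.monotonic() - last_video >= VIDEO_INTERVAL_S:
                record_video(VIDEO_DURATION_S)
                last_video = time.monotonic()
            capture_still()
            time.sleep(STILL_INTERVAL_S)
    ```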

    “The installation of the equipment was a unique and fun experience. As with any good project, our expectations going into the install did not meet reality,” says Michael Emily, the project’s IT systems administrator who traveled to Seattle for the install. Working with the ship’s crew, the laboratory team had to quickly adjust their route for running cables from the camera to the observation station after they discovered that the expected access points weren’t in fact accessible. “We had 100-foot cables made for this project just in case of this type of scenario, which was a good thing because we only had a few inches to spare,” Emily says.

    The CRISP project team plans to publicly release the dataset, anticipated to be about 4 terabytes in size, once the USCG science mission concludes in the fall.

    The goal in releasing the dataset is to enable the wider research community to develop better tools for those operating in the Arctic, especially as this region becomes more navigable. “Collecting and publishing the data allows for faster and greater progress than what we could accomplish on our own,” Kurucar adds. “It also enables the laboratory to engage in more advanced AI applications while others make more incremental advances using the dataset.”

    On top of releasing the dataset, the laboratory team plans to provide a baseline object-detection model, from which others can make progress on their own models. More advanced AI applications planned for development include classifiers for specific objects in the scene and the ability to identify and track objects across images.
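
    As a sense of what such a baseline might look like, the sketch below runs a generic pretrained detector from torchvision on a single frame. It is illustrative only: the file name is hypothetical, and the laboratory’s released baseline may use a different architecture and Arctic-specific classes.

    ```python
    import torch
    from PIL import Image
    from torchvision import transforms
    from torchvision.models.detection import fasterrcnn_resnet50_fpn

    # Generic COCO-pretrained detector as a stand-in for the forthcoming baseline
    # (requires a recent torchvision release).
    model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
    model.eval()

    # "arctic_frame.png" is a hypothetical name for one CRISP still image.
    image = Image.open("arctic_frame.png").convert("RGB")
    tensor = transforms.ToTensor()(image)

    with torch.no_grad():
        prediction = model([tensor])[0]

    # Keep only confident detections (e.g., other vessels in the scene).
    for box, score, label in zip(prediction["boxes"], prediction["scores"], prediction["labels"]):
        if score >= 0.5:
            print(f"label={label.item()} score={score:.2f} box={[round(v, 1) for v in box.tolist()]}")
    ```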

    Beyond assisting with USCG missions, this project could create an influential dataset for researchers looking to apply AI to data from the Arctic to help combat climate change, says Paul Metzger, who leads the AI Software Architectures and Algorithms Group.

    Metzger adds that the group was honored to be a part of this project and is excited to see the advances that come from applying AI to novel challenges facing the United States: “I’m extremely proud of how our group applies AI to the highest-priority challenges in our nation, from predicting outbreaks of Covid-19 and assisting the U.S. European Command in their support of Ukraine to now employing AI in the Arctic for maritime awareness.”

    Once the dataset is available, it will be free to download on the Lincoln Laboratory dataset website.

  • Understanding viral justice

    In the wake of the Covid-19 pandemic, the word “viral” has a new resonance, and it’s not necessarily positive. Ruha Benjamin, a scholar who investigates the social dimensions of science, medicine, and technology, advocates a shift in perspective. She thinks justice can also be contagious. That’s the premise of Benjamin’s award-winning book “Viral Justice: How We Grow the World We Want,” as she shared with MIT Libraries staff on a June 14 visit. 

    “If this pandemic has taught us anything, it’s that something almost undetectable can be deadly, and that we can transmit it without even knowing,” said Benjamin, professor of African American studies at Princeton University. “Doesn’t this imply that small things, seemingly minor actions, decisions, or habits, could have exponential effects in the other direction, tipping the scales towards justice?” 

    To seek a more just world, Benjamin exhorted library staff to notice the ways exclusion is built into our daily lives, showing examples of park benches with armrests at regular intervals. On the surface they appear welcoming, but they also make lying down — or sleeping — impossible. This idea is taken to the extreme with “Pay and Sit,” an art installation by Fabian Brunsing in the form of a bench that deploys sharp spikes on the seat if the user doesn’t pay a meter. It serves as a powerful metaphor for discriminatory design. 

    “Dr. Benjamin’s keynote was seriously mind-blowing,” said Cherry Ibrahim, human resources generalist in the MIT Libraries. “One part that really grabbed my attention was when she talked about benches purposely designed to prevent unhoused people from sleeping on them. There are these hidden spikes in our community that we might not even realize because they don’t directly impact us.” 

    Benjamin urged the audience to look for those “spikes,” which new technologies can make even more insidious — gender and racial bias in facial recognition, the use of racial data in software used to predict student success, algorithmic bias in health care — often in the guise of progress. She coined the term “the New Jim Code” to describe the combination of coded bias and the imagined objectivity we ascribe to technology. 

    “At the MIT Libraries, we’re deeply concerned with combating inequities through our work, whether it’s democratizing access to data or investigating ways disparate communities can participate in scholarship with minimal bias or barriers,” says Director of Libraries Chris Bourg. “It’s our mission to remove the ‘spikes’ in the systems through which we create, use, and share knowledge.”

    Calling out the harms encoded into our digital world is critical, argues Benjamin, but we must also create alternatives. This is where the collective power of individuals can be transformative. Benjamin shared examples of those who are “re-imagining the default settings of technology and society,” citing initiatives like the Data for Black Lives movement and the Detroit Community Technology Project. “I’m interested in the way that everyday people are changing the digital ecosystem and demanding different kinds of rights and responsibilities and protections,” she said.

    In 2020, Benjamin founded the Ida B. Wells Just Data Lab with a goal of bringing together students, educators, activists, and artists to develop a critical and creative approach to data conception, production, and circulation. Its projects have examined different aspects of data and racial inequality: assessing the impact of Covid-19 on student learning; providing resources that confront the experience of Black mourning, grief, and mental health; and developing a playbook for Black maternal mental health. Through the lab’s student-led projects, Benjamin sees the next generation re-imagining technology in ways that respond to the needs of marginalized people.

    “If inequity is woven into the very fabric of our society — we see it from policing to education to health care to work — then each twist, coil, and code is a chance for us to weave new patterns, practices, and politics,” she said. “The vastness of the problems that we’re up against will be their undoing.”

  • Making sense of all things data

    Data, and more specifically using data, is not a new concept, but it remains an elusive one. It comes with terms like “the internet of things” (IoT) and “the cloud,” and no matter how often those are explained, smart people can still be confused. And then there’s the amount of information available and the speed with which it comes in. Software is omnipresent. It’s in coffeemakers and watches, gathering data every second. The question becomes how to harness all this new technology and take advantage of the potential insights and analytics. It’s not a small ask.

    “Putting our arms around what digital transformation is can be difficult to do,” says Abel Sanchez. But as the executive director and research director of MIT’s Geospatial Data Center, that’s exactly what he does with his work in helping industries and executives shift their operations in order to make sense of their data and be able to use it to help their bottom lines.

    Handling the pace

    Data can lead to better business decisions. That’s not a new or surprising insight, but as Sanchez says, people still tend to work off of intuition. Part of the problem is that they don’t know what to do with the data they have, and there’s usually plenty of it. Another part is that so much information is being produced from so many sources. As soon as a person wakes up and turns on their phone or starts their car, software is running. It’s coming in fast, but because it’s also complex, “it outperforms people,” he says.

    Take Uber as an example: once a person taps the app for a ride, predictive models start firing at a rate of 1 million per second, all in order to optimize the trip, taking into account factors such as school schedules, roadway conditions, traffic, and a driver’s availability. It’s helpful for the task, but it’s something that “no human would be able to do,” he says.

    The solution requires a few components. One is a new way to store data. In the past, the classic was creating the “perfect library,” which was too structured. The response to that was to create a “data lake,” where all the information would go in and somehow people would make sense of it. “This also failed,” Sanchez says.

    Data storage needs to be re-imagined, with greater accessibility as a key element. In most corporations, only 10 to 20 percent of employees have the access and technical skill to work with the data. The rest have to go through a centralized resource and wait in a queue, an inefficient system. The goal, Sanchez says, is to democratize the information by moving to a modern stack, which would convert what he calls “dormant data” into “active data.” The result? Better decisions could be made.

    The first, big step companies need to take is the will to make the change. Part of it is an investment of money, but it’s also an attitude shift. Corporations can have an embedded culture where things have always been done a certain way, and deviating from that is resisted because it’s different. But when it comes to data, a new approach is needed. Managing and curating the information can no longer rest in the hands of one person with the institutional memory. It’s not possible, and it’s not practical, because companies are losing out on efficiency and productivity; with technology, “What used to take years to do, now you can do in days,” Sanchez says.

    The new player

    The above exemplifies what’s involved in coordinating data across four intertwined components: IoT, AI, the cloud, and security. The first two create the information, which then gets stored in the cloud, but it’s all for naught without robust security. Now one relative newcomer has entered the picture: blockchain technology, a term that is often repeated but still not fully understood, adding further to the confusion.

    Sanchez says that information has been handled and organized a certain way with the World Wide Web. Blockchain is an opportunity to be more nimble and productive by offering the chance to have an accepted identity, currency, and logic that works on a global scale. The holdup has always been that there’s never been any agreement on those three components on a global scale. It leads to people being shut out, inefficiency, and lost business.

    One example of blockchain’s potential, Sanchez says, is hospitals. In the United States, they’re private, and information has to be constantly integrated from doctors, insurance companies, labs, government regulators, and pharmaceutical companies. That leads to repeated steps to do something as simple as confirming a patient’s identity, which often can’t be agreed upon. With blockchain, these various entities can create a consortium using open-source code with no barriers to access, and they could quickly and easily identify a patient because an agreement is already in place, and with it “remove that level of effort.” It’s an incremental step, but one that can be built upon to reduce cost and risk.
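
    To make the consortium idea concrete, here is a minimal, purely illustrative sketch of the kind of append-only, hash-chained record such a group could share to agree on a patient identifier. A real deployment would also need a consensus protocol and access controls, and clinical data itself would stay off-chain.

    ```python
    import hashlib
    import json
    import time

    def make_block(record, prev_hash):
        """Append-only entry: each block commits to the previous one by hash."""
        block = {"timestamp": time.time(), "record": record, "prev_hash": prev_hash}
        payload = json.dumps(block, sort_keys=True).encode()
        block["hash"] = hashlib.sha256(payload).hexdigest()
        return block

    # Hypothetical consortium members registering the same patient identifier.
    genesis = make_block({"patient_id": "example-0001", "issuer": "hospital-A"}, "0" * 64)
    follow_up = make_block({"patient_id": "example-0001", "issuer": "lab-B"}, genesis["hash"])

    # Any member can verify the chain links without a central authority.
    print(genesis["hash"][:16], "->", follow_up["hash"][:16])
    ```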

    Another example — “one of the best examples,” Sanchez says — is what was done in Indonesia. Most of the rice, corn, and wheat produced there comes from smallholder farms. For the people making loans, it’s expensive to understand the risk of cultivating these plots of land. Compounding that, these farmers don’t have state-issued identities or credit records, so, “They don’t exist in the modern economic sense,” he says. They don’t have access to loans, and banks are losing out on potentially good customers.

    With this project, blockchain allowed local people to gather information about the farms on their smartphones. Banks could acquire the information and compensate the people with tokens, thereby incentivizing the work. The bank would see the creditworthiness of the farms, and farmers could end up getting fair loans.

    In the end, it creates a beneficial circle for the banks, farmers, and community, but it also represents what can be done with digital transformation by allowing businesses to optimize their processes, make better decisions, and ultimately profit.

    “It’s a tremendous new platform,” Sanchez says. “This is the promise.”

  • Joining the battle against health care bias

    Medical researchers are awash in a tsunami of clinical data. But we need major changes in how we gather, share, and apply this data to bring its benefits to all, says Leo Anthony Celi, principal research scientist at the MIT Laboratory for Computational Physiology (LCP). 

    One key change is to make clinical data of all kinds openly available, with the proper privacy safeguards, says Celi, a practicing intensive care unit (ICU) physician at the Beth Israel Deaconess Medical Center (BIDMC) in Boston. Another key is to fully exploit these open data with multidisciplinary collaborations among clinicians, academic investigators, and industry. A third key is to focus on the varying needs of populations across every country, and to empower the experts there to drive advances in treatment, says Celi, who is also an associate professor at Harvard Medical School. 

    In all of this work, researchers must actively seek to overcome the perennial problem of bias in understanding and applying medical knowledge. This deeply damaging problem is only heightened with the massive onslaught of machine learning and other artificial intelligence technologies. “Computers will pick up all our unconscious, implicit biases when we make decisions,” Celi warns.

    Sharing medical data 

    Founded by the LCP, the MIT Critical Data consortium builds communities across disciplines to leverage the data that are routinely collected in the process of ICU care to understand health and disease better. “We connect people and align incentives,” Celi says. “In order to advance, hospitals need to work with universities, who need to work with industry partners, who need access to clinicians and data.” 

    The consortium’s flagship project is the MIMIC (Medical Information Mart for Intensive Care) ICU database built at BIDMC. With about 35,000 users around the world, the MIMIC cohort is the most widely analyzed in critical care medicine.
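
    MIMIC is distributed through PhysioNet as a set of relational tables that credentialed researchers download and query locally. A minimal sketch of the kind of analysis it supports, assuming the MIMIC-IV hospital admissions table has been obtained (file names and columns vary slightly across MIMIC versions):

    ```python
    import pandas as pd

    # Assumes credentialed access to MIMIC-IV and a local copy of the admissions table.
    admissions = pd.read_csv("admissions.csv.gz")

    # Length of stay in days for each hospital admission.
    admissions["admittime"] = pd.to_datetime(admissions["admittime"])
    admissions["dischtime"] = pd.to_datetime(admissions["dischtime"])
    admissions["los_days"] = (
        admissions["dischtime"] - admissions["admittime"]
    ).dt.total_seconds() / 86400

    # Compare median length of stay across admission types.
    print(admissions.groupby("admission_type")["los_days"].median())
    ```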

    International collaborations such as MIMIC highlight one of the biggest obstacles in health care: most clinical research is performed in rich countries, typically with most clinical trial participants being white males. “The findings of these trials are translated into treatment recommendations for every patient around the world,” says Celi. “We think that this is a major contributor to the sub-optimal outcomes that we see in the treatment of all sorts of diseases in Africa, in Asia, in Latin America.” 

    To fix this problem, “groups who are disproportionately burdened by disease should be setting the research agenda,” Celi says. 

    That’s the rule in the “datathons” (health hackathons) that MIT Critical Data has organized in more than two dozen countries, which apply the latest data science techniques to real-world health data. At the datathons, MIT students and faculty both learn from local experts and share their own skill sets. Many of these several-day events are sponsored by the MIT Industrial Liaison Program, the MIT International Science and Technology Initiatives program, or the MIT Sloan Latin America Office. 

    Datathons are typically held in that country’s national language or dialect, rather than English, with representation from academia, industry, government, and other stakeholders. Doctors, nurses, pharmacists, and social workers join up with computer science, engineering, and humanities students to brainstorm and analyze potential solutions. “They need each other’s expertise to fully leverage and discover and validate the knowledge that is encrypted in the data, and that will be translated into the way they deliver care,” says Celi. 

    “Everywhere we go, there is incredible talent that is completely capable of designing solutions to their health-care problems,” he emphasizes. The datathons aim to further empower the professionals and students in the host countries to drive medical research, innovation, and entrepreneurship.

    Fighting built-in bias 

    Applying machine learning and other advanced data science techniques to medical data reveals that “bias exists in the data in unimaginable ways” in every type of health product, Celi says. Often this bias is rooted in the clinical trials required to approve medical devices and therapies. 

    One dramatic example comes from pulse oximeters, which provide readouts on oxygen levels in a patient’s blood. It turns out that these devices overestimate oxygen levels for people of color. “We have been under-treating individuals of color because the nurses and the doctors have been falsely assured that their patients have adequate oxygenation,” he says. “We think that we have harmed, if not killed, a lot of individuals in the past, especially during Covid, as a result of a technology that was not designed with inclusive test subjects.” 

    Such dangers only increase as the universe of medical data expands. “The data that we have available now for research is maybe two or three levels of magnitude more than what we had even 10 years ago,” Celi says. MIMIC, for example, now includes terabytes of X-ray, echocardiogram, and electrocardiogram data, all linked with related health records. Such enormous sets of data allow investigators to detect health patterns that were previously invisible. 

    “But there is a caveat,” Celi says. “It is trivial for computers to learn sensitive attributes that are not very obvious to human experts.” In a study released last year, for instance, he and his colleagues showed that algorithms can tell if a chest X-ray image belongs to a white patient or person of color, even without looking at any other clinical data. 

    “More concerningly, groups including ours have demonstrated that computers can learn easily if you’re rich or poor, just from your imaging alone,” Celi says. “We were able to train a computer to predict if you are on Medicaid, or if you have private insurance, if you feed them with chest X-rays without any abnormality. So again, computers are catching features that are not visible to the human eye.” And these features may lead algorithms to advise against therapies for people who are Black or poor, he says. 
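
    The audits Celi describes boil down to asking whether a model can recover a sensitive attribute from imaging features better than chance. The sketch below illustrates that test with synthetic data standing in for chest X-ray features; it is not the published pipeline from these studies.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-ins: in the real studies, X would be features extracted from
    # chest X-rays and y a sensitive attribute (e.g., insurance type).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 128))
    y = rng.integers(0, 2, size=2000)

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    auc = roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1])

    # An AUC well above 0.5 on held-out data means the images carry information
    # about the sensitive attribute, even if no human reader can see it.
    print(f"held-out AUC: {auc:.2f}")
    ```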

    Opening up industry opportunities 

    Every stakeholder stands to benefit when pharmaceutical firms and other health-care corporations better understand societal needs and can target their treatments appropriately, Celi says. 

    “We need to bring to the table the vendors of electronic health records and the medical device manufacturers, as well as the pharmaceutical companies,” he explains. “They need to be more aware of the disparities in the way that they perform their research. They need to have more investigators representing underrepresented groups of people, to provide that lens to come up with better designs of health products.” 

    Corporations could benefit by sharing results from their clinical trials, and could immediately see these potential benefits by participating in datathons, Celi says. “They could really witness the magic that happens when that data is curated and analyzed by students and clinicians with different backgrounds from different countries. So we’re calling out our partners in the pharmaceutical industry to organize these events with us!”

  • 3 Questions: Leo Anthony Celi on ChatGPT and medicine

    Launched in November 2022, ChatGPT is a chatbot that can not only engage in human-like conversation, but also provide accurate answers to questions in a wide range of knowledge domains. The chatbot, created by the firm OpenAI, is based on a family of “large language models” — algorithms that can recognize, predict, and generate text based on patterns they identify in datasets containing hundreds of millions of words.

    In a study appearing in PLOS Digital Health this week, researchers report that ChatGPT performed at or near the passing threshold of the U.S. Medical Licensing Exam (USMLE) — a comprehensive, three-part exam that doctors must pass before practicing medicine in the United States. In an editorial accompanying the paper, Leo Anthony Celi, a principal research scientist at MIT’s Institute for Medical Engineering and Science, a practicing physician at Beth Israel Deaconess Medical Center, and an associate professor at Harvard Medical School, and his co-authors argue that ChatGPT’s success on this exam should be a wake-up call for the medical community.

    Q: What do you think the success of ChatGPT on the USMLE reveals about the nature of the medical education and evaluation of students? 

    A: The framing of medical knowledge as something that can be encapsulated into multiple choice questions creates a cognitive framing of false certainty. Medical knowledge is often taught as fixed model representations of health and disease. Treatment effects are presented as stable over time despite constantly changing practice patterns. Mechanistic models are passed on from teachers to students with little emphasis on how robustly those models were derived, the uncertainties that persist around them, and how they must be recalibrated to reflect advances worthy of incorporation into practice. 

    ChatGPT passed an examination that rewards memorizing the components of a system rather than analyzing how it works, how it fails, how it was created, and how it is maintained. Its success demonstrates some of the shortcomings in how we train and evaluate medical students. Critical thinking requires appreciation that ground truths in medicine continually shift, and, more importantly, an understanding of how and why they shift.

    Q: What steps do you think the medical community should take to modify how students are taught and evaluated?  

    A: Learning is about leveraging the current body of knowledge, understanding its gaps, and seeking to fill those gaps. It requires being comfortable with and being able to probe the uncertainties. We fail as teachers by not teaching students how to understand the gaps in the current body of knowledge. We fail them when we preach certainty over curiosity, and hubris over humility.  

    Medical education also requires being aware of the biases in the way medical knowledge is created and validated. These biases are best addressed by optimizing the cognitive diversity within the community. More than ever, there is a need to inspire cross-disciplinary collaborative learning and problem-solving. Medical students need data science skills that will allow every clinician to contribute to, continually assess, and recalibrate medical knowledge.

    Q: Do you see any upside to ChatGPT’s success in this exam? Are there beneficial ways that ChatGPT and other forms of AI can contribute to the practice of medicine? 

    A: There is no question that large language models (LLMs) such as ChatGPT are very powerful tools in sifting through content beyond the capabilities of experts, or even groups of experts, and extracting knowledge. However, we will need to address the problem of data bias before we can leverage LLMs and other artificial intelligence technologies. The body of knowledge that LLMs train on, both medical and beyond, is dominated by content and research from well-funded institutions in high-income countries. It is not representative of most of the world.

    We have also learned that even mechanistic models of health and disease may be biased. These inputs are fed to encoders and transformers that are oblivious to these biases. Ground truths in medicine are continuously shifting, and currently, there is no way to determine when ground truths have drifted. LLMs do not evaluate the quality and the bias of the content they are being trained on. Neither do they provide the level of uncertainty around their output. But the perfect should not be the enemy of the good. There is tremendous opportunity to improve the way health care providers currently make clinical decisions, which we know are tainted with unconscious bias. I have no doubt AI will deliver its promise once we have optimized the data input.

  • 3 Questions: Why cybersecurity is on the agenda for corporate boards of directors

    Organizations of every size and in every industry are vulnerable to cybersecurity risks — a dynamic landscape of threats and vulnerabilities and a corresponding overload of possible mitigating controls. MIT Senior Lecturer Keri Pearlson, who is also the executive director of the research consortium Cybersecurity at MIT Sloan (CAMS) and an instructor for the new MIT Sloan Executive Education course Cybersecurity Governance for the Board of Directors, knows how business can get ahead of this risk. Here, she describes the current threat and explores how boards can mitigate their risk against cybercrime.

    Q: What does the current state of cyberattacks mean for businesses in 2023?

    A: Last year we were discussing how the pandemic heightened fear, uncertainty, doubt, and chaos, opening new doors for malicious actors to do their cyber mischief in our organizations and our families. We saw an increase in ransomware and other cyberattacks, and we saw an increase in concern from operating executives and boards of directors wondering how to keep their organizations secure. Since then, we have seen a continued escalation of cyber incidents, many of which no longer make the headlines unless they are wildly unique, damaging, or different from previous incidents. For every new technology that cybersecurity professionals invent, it’s only a matter of time until malicious actors find a way around it. New leadership approaches are needed for 2023 as we move into the next phase of securing our organizations.

    In great part, this means ensuring deep cybersecurity competencies on our boards of directors. Cyber risk is so significant that a responsible board can no longer ignore it or just delegate it to risk management experts. In fact, an organization’s board of directors holds a uniquely vital role in safeguarding data and systems for the future because of their fiduciary responsibility to shareholders and their responsibility to oversee and mitigate business risk.

    As these cyber threats increase, and as companies bolster their cybersecurity budgets accordingly, the regulatory community is also advancing new requirements of companies. In March of this year, the SEC issued a proposed rule titled Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure. In it, the SEC describes its intention to require public companies to disclose whether their boards have members with cybersecurity expertise. Specifically, registrants will be required to disclose whether the entire board, a specific board member, or a board committee is responsible for the oversight of cyber risks; the processes by which the board is informed about cyber risks, and the frequency of its discussions on this topic; and whether and how the board or specified board committee considers cyber risks as part of its business strategy, risk management, and financial oversight.

    Q: How can boards help their organizations mitigate cyber risk?

    A: According to the studies I’ve conducted with my CAMS colleagues, most organizations focus on cyber protection rather than cyber resilience, and we believe that is a mistake. A company that invests only in protection is not managing the risk associated with getting up and running again in the event of a cyber incident, and they are not going to be able to respond appropriately to new regulations, either. Resiliency means having a practical plan for recovery and business continuation.

    Certainly, protection is part of the resilience equation, but if the pandemic taught us anything, it taught us that resilience is the ability to weather an attack and recover quickly with minimal impact to our operations. The ultimate goal of a cyber-resilient organization would be zero disruption from a cyber breach — no impact on operations, finances, technologies, supply chain or reputation. Board members should ask, What would it take for this to be the case? And they should ensure that executives and managers have made proper and appropriate preparations to respond and recover.

    Being a knowledgeable board member does not mean becoming a cybersecurity expert, but it does mean understanding basic concepts, risks, frameworks, and approaches. And it means having the ability to assess whether management appropriately comprehends related threats, has an appropriate cyber strategy, and can measure its effectiveness. Board members today require focused training on these critical areas to carry out their mission. Unfortunately, many enterprises fail to leverage their boards of directors in this capacity or prepare board members to actively contribute to strategy, protocols, and emergency action plans.

    Alongside my CAMS colleagues Stuart Madnick and Kevin Powers, I’m teaching a new MIT Sloan Executive Education course, Cybersecurity Governance for the Board of Directors, designed to help organizations and their boards get up to speed. Participants will explore the board’s role in cybersecurity, as well as breach planning, response, and mitigation. And we will discuss the impact and requirements of the many new regulations coming forward, not just from the SEC but also from the White House, Congress, and most states and countries around the world, which are imposing more high-level responsibilities on companies.

    Q: What are some examples of how companies, and specifically boards of directors, have successfully upped their cybersecurity game?

    A: To ensure boardroom skills reflect the patterns of the marketplace, companies such as FedEx, Hasbro, PNC, and UPS have transformed their approach to governing cyber risk, starting with board cyber expertise. In companies like these, building resiliency started with a clear plan — from the boardroom — built on business and economic analysis.

    In one company we looked at, the CEO realized his board was not well versed in the business context or financial exposure risk from a cyber attack, so he hired a third-party consulting firm to conduct a cybersecurity maturity assessment. The company CISO presented the results of the report to the enterprise risk management subcommittee, creating a productive dialogue around the business and financial impact of different investments in cybersecurity.  

    Another organization focused their board on the alignment of their cybersecurity program and operational risk. The CISO, chief risk officer, and board collaborated to understand the exposure of the organization from a risk perspective, resulting in optimizing their cyber insurance policy to mitigate the newly understood risk.

    One important takeaway from these examples is the importance of using the language of risk, resiliency, and reputation to bridge the gaps between technical cybersecurity needs and the oversight responsibilities executed by boards. Boards need to understand the financial exposure resulting from cyber risk, not just the technical components typically found in cyber presentations.

    Cyber risk is not going away. It’s escalating and becoming more sophisticated every day. Getting your board “on board” is key to meeting new guidelines, providing sufficient oversight to cybersecurity plans, and making organizations more resilient.

  • MIT welcomes eight MLK Visiting Professors and Scholars for 2022-23

    From space traffic to virus evolution, community journalism to hip-hop, this year’s cohort in the Martin Luther King Jr. (MLK) Visiting Professors and Scholars Program will power an unprecedented range of intellectual pursuits during their time on the MIT campus. 

    “MIT is so fortunate to have this group of remarkable individuals join us,” says Institute Community and Equity Officer John Dozier. “They bring a range and depth of knowledge to share with our students and faculty, and we look forward to working with them to build a stronger sense of community across the Institute.”

    Since its inception in 1990, the MLK Scholars Program has hosted more than 135 visiting professors, practitioners, and intellectuals who enhance and enrich the MIT community through their engagement with students and faculty. The program, which honors the life and legacy of MLK by increasing the presence and recognizing the contributions of underrepresented scholars, is supported by the Office of the Provost with oversight from the Institute Community and Equity Office. 

    In spring 2022, MIT President Rafael Reif committed MIT to adding two new positions in the MLK Visiting Scholars Program, including an expert in Native American studies. Those additional positions will be filled in the coming year.

    The 2022-23 MLK Scholars:

    Daniel Auguste is an assistant professor in the Department of Sociology at Florida Atlantic University and is hosted by Roberto Fernandez in the MIT Sloan School of Management. Auguste’s research interests include social inequalities in entrepreneurship development. During his visit, Auguste will study the impact of education debt burden and wealth inequality on business ownership and success, and how these consequences differ by race and ethnicity.

    Tawanna Dillahunt is an associate professor in the School of Information at the University of Michigan, where she also holds an appointment with the electrical engineering and computer science department. Catherine D’Ignazio in the Department of Urban Studies and Planning and Fotini Christia in the Institute for Data, Systems, and Society are her faculty hosts. Dillahunt’s scholarship focuses on equitable and inclusive computing. She identifies technological opportunities and implements tools to address and alleviate employment challenges faced by marginalized people. Dillahunt’s visiting appointment begins in September 2023.

    Javit Drake ’94 is a principal scientist in modeling and simulation and measurement sciences at Procter & Gamble. His faculty host is Fikile Brushett in the Department of Chemical Engineering. An industry researcher with expertise in electrochemical energy, Drake is a Course 10 (chemical engineering) alumnus, repeat lecturer, and research affiliate in the department. During his visit, he will continue to work with the Brushett Research Group to deepen his research and understanding of battery technologies while he innovates from those discoveries.

    Eunice Ferreira is an associate professor in the Department of Theater at Skidmore College and is hosted by Claire Conceison in Music and Theater Arts. This fall, Ferreira will teach “Black Theater Matters,” a course where students will explore performance and the cultural production of Black intellectuals and artists on Broadway and in local communities. Her upcoming book projects include “Applied Theatre and Racial Justice: Radical Imaginings for Just Communities” (forthcoming from Routledge) and “Crioulo Performance: Remapping Creole and Mixed Race Theatre” (forthcoming from Vanderbilt University Press). 

    Wasalu Jaco, widely known as Lupe Fiasco, is a rapper, record producer, and entrepreneur. He will be co-hosted by Nick Montfort of Comparative Media Studies/Writing and Mary Fuller of Literature. Jaco’s interests lie in the nexus of rap, computing, and activism. As a former visiting artist in MIT’s Center for Art, Science and Technology (CAST), he will leverage existing collaborations and participate in digital media and art research projects that use computing to explore novel questions related to hip-hop and rap. In addition to his engagement in cross-departmental projects, Jaco will teach a spring course on rap in the media and social contexts.

    Moriba Jah is an associate professor in the Aerospace Engineering and Engineering Mechanics Department at the University of Texas at Austin. He is hosted by Danielle Wood in Media Arts and Sciences and the Department of Aeronautics and Astronautics, and Richard Linares in the Department of Aeronautics and Astronautics. Jah’s research interests include space sustainability and space traffic management; as a visiting scholar, he will develop and strengthen a joint MIT/UT-Austin research program to increase resources and visibility of space sustainability. Jah will also help host the AeroAstro Rising Stars symposium, which highlights graduate students, postdocs, and early-career faculty from backgrounds underrepresented in aerospace engineering.

    Louis Massiah SM ’82 is a documentary filmmaker and the founder and director of Scribe Video Center, a nonprofit community media organization that uses media as a tool for social change. His work focuses on empowering Black, Indigenous, and People of Color (BIPOC) filmmakers to tell the stories of and by BIPOC communities. Massiah is hosted by Vivek Bald in Comparative Media Studies/Writing. Massiah’s first project will be the launch of a National Community Media Journalism Consortium, a platform to share local news on a broader scale across communities.

    Brian Nord, a scientist at Fermi National Accelerator Laboratory, will join the Laboratory for Nuclear Science, hosted by Jesse Thaler in the Department of Physics. Nord’s research interests include the connection between ethics, justice, and scientific discovery. His efforts will be aimed at introducing new insights into how we model physical systems, design scientific experiments, and approach the ethics of artificial intelligence. As a lead organizer of the Strike for Black Lives in 2020, Nord will engage with justice-oriented members of the MIT physics community to strategize actions for advocacy and activism.

    Brandon Ogbunu, an assistant professor in the Department of Ecology and Evolutionary Biology at Yale University, will be hosted by Matthew Shoulders in the Department of Chemistry. Ogbunu’s research focus is on implementing chemistry and materials science perspectives into his work on virus evolution. In addition to serving as a guest lecturer in graduate courses, he will be collaborating with the Office of Engineering Outreach Programs on their K-12 outreach and recruitment efforts.

    For more information about these scholars and the program, visit mlkscholars.mit.edu.