More stories

  • Success at the intersection of technology and finance

    Citadel founder and CEO Ken Griffin had some free advice for an at-capacity crowd of MIT students at the Wong Auditorium during a campus visit in April. “If you find yourself in a career where you’re not learning,” he told them, “it’s time to change jobs. In this world, if you’re not learning, you can find yourself irrelevant in the blink of an eye.”

    During a conversation with Bryan Landman ’11, senior quantitative research lead for Citadel’s Global Quantitative Strategies business, Griffin reflected on his career and offered predictions for the impact of technology on the finance sector. Citadel, which he launched in 1990, is now one of the world’s leading investment firms. Griffin also serves as non-executive chair of Citadel Securities, a market maker that is known as a key player in the modernization of markets and market structures.

    “We are excited to hear Ken share his perspective on how technology continues to shape the future of finance, including the emerging trends of quantum computing and AI,” said David Schmittlein, the John C Head III Dean and professor of marketing at MIT Sloan School of Management, who kicked off the program. The presentation was jointly sponsored by MIT Sloan, the MIT Schwarzman College of Computing, the School of Engineering, MIT Career Advising and Professional Development, and Citadel Securities Campus Recruiting.

    The future, in Griffin’s view, “is all about the application of engineering, software, and mathematics to markets. Successful entrepreneurs are those who have the tools to solve the unsolved problems of that moment in time.” He launched Citadel only one year after graduating from college. “History so far has been kind to the vision I had back in the late ’80s,” he said.

    Griffin realized very early in his career “that you could use a personal computer and quantitative finance to price traded securities in a way that was much more advanced than you saw on your typical equity trading desk on Wall Street.” Both businesses, he told the audience, are ultimately driven by research. “That’s where we formulate the ideas, and trading is how we monetize that research.”

    It’s also why Citadel and Citadel Securities employ several hundred software engineers. “We have a huge investment today in using modern technology to power our decision-making and trading,” said Griffin.

    One example of Citadel’s application of technology and science is the firm’s hiring of a meteorological team to expand the weather analytics expertise within its commodities business. While power supply is relatively easy to map and analyze, predicting demand is much more difficult. Citadel’s weather team feeds forecast data obtained from supercomputers to its traders. “Wind and solar are huge commodities,” Griffin explained, noting that the days with highest demand in the power market are cloudy, cold days with no wind. When you can forecast those days better than the market as a whole, that’s where you can identify opportunities, he added.

    Pros and cons of machine learning

    Turning to the impact of new technology on the sector, Landman noted that both Citadel and Citadel Securities are already leveraging machine learning. “In the market-making business,” Griffin said, “you see a real application for machine learning because you have so much data to parametrize the models with. But when you get into longer time horizon problems, machine learning starts to break down.”

    Griffin noted that machine learning is most helpful for investments with short time horizons, such as those in Citadel’s quantitative strategies business. “In our fundamental equities business,” he said, “machine learning is not as helpful as you would want because the underlying systems are not stationary.”

    Griffin was emphatic that “there has been a moment in time where being a really good statistician or really understanding machine-learning models was sufficient to make money. That won’t be the case for much longer.” One of the guiding principles at Citadel, he and Landman agreed, was that machine learning and other methodologies should not be used blindly. Each analyst has to cite the underlying economic theory driving their argument on investment decisions. “If you understand the problem in a different way than people who are just using the statistical models,” he said, “you have a real chance for a competitive advantage.”

    ChatGPT and a seismic shift

    Asked if ChatGPT will change history, Griffin predicted that the rise of capabilities in large language models will transform a substantial number of white-collar jobs. “With OpenAI for most routine commercial legal documents, ChatGPT will do a better job writing a lease than a young lawyer. This is the first time we are seeing traditionally white-collar jobs at risk due to technology, and that’s a sea change.”

    Griffin urged MIT students to work with the smartest people they can find, as he did: “The magic of Citadel has been a testament to the idea that by surrounding yourself with bright, ambitious people, you can accomplish something special. I went to great lengths to hire the brightest people I could find and gave them responsibility and trust early in their careers.”

    Even more critical to success is the willingness to advocate for oneself, Griffin said, using Gerald Beeson, Citadel’s chief operating officer, as an example. Beeson, who started as an intern at the firm, “consistently sought more responsibility and had the foresight to train his own successors.” Urging students to take ownership of their careers, Griffin advised: “Make it clear that you’re willing to take on more responsibility, and think about what the roadblocks will be.”

    When microphones were handed to the audience, students inquired what changes Griffin would like to see in the hedge fund industry, how Citadel assesses the risk and reward of potential projects, and whether hedge funds should give back to the open source community. Asked about the role that Citadel — and its CEO — should play in “the wider society,” Griffin spoke enthusiastically of his belief in participatory democracy. “We need better people on both sides of the aisle,” he said. “I encourage all my colleagues to be politically active. It’s unfortunate when firms shut down political dialogue; we actually embrace it.”

    Closing on an optimistic note, Griffin urged the students in the audience to go after success, declaring, “The world is always awash in challenge and its shortcomings, but no matter what anybody says, you live at the greatest moment in the history of the planet. Make the most of it.”

  • Boosting passenger experience and increasing connectivity at the Hong Kong International Airport

    Recently, a cohort of 36 students from MIT and universities across Hong Kong came together for the MIT Entrepreneurship and Maker Skills Integrator (MEMSI), an intense two-week startup boot camp hosted at the MIT Hong Kong Innovation Node.

    “We’re very excited to be in Hong Kong,” said Professor Charles Sodini, LeBel Professor of Electrical Engineering and faculty director of the Node. “The dream always was to bring MIT and Hong Kong students together.”

    Students collaborated on six teams to meet real-world industry challenges through action learning, defining a problem, designing a solution, and crafting a business plan. The experience culminated in the MEMSI Showcase, where each team presented its process and unique solution to a panel of judges. “The MEMSI program is a great demonstration of important international educational goals for MIT,” says Professor Richard Lester, associate provost for international activities and chair of the Node Steering Committee at MIT. “It creates opportunities for our students to solve problems in a particular and distinctive cultural context, and to learn how innovations can cross international boundaries.” 

    Meeting an urgent challenge in the travel and tourism industry

    The Hong Kong Airport Authority (AAHK) served as the program’s industry partner for the third consecutive year, challenging students to conceive innovative ideas that make passenger travel more personalized from end to end while increasing connectivity. As the travel industry works to restore profitability and welcomes crowds back amid ongoing delays and labor shortages, the need for a more passenger-centric travel ecosystem is urgent.

    The airport is the world’s third-busiest for international passenger traffic and its busiest for cargo. Students took an insider’s tour of the Hong Kong International Airport to gain an on-the-ground orientation. They observed firsthand the complex logistics, possibilities, and constraints of operating with a team of 78,000 employees who serve 71.5 million passengers with unique needs and itineraries.

    Throughout the program, the cohort was coached and supported by MEMSI alumni, travel industry mentors, and MIT faculty such as Richard de Neufville, professor of engineering systems.

    The mood inside the open-plan MIT Hong Kong Innovation Node was one of nonstop energy and excitement for the entire program. Each of the six teams was composed of students from MIT and from Hong Kong universities. They learned to work together under time pressure, develop solutions, receive feedback from industry mentors, and iterate around the clock.

    “MEMSI was an enriching and amazing opportunity to learn about entrepreneurship while collaborating with a diverse team to solve a complex problem,” says Maria Li, a junior majoring in computer science, economics, and data science at MIT. “It was incredible to see the ideas we initially came up with as a team turn into a single, thought-out solution by the end.”

    Unsurprisingly, given MIT’s focus on piloting the latest technology and the tech-savvy culture of Hong Kong as a global hub, many team projects focused on virtual reality, apps, and wearable technology designed to make passengers’ journeys more individualized, efficient, or enjoyable.

    After observing geospatial patterns charting passengers’ movement through an airport, one team realized that many people on long trips aim to meet fitness goals by consciously getting their daily steps while power walking the expansive terminals. The team’s prototype, FitAir, is a smart virtual coach, integrated with a biometric token, that plans walking routes within the airport to promote passenger health and wellness.

    Another team noted a common frustration among frequent travelers who manage multiple mileage rewards program profiles, passwords, and status reports. They proposed AirPoint, a digital wallet that consolidates different rewards programs and presents passengers with all their airport redemption opportunities in one place.

    “Today, there is no loser,” said Vivian Cheung, chief operating officer of AAHK, who served as one of the judges. “Everyone is a winner. I am a winner, too. I have learned a lot from the showcase. Some of the ideas, I believe, can really become a business.”

    Cheung noted that in just 12 days, all teams observed and solved her organization’s pain points and successfully designed solutions to address them.

    More than a competition

    Although many of the models pitched are inventive enough to potentially shape the future of travel, the main focus of MEMSI isn’t to act as yet another startup challenge and incubator.

    “What we’re really focusing on is giving students the ability to learn entrepreneurial thinking,” explains Marina Chan, senior director and head of education at the Node. “It’s the dynamic experience in a highly connected environment that makes being in Hong Kong truly unique. When students can adapt and apply theory to an international context, it builds deeper cultural competency.”

    From an aerial view, the boot camp produced many entrepreneurs in the making, lasting friendships, and newfound respect for other cultural backgrounds and operating environments.

    “I learned the overarching process of how to make a startup pitch, all the way from idea generation, market research, and making business models, to the pitch itself and the presentation,” says Arun Wongprommoon, a senior double majoring in computer science and engineering and linguistics.  “It was all a black box to me before I came into the program.”

    He said he gained tremendous respect for the startup world and the pure hard work and collaboration required to get ahead.

    Spearheaded by the Node, MEMSI is a collaboration among the MIT Innovation Initiative, the Martin Trust Center for Entrepreneurship, the MIT International Science and Technology Initiatives, and Project Manus. Learn more about applying to MEMSI.

  • Report: CHIPS Act just the first step in addressing threats to US leadership in advanced computing

    When Liu He, a Chinese economist, politician, and “chip czar,” was tapped to lead the charge in a chipmaking arms race with the United States, his message was stark: “For our country, technology is not just for growth… it is a matter of survival.”

    Once upon a time, the United States’ early technological prowess positioned the nation to outpace foreign rivals and cultivate a competitive advantage for domestic businesses. Yet, 30 years later, America’s lead in advanced computing is continuing to wane. What happened?

    A new report from an MIT researcher and two colleagues sheds light on the decline in U.S. leadership. The researchers examined the slide along four high-level measures: overall capabilities, supercomputers, applied algorithms, and semiconductor manufacturing. Their analysis found not only that China has closed the computing gap with the U.S., but that nearly 80 percent of American leaders in the field believe their Chinese competitors are improving capabilities faster, which, the team says, suggests a “broad threat to U.S. competitiveness.”

    To delve deeply into the fray, the researchers conducted the Advanced Computing Users Survey, sampling 120 top-tier organizations, including universities, national labs, federal agencies, and industry. The team estimates that this group comprises between one-third and one-half of all the most significant computing users in the United States.

    “Advanced computing is crucial to scientific improvement, economic growth and the competitiveness of U.S. companies,” says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), who helped lead the study.

    Thompson, who is also a principal investigator at MIT’s Initiative on the Digital Economy, wrote the paper with Chad Evans, executive vice president and secretary and treasurer to the board at the Council on Competitiveness, and Daniel Armbrust, who is the co-founder, initial CEO, and member of the board of directors at Silicon Catalyst and former president of SEMATECH, the semiconductor consortium that developed industry roadmaps.

    The semiconductor, supercomputer, and algorithm bonanza

    Supercomputers — the room-sized, “giant calculators” of the hardware world — are an industry no longer dominated by the United States. Through 2015, about half of the most powerful computers were sitting firmly in the U.S., while China was growing slowly from a very low base. But in the past six years, China has swiftly caught up, reaching near parity with America.

    This disappearing lead matters. Eighty-four percent of U.S. survey respondents said they’re computationally constrained in running essential programs. “This result was telling, given who our respondents are: the vanguard of American research enterprises and academic institutions with privileged access to advanced national supercomputing resources,” says Thompson. 

    With regard to advanced algorithms, the U.S. has historically led the charge, with two-thirds of all significant improvements coming from U.S.-born inventors. But in recent decades, U.S. dominance in algorithms has relied on bringing in foreign talent to work in the U.S., which the researchers say is now in jeopardy. China has outpaced the U.S. and many other countries in churning out PhDs in STEM fields since 2007, with one report projecting that by 2025 China will be home to nearly twice as many STEM PhDs as the U.S. China’s rise in algorithms can also be seen in the Gordon Bell Prize, an award for outstanding work in harnessing the power of supercomputers in varied applications. U.S. winners historically dominated the prize, but China has equaled or surpassed Americans’ performance in the past five years.

    While the researchers note the CHIPS and Science Act of 2022 is a critical step in re-establishing the foundation of success for advanced computing, they propose recommendations to the U.S. Office of Science and Technology Policy. 

    First, they suggest democratizing access to U.S. supercomputing by building more mid-tier systems that push boundaries for many users, as well as building tools so that users scaling up computations need less up-front resource investment. They also recommend increasing the pool of innovators by funding the training of many more electrical engineers and computer scientists, supported by longer-term U.S. residency incentives and scholarships. Finally, in addition to this new framework, the researchers urge taking advantage of what already exists, by providing the private sector access to experimentation with high-performance computing through supercomputing sites in academia and national labs.

    All that and a bag of chips

    Computing improvements depend on continuous advances in transistor density and performance, but creating robust new chips necessitates a harmonious blend of design and manufacturing.

    China has not historically been known for designing noteworthy chips; in fact, over the past five decades, the U.S. designed most of them. But this changed in the past six years, when China created the HiSilicon Kirin 9000, propelling itself to the international frontier. This success was obtained mainly through partnerships with leading global chip designers that began in the 2000s. China now has 14 companies among the world’s top 50 fabless designers; a decade ago, there was only one.

    Competitive semiconductor manufacturing has been more mixed: U.S.-led policies and internal execution issues have slowed China’s rise, but as of July 2022, the Semiconductor Manufacturing International Corporation (SMIC) has shown evidence of 7-nanometer logic, which was not expected until much later. With extreme ultraviolet export restrictions in place, however, further progress below 7 nm will require expensive domestic technology development. Currently, China is at parity or better in only two of 12 segments of the semiconductor supply chain. Still, with government policy and investments, the team expects a whopping increase to seven segments within 10 years. So, for the moment, the U.S. retains leadership in hardware manufacturing, but with fewer dimensions of advantage.

    The authors recommend that the White House Office of Science and Technology Policy work with key national agencies, such as the U.S. Department of Defense, U.S. Department of Energy, and the National Science Foundation, to define initiatives to build the hardware and software systems needed for important computing paradigms and workloads critical for economic and security goals. “It is crucial that American enterprises can get the benefit of faster computers,” says Thompson. “With Moore’s Law slowing down, the best way to do this is to create a portfolio of specialized chips (or “accelerators”) that are customized to our needs.”

    The scientists further believe that to lead the next generation of computing, four areas must be addressed. First, by issuing grand challenges to the CHIPS Act National Semiconductor Technology Center, researchers and startups would be motivated to invest in research and development and to seek startup capital for new technologies in areas such as spintronics, neuromorphics, optical and quantum computing, and optical interconnect fabrics. Second, by supporting allies in passing similar acts, overall investment in these technologies would increase, and supply chains would become more aligned and secure. Third, establishing test beds for researchers to test algorithms on new computing architectures and hardware would provide an essential platform for innovation and discovery. Finally, planning for post-exascale systems that achieve higher levels of performance through next-generation advances would ensure that current commercial technologies don’t limit future computing systems.

    “The advanced computing landscape is in rapid flux — technologically, economically, and politically, with both new opportunities for innovation and rising global rivalries,” says Daniel Reed, Presidential Professor and professor of computer science and electrical and computer engineering at the University of Utah. “The transformational insights from both deep learning and computational modeling depend on both continued semiconductor advances and their instantiation in leading edge, large-scale computing systems — hyperscale clouds and high-performance computing systems. Although the U.S. has historically led the world in both advanced semiconductors and high-performance computing, other nations have recognized that these capabilities are integral to 21st century economic competitiveness and national security, and they are investing heavily.”

    The research was funded, in part, through Thompson’s grant from Good Ventures, which supports his FutureTech Research Group. The paper is being published by the Georgetown Public Policy Review.

  • Putting clear bounds on uncertainty

    In science and technology, there has been a long and steady drive toward improving the accuracy of measurements of all kinds, along with parallel efforts to enhance the resolution of images. An accompanying goal is to reduce the uncertainty in the estimates that can be made, and the inferences drawn, from the data (visual or otherwise) that have been collected. Yet uncertainty can never be wholly eliminated. And since we have to live with it, at least to some extent, there is much to be gained by quantifying the uncertainty as precisely as possible.

    Expressed in other terms, we’d like to know just how uncertain our uncertainty is.

    That issue was taken up in a new study, led by Swami Sankaranarayanan, a postdoc at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and his co-authors — Anastasios Angelopoulos and Stephen Bates of the University of California at Berkeley; Yaniv Romano of Technion, the Israel Institute of Technology; and Phillip Isola, an associate professor of electrical engineering and computer science at MIT. These researchers succeeded not only in obtaining accurate measures of uncertainty but also in finding a way to display uncertainty in a manner the average person could grasp.

    Their paper, which was presented in December at the Neural Information Processing Systems Conference in New Orleans, relates to computer vision — a field of artificial intelligence that involves training computers to glean information from digital images. The focus of this research is on images that are partially smudged or corrupted (due to missing pixels), as well as on methods — computer algorithms, in particular — that are designed to uncover the part of the signal that is marred or otherwise concealed. An algorithm of this sort, Sankaranarayanan explains, “takes the blurred image as the input and gives you a clean image as the output” — a process that typically occurs in a couple of steps.

    First, there is an encoder, a kind of neural network specifically trained by the researchers for the task of de-blurring fuzzy images. The encoder takes a distorted image and, from that, creates an abstract (or “latent”) representation of a clean image in a form — consisting of a list of numbers — that is intelligible to a computer but would not make sense to most humans. The next step is a decoder, of which there are a couple of types, that are again usually neural networks. Sankaranarayanan and his colleagues worked with a kind of decoder called a “generative” model. In particular, they used an off-the-shelf version called StyleGAN, which takes the numbers from the encoded representation (of a cat, for instance) as its input and then constructs a complete, cleaned-up image (of that particular cat). So the entire process, including the encoding and decoding stages, yields a crisp picture from an originally muddied rendering.
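
    As a rough illustration of that two-step pipeline, the sketch below wires a toy encoder to a toy generative decoder. It is a minimal stand-in: the layer shapes, `LATENT_DIM`, and both networks are illustrative placeholders, not the authors’ trained encoder or the actual StyleGAN model.

    ```python
    import torch
    import torch.nn as nn

    LATENT_DIM = 512  # illustrative size of the abstract "latent" representation

    class ToyEncoder(nn.Module):
        """Stand-in for the trained encoder: corrupted image -> latent vector."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64x64 -> 32x32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),                 # -> 64 features
                nn.Linear(64, LATENT_DIM),                             # -> latent vector
            )

        def forward(self, x):
            return self.net(x)

    class ToyDecoder(nn.Module):
        """Stand-in for a generative decoder such as StyleGAN:
        latent vector -> complete, cleaned-up image."""
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(LATENT_DIM, 64 * 16 * 16)
            self.net = nn.Sequential(
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),    # 16 -> 32
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
            )

        def forward(self, z):
            h = self.fc(z).view(-1, 64, 16, 16)
            return self.net(h)

    encoder, decoder = ToyEncoder(), ToyDecoder()
    blurred = torch.rand(1, 3, 64, 64)    # placeholder corrupted input image
    restored = decoder(encoder(blurred))  # 1 x 3 x 64 x 64 "cleaned-up" output
    ```

    In the real system described above, the decoder is an off-the-shelf StyleGAN, and the encoder is trained specifically to map corrupted images into the latent space that the generative model expects.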

    But how much faith can someone place in the accuracy of the resultant image? And, as addressed in the December 2022 paper, what is the best way to represent the uncertainty in that image? The standard approach is to create a “saliency map,” which ascribes a probability value — somewhere between 0 and 1 — to indicate the confidence the model has in the correctness of every pixel, taken one at a time. This strategy has a drawback, according to Sankaranarayanan, “because the prediction is performed independently for each pixel. But meaningful objects occur within groups of pixels, not within an individual pixel,” he adds, which is why he and his colleagues are proposing an entirely different way of assessing uncertainty.
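
    In code, such a saliency map is simply an image-shaped array of independent confidences. A hypothetical stand-in (the values here are random, not a real model’s output):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Invented stand-in for a model's per-pixel confidence in a 64x64
    # reconstruction: one value in [0, 1] per pixel, each predicted
    # independently of its neighbors.
    saliency_map = rng.uniform(size=(64, 64))

    # Nothing in this array ties together the pixels that make up a
    # meaningful object, which is the drawback described above.
    print(saliency_map.shape, saliency_map.min(), saliency_map.max())
    ```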

    Their approach is centered around the “semantic attributes” of an image — groups of pixels that, when taken together, have meaning, making up a human face, for example, or a dog, or some other recognizable thing. The objective, Sankaranarayanan maintains, “is to estimate uncertainty in a way that relates to the groupings of pixels that humans can readily interpret.”

    Whereas the standard method might yield a single image, constituting the “best guess” as to what the true picture should be, the uncertainty in that representation is normally hard to discern. The new paper argues that for use in the real world, uncertainty should be presented in a way that holds meaning for people who are not experts in machine learning. Rather than producing a single image, the authors have devised a procedure for generating a range of images — each of which might be correct. Moreover, they can set precise bounds on the range, or interval, and provide a probabilistic guarantee that the true depiction lies somewhere within that range. A narrower range can be provided if the user is comfortable with, say, 90 percent certitude, and a narrower range still if more risk is acceptable.
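
    The article does not spell out the authors’ calibration procedure, but the flavor of such a probabilistic guarantee can be sketched with generic split-conformal calibration on a single semantic attribute. Everything here, from the residuals to the function name, is invented for illustration:

    ```python
    import numpy as np

    def calibrate_threshold(scores, alpha=0.1):
        """Split-conformal calibration: given nonconformity scores on
        held-out data (e.g., |true attribute value - model's best guess|),
        return a threshold q so that future intervals [guess - q, guess + q]
        contain the truth with probability at least 1 - alpha."""
        n = len(scores)
        level = np.ceil((n + 1) * (1 - alpha)) / n  # finite-sample correction
        return np.quantile(scores, min(level, 1.0), method="higher")

    rng = np.random.default_rng(0)
    # Invented calibration residuals for one semantic attribute.
    residuals = np.abs(rng.normal(0.0, 1.0, size=500))

    q = calibrate_threshold(residuals, alpha=0.1)  # 90 percent certitude
    best_guess = 2.3                               # model's point estimate
    print(f"attribute lies in [{best_guess - q:.2f}, {best_guess + q:.2f}] "
          f"with >= 90% probability")
    ```

    Raising `alpha` (accepting more risk) shrinks the calibrated threshold and hence the interval, mirroring the trade-off described above.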

    The authors believe their paper puts forth the first algorithm, designed for a generative model, that can establish uncertainty intervals which relate to meaningful (semantically interpretable) features of an image and come with “a formal statistical guarantee.” While that is an important milestone, Sankaranarayanan considers it merely a step toward “the ultimate goal. So far, we have been able to do this for simple things, like restoring images of human faces or animals, but we want to extend this approach into more critical domains, such as medical imaging, where our ‘statistical guarantee’ could be especially important.”

    Suppose that the film, or radiograph, of a chest X-ray is blurred, he adds, “and you want to reconstruct the image. If you are given a range of images, you want to know that the true image is contained within that range, so you are not missing anything critical” — information that might reveal whether or not a patient has lung cancer or pneumonia. In fact, Sankaranarayanan and his colleagues have already begun working with a radiologist to see if their algorithm for predicting pneumonia could be useful in a clinical setting.

    Their work may also have relevance in the law enforcement field, he says. “The picture from a surveillance camera may be blurry, and you want to enhance that. Models for doing that already exist, but it is not easy to gauge the uncertainty. And you don’t want to make a mistake in a life-or-death situation.” The tools that he and his colleagues are developing could help identify a guilty person and help exonerate an innocent one as well.

    Much of what we do and many of the things happening in the world around us are shrouded in uncertainty, Sankaranarayanan notes. Therefore, gaining a firmer grasp of that uncertainty could help us in countless ways. For one thing, it can tell us more about exactly what it is we do not know.

    Angelopoulos was supported by the National Science Foundation. Bates was supported by the Foundations of Data Science Institute and the Simons Institute. Romano was supported by the Israel Science Foundation and by a Career Advancement Fellowship from Technion. Sankaranarayanan’s and Isola’s research for this project was sponsored by the U.S. Air Force Research Laboratory and the U.S. Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative Agreement Number FA8750-19-2-1000. MIT SuperCloud and the Lincoln Laboratory Supercomputing Center also provided computing resources that contributed to the results reported in this work.

  • Meet the 2022-23 Accenture Fellows

    Launched in October 2020, the MIT and Accenture Convergence Initiative for Industry and Technology underscores the ways in which industry and technology can collaborate to spur innovation. The five-year initiative aims to achieve its mission through research, education, and fellowships. To that end, Accenture has once again awarded five annual fellowships to MIT graduate students working on research in industry and technology convergence who are underrepresented, including by race, ethnicity, and gender.

    This year’s Accenture Fellows work across research areas including telemonitoring, human-computer interactions, operations research, AI-mediated socialization, and chemical transformations. Their research covers a wide array of projects, including designing low-power processing hardware for telehealth applications; applying machine learning to streamline and improve business operations; improving mental health care through artificial intelligence; and using machine learning to understand the environmental and health consequences of complex chemical reactions.

    As part of the application process, student nominations were invited from each unit within the School of Engineering, as well as from the Institute’s four other schools and the MIT Schwarzman College of Computing. Five exceptional students were selected as fellows for the initiative’s third year.

    Drew Buzzell is a doctoral candidate in electrical engineering and computer science whose research concerns telemonitoring, a fast-growing sphere of telehealth in which information is collected through internet-of-things (IoT) connected devices and transmitted to the cloud. Currently, the high volume of information involved in telemonitoring — and the time and energy costs of processing it — make data analysis difficult. Buzzell’s work is focused on edge computing, a new computing architecture that seeks to address these challenges by managing data closer to the source, in a distributed network of IoT devices. Buzzell earned his BS in physics and engineering science and his MS in engineering science from the Pennsylvania State University.

    Mengying (Cathy) Fang is a master’s student in the MIT School of Architecture and Planning. Her research focuses on augmented reality and virtual reality platforms. Fang is developing novel sensors and machine components that combine computation, materials science, and engineering. Moving forward, she will explore topics including soft robotics techniques that could be integrated with clothes and wearable devices and haptic feedback in order to develop interactions with digital objects. Fang earned a BS in mechanical engineering and human-computer interaction from Carnegie Mellon University.

    Xiaoyue Gong is a doctoral candidate in operations research at the MIT Sloan School of Management. Her research aims to harness the power of machine learning and data science to reduce inefficiencies in the operation of businesses, organizations, and society. With the support of an Accenture Fellowship, Gong seeks to solve embedded operational problems by designing reinforcement learning methods and other machine learning techniques. Gong earned a BS in honors mathematics and interactive media arts from New York University.

    Ruby Liu is a doctoral candidate in medical engineering and medical physics. Their research addresses the growing pandemic of loneliness among older adults, which leads to poor health outcomes and presents particularly high risks for historically marginalized people, including members of the LGBTQ+ community and people of color. Liu is designing a network of interconnected AI agents that foster connections between user and agent, offering mental health care while strengthening and facilitating human-human connections. Liu received a BS in biomedical engineering from Johns Hopkins University.

    Joules Provenzano is a doctoral candidate in chemical engineering. Their work integrates machine learning and liquid chromatography-high resolution mass spectrometry (LC-HRMS) to improve our understanding of complex chemical reactions in the environment. As an Accenture Fellow, Provenzano will build upon recent advances in machine learning and LC-HRMS, including novel algorithms for processing real, experimental HR-MS data and new approaches to extracting structure-transformation rules and kinetics. Their research could speed the pace of discovery in the chemical sciences and benefit industries including oil and gas, pharmaceuticals, and agriculture. Provenzano earned a BS in chemical engineering and international and global studies from the Rochester Institute of Technology.

  • 3 Questions: Why cybersecurity is on the agenda for corporate boards of directors

    Organizations of every size and in every industry are vulnerable to cybersecurity risks — a dynamic landscape of threats and vulnerabilities and a corresponding overload of possible mitigating controls. MIT Senior Lecturer Keri Pearlson, who is also the executive director of the research consortium Cybersecurity at MIT Sloan (CAMS) and an instructor for the new MIT Sloan Executive Education course Cybersecurity Governance for the Board of Directors, knows how business can get ahead of this risk. Here, she describes the current threat and explores how boards can mitigate their risk against cybercrime.

    Q: What does the current state of cyberattacks mean for businesses in 2023?

    A: Last year we were discussing how the pandemic heightened fear, uncertainty, doubt, and chaos, opening new doors for malicious actors to do their cyber mischief in our organizations and our families. We saw an increase in ransomware and other cyberattacks, and we saw an increase in concern from operating executives and boards of directors wondering how to keep their organizations secure. Since then, we have seen a continued escalation of cyber incidents, many of which no longer make the headlines unless they are wildly unique, damaging, or different from previous incidents. For every new technology that cybersecurity professionals invent, it’s only a matter of time until malicious actors find a way around it. New leadership approaches are needed for 2023 as we move into the next phase of securing our organizations.

    In great part, this means ensuring deep cybersecurity competencies on our boards of directors. Cyber risk is so significant that a responsible board can no longer ignore it or just delegate it to risk management experts. In fact, an organization’s board of directors holds a uniquely vital role in safeguarding data and systems for the future because of their fiduciary responsibility to shareholders and their responsibility to oversee and mitigate business risk.

    As these cyber threats increase, and as companies bolster their cybersecurity budgets accordingly, the regulatory community is also advancing new requirements of companies. In March of this year, the SEC issued a proposed rule titled Cybersecurity Risk Management, Strategy, Governance, and Incident Disclosure. In it, the SEC describes its intention to require public companies to disclose whether their boards have members with cybersecurity expertise. Specifically, registrants will be required to disclose whether the entire board, a specific board member, or a board committee is responsible for the oversight of cyber risks; the processes by which the board is informed about cyber risks, and the frequency of its discussions on this topic; and whether and how the board or specified board committee considers cyber risks as part of its business strategy, risk management, and financial oversight.

    Q: How can boards help their organizations mitigate cyber risk?

    A: According to the studies I’ve conducted with my CAMS colleagues, most organizations focus on cyber protection rather than cyber resilience, and we believe that is a mistake. A company that invests only in protection is not managing the risk associated with getting up and running again in the event of a cyber incident, and they are not going to be able to respond appropriately to new regulations, either. Resiliency means having a practical plan for recovery and business continuation.

    Certainly, protection is part of the resilience equation, but if the pandemic taught us anything, it taught us that resilience is the ability to weather an attack and recover quickly with minimal impact to our operations. The ultimate goal of a cyber-resilient organization would be zero disruption from a cyber breach — no impact on operations, finances, technologies, supply chain or reputation. Board members should ask, What would it take for this to be the case? And they should ensure that executives and managers have made proper and appropriate preparations to respond and recover.

    Being a knowledgeable board member does not mean becoming a cybersecurity expert, but it does mean understanding basic concepts, risks, frameworks, and approaches. And it means having the ability to assess whether management appropriately comprehends related threats, has an appropriate cyber strategy, and can measure its effectiveness. Board members today require focused training on these critical areas to carry out their mission. Unfortunately, many enterprises fail to leverage their boards of directors in this capacity or prepare board members to actively contribute to strategy, protocols, and emergency action plans.

    Alongside my CAMS colleagues Stuart Madnick and Kevin Powers, I’m teaching a new MIT Sloan Executive Education course, Cybersecurity Governance for the Board of Directors, designed to help organizations and their boards get up to speed. Participants will explore the board’s role in cybersecurity, as well as breach planning, response, and mitigation. And we will discuss the impact and requirements of the many new regulations coming forward, not just from the SEC but also from the White House, Congress, and most states and countries around the world, which are imposing more high-level responsibilities on companies.

    Q: What are some examples of how companies, and specifically boards of directors, have successfully upped their cybersecurity game?

    A: To ensure boardroom skills reflect the patterns of the marketplace, companies such as FedEx, Hasbro, PNC, and UPS have transformed their approach to governing cyber risk, starting with board cyber expertise. In companies like these, building resiliency started with a clear plan — from the boardroom — built on business and economic analysis.

    In one company we looked at, the CEO realized his board was not well versed in the business context or financial exposure risk from a cyber attack, so he hired a third-party consulting firm to conduct a cybersecurity maturity assessment. The company CISO presented the results of the report to the enterprise risk management subcommittee, creating a productive dialogue around the business and financial impact of different investments in cybersecurity.  

    Another organization focused its board on the alignment of its cybersecurity program and operational risk. The CISO, chief risk officer, and board collaborated to understand the organization’s exposure from a risk perspective, ultimately optimizing the company’s cyber insurance policy to mitigate the newly understood risk.

    One important takeaway from these examples is the importance of using the language of risk, resiliency, and reputation to bridge the gaps between technical cybersecurity needs and the oversight responsibilities executed by boards. Boards need to understand the financial exposure resulting from cyber risk, not just the technical components typically found in cyber presentations.

    Cyber risk is not going away. It’s escalating and becoming more sophisticated every day. Getting your board “on board” is key to meeting new guidelines, providing sufficient oversight to cybersecurity plans, and making organizations more resilient.

  • Celebrating open data

    The inaugural MIT Prize for Open Data, which included a $2,500 cash prize, was recently awarded to 10 individual and group research projects. Presented jointly by the School of Science and the MIT Libraries, the prize recognizes MIT-affiliated researchers who make their data openly accessible and reusable by others. The prize winners and 16 honorable mention recipients were honored at the Open Data @ MIT event held Oct. 28 at Hayden Library. 

    “By making data open, researchers create opportunities for novel uses of their data and for new insights to be gleaned,” says Chris Bourg, director of MIT Libraries. “Open data accelerates scholarly progress and discovery, advances equity in scholarly participation, and increases transparency, replicability, and trust in science.” 

    Recognizing shared values

    Spearheaded by Bourg and Rebecca Saxe, associate dean of the School of Science and John W. Jarve (1978) Professor of Brain and Cognitive Sciences, the MIT Prize for Open Data was launched to highlight the value of open data at MIT and to encourage the next generation of researchers. Nominations were solicited from across the Institute, with a focus on trainees: research technicians, undergraduate or graduate students, or postdocs.

    “By launching an MIT-wide prize and event, we aimed to create visibility for the scholars who create, use, and advocate for open data,” says Saxe. “Highlighting this research and creating opportunities for networking would also help open-data advocates across campus find each other.” 

    Recognizing researchers who share data was also one of the recommendations of the Ad Hoc Task Force on Open Access to MIT’s Research, which Bourg co-chaired with Hal Abelson, Class of 1922 Professor, Department of Electrical Engineering and Computer Science. An annual award was one of the strategies put forth by the task force to further the Institute’s mission to disseminate the fruits of its research and scholarship as widely as possible.

    Strong competition

    Winners and honorable mentions were chosen from more than 70 nominees, representing all five schools, the MIT Schwarzman College of Computing, and several research centers across MIT. A committee composed of faculty, staff, and a graduate student made the selections:

    Yunsie Chung, graduate student in the Department of Chemical Engineering, won for SolProp, the largest open-source dataset with temperature-dependent solubility values of organic compounds. 
    Matthew Groh, graduate student, MIT Media Lab, accepted on behalf of the team behind the Fitzpatrick 17k dataset, an open dataset consisting of nearly 17,000 images of skin disease alongside skin disease and skin tone annotations. 
    Tom Pollard, research scientist at the Institute for Medical Engineering and Science, accepted on behalf of the PhysioNet team. This data-sharing platform enables thousands of clinical and machine-learning research studies each year and allows researchers to share sensitive resources that could not be shared through typical data-sharing platforms. 
    Joseph Replogle, graduate student with the Whitehead Institute for Biomedical Research, was recognized for the Genome-wide Perturb-seq dataset, the largest publicly available, single-cell transcriptional dataset collected to date. 
    Pedro Reynolds-Cuéllar, graduate student with the MIT Media Lab/Art, Culture, and Technology, and Diana Duarte, co-founder at Diversa, won for Retos, an open-data platform for detailed documentation and sharing of local innovations from under-resourced settings. 
    Maanas Sharma, an undergraduate student, led States of Emergency, a nationwide project analyzing and grading the responses of prison systems to Covid-19 using data scraped from public databases and manually collected data. 
    Djuna von Maydell, graduate student in the Department of Brain and Cognitive Sciences, created the first publicly available dataset of single-cell gene expression from postmortem human brain tissue of patients who are carriers of APOE4, the major Alzheimer’s disease risk gene. 
    Raechel Walker, graduate researcher in the MIT Media Lab, and her collaborators created a Data Activism Curriculum for high school students through the Mayor’s Summer Youth Employment Program in Cambridge, Massachusetts. Students learned how to use data science to recognize, mitigate, and advocate for people who are disproportionately impacted by systemic inequality. 
    Suyeol Yun, graduate student in the Department of Political Science, was recognized for DeepWTO, a project creating open data for use in legal natural language processing research using cases from the World Trade Organization. 
    Jonathan Zheng, graduate student in the Department of Chemical Engineering, won for an open IUPAC dataset for acid dissociation constants, or “pKas,” physicochemical properties that govern how acidic a chemical is in a solution.
    A full list of winners and honorable mentions is available on the Open Data @ MIT website.

    A campus-wide celebration

    Awards were presented at a celebratory event held in the Nexus in Hayden Library during International Open Access Week. School of Science Dean Nergis Mavalvala kicked off the program by describing the long and proud history of open scholarship at MIT, citing the Institute-wide faculty open access policy and the launch of the open-source digital repository DSpace. “When I was a graduate student, we were trying to figure out how to share our theses during the days of the nascent internet,” she said. “With DSpace, MIT was figuring it out for us.”

    The centerpiece of the program was a series of five-minute presentations from the prize winners on their research. Presenters detailed the ways they created, used, or advocated for open data, and the value that openness brings to their respective fields. Winner Djuna von Maydell, a graduate student in Professor Li-Huei Tsai’s lab who studies the genetic causes of neurodegeneration, underscored why it is important to share data, particularly data obtained from postmortem human brains. 

    “This is data generated from human brains, so every data point stems from a living, breathing human being, who presumably made this donation in the hope that we would use it to advance knowledge and uncover truth,” von Maydell said. “To maximize the probability of that happening, we have to make it available to the scientific community.” 

    MIT community members who would like to learn more about making their research data open can consult MIT Libraries’ Data Services team.

  • Ad hoc committee releases report on remote teaching best practices for on-campus education

    The Ad Hoc Committee on Leveraging Best Practices from Remote Teaching for On-Campus Education has released a report that captures how instructors are weaving lessons learned from remote teaching into in-person classes. Despite the challenges imposed by teaching and learning remotely during the Covid-19 pandemic, the report says, “there were seeds planted then that, we hope, will bear fruit in the coming years.”

    “In the long run, one of the best things about having lived through our remote learning experience may be the intense and broad focus on pedagogy that it necessitated,” the report continues. “In a moment when nobody could just teach the way they had always done before, all of us had to go back to first principles and ask ourselves: What are our learning goals for our students? How can we best help them to achieve these goals?”

    The committee’s work is a direct response to one of the Refinement and Implementation Committees (RICs) formed as part of Task Force 2021 and Beyond. Led by co-chairs Krishna Rajagopal, the William A. M. Burden Professor of Physics, and Janet Rankin, director of the MIT Teaching + Learning Lab, the committee engaged with faculty and instructional staff, associate department heads, and undergraduate and graduate officers across MIT.

    The findings are distilled into four broad themes:

    Community, Well-being, and Belonging. Conversations revealed new ways that instructors cultivated these key interrelated concepts, all of which are fundamental to student learning and success. Many instructors focused more on supporting well-being and building community and belonging during the height of the pandemic precisely because the MIT community, and everyone in it, was under such great stress. Some of the resulting practices are continuing, the committee found. Examples include introducing simple gestures, such as start-of-class welcoming practices, and providing extensions and greater flexibility on student assignments. Also, many across MIT felt that the week-long Thanksgiving break offered in 2020 should become a permanent fixture in the academic calendar, because it enhances the well-being of both students and instructors at a time in the fall semester when everyone’s batteries need recharging. 
    Enhancing Engagement. The committee found a variety of practices that have enhanced engagement between students and instructors; among students; and among instructors. For example, many instructors have continued to offer some office hours on Zoom, which seems to reduce barriers to participation for many students, while offering in-person office hours for those who want to take advantage of opportunities for more open-ended conversations. Several departments increased their usage of undergraduate teaching assistants (UTAs) in ways that make students’ learning experience more engaging and give the UTAs a real teaching experience. In addition, many instructors are leveraging out-of-class communication spaces like Slack, Perusall, and Piazza so students can work together, ask questions, and share ideas. 
    Enriching and Augmenting the Learning Environment. The report presents two ways in which instructors have enhanced learning within the classroom: through blended learning and by incorporating authentic experiences. Although blended learning techniques are not new at MIT, after having made it through remote teaching many faculty have found new ways to combine synchronous in-person teaching with asynchronous activities for on-campus students, such as pre-class or pre-lab sequences of videos with exercises interspersed, take-home lab kits, auto-graded online problems that give students immediate feedback, and recorded lab experiences for subsequent review. In addition, instructors found many creative ways to make students’ learning more authentic by going on virtual field trips, using Zoom to bring experts from around the world into MIT classrooms or to enable interactions with students at other universities, and live-streaming experiments that students could not otherwise experience since they cannot be performed in a teaching lab.   
    Assessing Learning. For all its challenges, the report notes, remote teaching prompted instructors to take a step back and think about what they wanted students to learn, how to support it, and how to measure it. The committee found a variety of examples of alternatives to traditional assessments, such as papers or timed, written exams, that instructors tried during the pandemic and are continuing to use. These alternatives include shorter, more frequent, lower-stakes assessments; oral exams or debates; asynchronous, open-book/notes exams; virtual poster sessions; alternate grading schemes; and uploading paper psets and exams into Gradescope to use its logistics and rubrics to improve grading effectiveness and efficiency.
    A large portion of the report is devoted to an extensive, annotated list of best practices from remote instruction that are being used in the classroom. Interestingly, Rankin says, “so many of the strategies and practices developed and used during the pandemic are based on, and supported by, solid educational research.”

    The report concludes with one broad recommendation: that all faculty and instructors read the findings and experiment with some of the best practices in their own instruction. “Our hope is that the practices shared in the report will continue to be adopted, adapted, and expanded by members of the teaching community at MIT, and that instructors’ openness in sharing and learning from each other will continue,” Rankin says.

    Two additional, specific recommendations are included in the report. First, the committee endorses the RIC 16 recommendation that a Classroom Advisory Board be created to provide strategic input grounded in evolving pedagogy about future classroom use and technology needs. In its conversations, the committee found a number of ways that remote teaching and learning have impacted students’ and instructors’ perceptions as they have returned to the classroom. For example, during the pandemic students benefited from being able to see everyone else’s faces on Zoom. As a result, some instructors would prefer classrooms that enable students to face each other, such as semi-circular classrooms instead of rectangular ones.

    More generally, the committee concluded, MIT needs classrooms with seats and tables that can be quickly and flexibly reconfigured to facilitate varying pedagogical objectives. The Classroom Advisory Board could also examine classroom technology; this includes the role of videoconferencing to create authentic engagement between MIT students and people far from campus, and blended learning that allows students to experience more of the in-classroom engagement with their peers and instructors from which the “magic of MIT” originates.

    Second, the committee recommends that an implementation group be formed to investigate the possibility of changing the MIT academic calendar to create a one-week break over Thanksgiving. “Finalizing an implementation plan will require careful consideration of various significant logistical challenges,” the report says. “However, the resulting gains to both well-being and learning from this change to the fall calendar make doing so worthwhile.”

    Rankin notes that the report findings dovetail with the recently released MIT Strategic Action Plan for Belonging, Achievement and Composition. “I believe that one of the most important things that became really apparent during remote teaching was that community, inclusion, and belonging really matter and are necessary for both learning and teaching, and that instructors can and should play a central role in creating structures and processes to support them in their classrooms and other learning environments,” she says.

    Rajagopal finds it inspiring that “during a time of intense stress — that nobody ever wants to relive — there was such an intense focus on how we teach and how our students learn that, today, in essentially every direction we look we see colleagues improving on-campus education for tomorrow. I hope that the report will help instructors across the Institute, and perhaps elsewhere, learn from each other. Its readers will see, as our committee did, new ways in which students and instructors are finding those moments, those interactions, where the magic of MIT is created.”

    In addition to the report, the co-chairs recommend two other valuable remote teaching resources: a video interview series, TLL’s Fresh Perspectives, and Open Learning’s collection of examples of how MIT faculty and instructors leveraged digital technology to support and transform teaching and learning during the heart of the pandemic.