More stories

  • On the hunt for sustainable materials

    By the time she started high school, Avni Singhal had attended six different schools in a variety of settings, from a traditional public school to a self-paced program. The transitions opened her eyes to how widely educational environments can vary, and got her thinking about the impact those differences have on students.

    “Experiencing so many different types of educational systems exposed me to different ways of looking at things and how that shapes people’s worldviews,” says Singhal.

    Now a fourth-year PhD student in the Department of Materials Science and Engineering, Singhal is still thinking about increasing opportunities for her fellow students, while also pursuing her research. She devotes herself to both developing sustainable materials and improving the graduate experience in her department.

    She recently completed her two-year term as a student representative on the department’s graduate studies committee. In this role, she helped revamp communication around the qualifying exams and introduce student input into the faculty search process.

    “It’s given me a lot of insight into how our department works,” says Singhal. “It’s a chance to get to know faculty, bring up issues that students experience, and work on changing things that we think could be improved.”

    At the same time, Singhal uses atomistic simulations to model material properties, with an eye toward sustainability. She is a part of the Learning Matter Lab, a group that merges data science tools with engineering and physics-based simulation to better design and understand materials. As part of a computational group, Singhal has worked on a range of projects in collaboration with other labs that are looking to combine computing with other disciplines. Some of this work is sponsored by the MIT Climate and Sustainability Consortium, which facilitates connections across MIT labs and industry.

    Joining the Learning Matter Lab was a step out of Singhal’s comfort zone. She arrived at MIT from the University of California at Berkeley with a joint degree in materials science and bioengineering, as well as a degree in electrical engineering and computer science.

    “I was generally interested in doing work on environment-related applications,” says Singhal. “I was pretty hesitant at first to switch entirely to computation because it’s a very different type of lifestyle of research than what I was doing before.”

    Singhal has taken the challenge in stride, contributing to projects including improving carbon capture molecules and developing new deconstructable, degradable plastics. Not only does Singhal have to understand the technical details of her own work, she also needs to understand the big picture and how to best wield the expertise of her collaborators.

    “When I came in, I was very wide-eyed, thinking computation can do everything because I had never done it before,” says Singhal. “It’s that curve where you know a little bit about something, and you think it can do everything. And then as you learn more, you learn where it can and can’t help us, where it can be valuable, and how to figure out in what part of a project it’s useful.”

    Singhal applies a similarly critical lens when thinking about graduate school as a whole. She notes that access to information and resources is often the main factor determining who enters selective educational programs, and that such access becomes increasingly limited at the graduate level.

    “I realized just how much applying is a function of knowing how to do it,” says Singhal, who co-organized and volunteers with the DMSE Application Assistance Program. The program matches prospective applicants with current students to give feedback on their application materials and provide insight into what it’s like attending MIT. Some of the first students Singhal mentored through the program are now participants themselves.

    “The further you get in your educational career, the more you realize how much assistance you got along the way to get where you are,” says Singhal. “That happens at every stage.”

    Looking toward the future, Singhal wants to continue to pursue research with a sustainability impact. She also wants to continue mentoring in some capacity but isn’t in a rush to figure out exactly what that will look like.

    “Grad school doesn’t mean I have to do one thing. I can stay open to all the possibilities of what comes next.”

  • Educating national security leaders on artificial intelligence

    Understanding artificial intelligence and how it relates to matters of national security has become a top priority for military and government leaders in recent years. A new three-day custom program entitled “Artificial Intelligence for National Security Leaders” — AI4NSL for short — aims to educate leaders who may not have a technical background on the basics of AI, machine learning, and data science, and how these topics intersect with national security.

    “National security fundamentally is about two things: getting information out of sensors and processing that information. These are two things that AI excels at. The AI4NSL class engages national security leaders in understanding how to navigate the benefits and opportunities that AI affords, while also understanding its potential negative consequences,” says Aleksander Madry, the Cadence Design Systems Professor at MIT and one of the course’s faculty directors.

    Organized jointly by MIT’s School of Engineering, MIT Stephen A. Schwarzman College of Computing, and MIT Sloan Executive Education, AI4NSL wrapped up its fifth cohort in April. The course brings leaders from every branch of the U.S. military, as well as some foreign military leaders from NATO, to MIT’s campus, where they learn from faculty experts on a variety of technical topics in AI, as well as how to navigate organizational challenges that arise in this context.

    Video: AI for National Security Leaders | MIT Sloan Executive Education

    “We set out to put together a real executive education class on AI for senior national security leaders,” says Madry. “For three days, we are teaching these leaders not only an understanding of what this technology is about, but also how to best adopt these technologies organizationally.”

    The original idea sprang from discussions with senior U.S. Air Force (USAF) leaders and members of the Department of the Air Force (DAF)-MIT AI Accelerator in 2019.

    According to Major John Radovan, deputy director of the DAF-MIT AI Accelerator, in recent years it has become clear that national security leaders needed a deeper understanding of AI technologies and their implications for security, warfare, and military operations. In February 2020, Radovan and his team at the DAF-MIT AI Accelerator started building a custom course to help guide senior leaders in their discussions about AI.

    “This is the only course out there that is focused on AI specifically for national security,” says Radovan. “We didn’t want to make this course just for members of the Air Force — it had to be for all branches of the military. If we are going to operate as a joint force, we need to have the same vocabulary and the same mental models about how to use this technology.”

    After a pilot program in collaboration with MIT Open Learning and the MIT Computer Science and Artificial Intelligence Laboratory, Radovan connected with faculty at the School of Engineering and MIT Schwarzman College of Computing, including Madry, to refine the course’s curriculum. They enlisted the help of colleagues and faculty at MIT Sloan Executive Education to tailor the content to its audience. The result of this cross-school collaboration was a new iteration of AI4NSL, which was launched last summer.

    In addition to providing participants with a basic overview of AI technologies, the course places a heavy emphasis on organizational planning and implementation.

    “What we wanted to do was to create smart consumers at the command level. The idea was to present this content at a higher level so that people could understand the key frameworks, which will guide their thinking around the use and adoption of this material,” says Roberto Fernandez, the William F. Pounds Professor of Management, one of the AI4NSL instructors, and the course’s other faculty director.

    During the three-day course, instructors from MIT’s Department of Electrical Engineering and Computer Science, Department of Aeronautics and Astronautics, and MIT Sloan School of Management cover a wide range of topics.

    The first half of the course starts with a basic overview of concepts including AI, machine learning, deep learning, and the role of data. Instructors also present the problems and pitfalls of using AI technologies, including the potential for adversarial manipulation of machine learning systems, privacy challenges, and ethical considerations.

    In the middle of day two, the course shifts to examine the organizational perspective, encouraging participants to consider how to effectively implement these technologies in their own units.

    “What’s exciting about this course is the way it is formatted first in terms of understanding AI, machine learning, what data is, and how data feeds AI, and then giving participants a framework to go back to their units and build a strategy to make this work,” says Colonel Michelle Goyette, director of the Army Strategic Education Program at the Army War College and an AI4NSL participant.

    Throughout the course, breakout sessions provide participants with an opportunity to collaborate and problem-solve on an exercise together. These breakout sessions build upon one another as the participants are exposed to new concepts related to AI.

    “The breakout sessions have been distinctive because they force you to establish relationships with people you don’t know, so the networking aspect is key. Any time you can do more than receive information and actually get into the application of what you were taught, that really enhances the learning environment,” says Lieutenant General Brian Robinson, the commander of Air Education and Training Command for the USAF and an AI4NSL participant.

    This spirit of teamwork, collaboration, and bringing together individuals from different backgrounds permeates the three-day program. The AI4NSL classroom not only brings together national security leaders from all branches of the military, it also brings together faculty from three schools across MIT.

    “One of the things that’s most exciting about this program is the kind of overarching theme of collaboration,” says Rob Dietel, director of executive programs at Sloan School of Management. “We’re not drawing just from the MIT Sloan faculty, we’re bringing in top faculty from the Schwarzman College of Computing and the School of Engineering. It’s wonderful to be able to tap into those resources that are here on MIT’s campus to really make it the most impactful program that we can.”

    As new developments in generative AI, such as ChatGPT, and machine learning alter the national security landscape, the organizers at AI4NSL will continue to update the curriculum to ensure it is preparing leaders to understand the implications for their respective units.

    “The rate of change for AI and national security is so fast right now that it’s challenging to keep up, and that’s part of the reason we’ve designed this program. We’ve brought in some of our world-class faculty from different parts of MIT to really address the changing dynamic of AI,” adds Dietel.

  • Day of AI curriculum meets the moment

    MIT Responsible AI for Social Empowerment and Education (RAISE) recently celebrated the second annual Day of AI with two flagship local events. The Edward M. Kennedy Institute for the U.S. Senate in Boston hosted a human rights and data policy-focused event that was streamed worldwide. Dearborn STEM Academy in Roxbury, Massachusetts, hosted a student workshop in collaboration with Amazon Future Engineer. With over 8,000 registrations across all 50 U.S. states and 108 countries in 2023, participation in Day of AI has more than doubled since its inaugural year.

    Day of AI is a free curriculum of lessons and hands-on activities, designed by researchers at MIT RAISE to teach kids of all ages and backgrounds the basics of artificial intelligence and its responsible use. This year, resources were available for educators to run at any time and in any increments they chose. The curriculum included five new modules to address timely topics like ChatGPT in School, Teachable Machines, AI and Social Media, Data Science and Me, and more. A collaboration with the International Society for Technology in Education also introduced modules for early elementary students. Educators across the world shared photos, videos, and stories of their students’ engagement, expressing excitement and even relief over the accessible lessons.

    Professor Cynthia Breazeal, director of RAISE, dean for digital learning at MIT, and head of the MIT Media Lab’s Personal Robots research group, said, “It’s been a year of extraordinary advancements in AI, and with that comes necessary conversations and concerns about who and what this technology is for. With our Day of AI events, we want to celebrate the teachers and students who are putting in the work to make sure that AI is for everyone.”

    Reflecting community values and protecting digital citizens

    On May 18, 2023, MIT RAISE hosted a global Day of AI celebration featuring a flagship local event focused on human rights and data policy at the Edward M. Kennedy Institute for the U.S. Senate. Students from the Warren Prescott Middle School and New Mission High School heard from speakers from the City of Boston, Liberty Mutual, and MIT about the many benefits and challenges of artificial intelligence education. Video: MIT Open Learning

    MIT President Sally Kornbluth welcomed students from Warren Prescott Middle School and New Mission High School to the Day of AI program at the Edward M. Kennedy Institute. Kornbluth reflected on the exciting potential of AI, along with the ethical considerations society needs to be responsible for.

    “AI has the potential to do all kinds of fantastic things, including driving a car, helping us with the climate crisis, improving health care, and designing apps that we can’t even imagine yet. But what we have to make sure it doesn’t do is cause harm to individuals, to communities, to us — society as a whole,” she said.

    This theme resonated with each of the event speakers, whose jobs spanned the sectors of education, government, and business. Yo Deshpande, technologist for the public realm, and Michael Lawrence Evans, program director at the Boston Mayor’s Office of New Urban Mechanics, shared how Boston thinks about using AI to improve city life in ways that are “equitable, accessible, and delightful.” Deshpande said, “We have the opportunity to explore not only how AI works, but how using AI can line up with our values, the way we want to be in the world, and the way we want to be in our community.”

    Adam L’Italien, chief innovation officer at Liberty Mutual Insurance (one of Day of AI’s founding sponsors), compared our present moment with AI technologies to the early days of personal computers and internet connection. “Exposure to emerging technologies can accelerate progress in the world and in your own lives,” L’Italien said, while recognizing that the AI development process needs to be inclusive and mitigate biases.

    Human policies for artificial intelligence

    So how does society address these human rights concerns about AI? Marc Aidinoff ’21, former White House Office of Science and Technology Policy chief of staff, led a discussion on how government policy can influence the parameters of how technology is developed and used, like the Blueprint for an AI Bill of Rights. Aidinoff said, “The work of building the world you want to see is far harder than building the technical AI system … How do you work with other people and create a collective vision for what we want to do?” Warren Prescott Middle School students described how AI could be used to solve problems that humans couldn’t. But they also shared concerns that AI could erode data privacy and contribute to learning deficits, social media addiction, job displacement, and propaganda.

    In a mock U.S. Senate trial activity designed by Daniella DiPaola, PhD student at the MIT Media Lab, the middle schoolers investigated what rights might be undermined by AI in schools, hospitals, law enforcement, and corporations. Meanwhile, New Mission High School students workshopped the ideas behind bill S.2314, the Social Media Addiction Reduction Technology (SMART) Act, in an activity designed by Raechel Walker, graduate research assistant in the Personal Robots Group, and Matt Taylor, research assistant at the Media Lab. They discussed what level of control could or should be introduced at the parental, educational, and governmental levels to reduce the risks of internet addiction.

    “Alexa, how do I program AI?”

    The 2023 Day of AI celebration featured a flagship local event at the Dearborn STEM Academy in Roxbury in collaboration with Amazon Future Engineer. Students participated in a hands-on activity using MIT App Inventor as part of Day of AI’s Alexa lesson. Video: MIT Open Learning

    At Dearborn STEM Academy, Amazon Future Engineer helped students work through the Intro to Voice AI curriculum module in real-time. Students used MIT App Inventor to code basic commands for Alexa. In an interview with WCVB, Principal Darlene Marcano said, “It’s important that we expose our students to as many different experiences as possible. The students that are participating are on track to be future computer scientists and engineers.”

    Breazeal told Dearborn students, “We want you to have an informed voice about how you want AI to be used in society. We want you to feel empowered that you can shape the world. You can make things with AI to help make a better world and a better community.”

    Rohit Prasad ’08, senior vice president and head scientist for Alexa at Amazon, and Victor Reinoso ’97, global director of philanthropic education initiatives at Amazon, also joined the event. “Amazon and MIT share a commitment to helping students discover a world of possibilities through STEM and AI education,” said Reinoso. “There’s a lot of current excitement around the technological revolution with generative AI and large language models, so we’re excited to help students explore careers of the future and navigate the pathways available to them.” To highlight their continued investment in the local community and the school program, Amazon donated a $25,000 Innovation and Early College Pathways Program Grant to the Boston Public School system.

    Day of AI down under

    Not only was the Day of AI program widely adopted across the globe, it also inspired Australian educators to adapt a regionally specific curriculum of their own. An estimated 161,000 AI professionals will be needed in Australia by 2030, according to the National Artificial Intelligence Center in the Commonwealth Scientific and Industrial Research Organization (CSIRO), an Australian government agency and Day of AI Australia project partner. CSIRO worked with the University of New South Wales to develop supplementary educational resources on AI ethics and machine learning. Day of AI Australia reached 85,000 students at 400-plus secondary schools this year, sparking curiosity in the next generation of AI experts.

    The interest in AI is accelerating as fast as the technology is being developed. Day of AI offers a unique opportunity for K-12 students to shape our world’s digital future and their own.

    “I hope that some of you will decide to be part of this bigger effort to help us figure out the best possible answers to questions that are raised by AI,” Kornbluth told students at the Edward M. Kennedy Institute. “We’re counting on you, the next generation, to learn how AI works and help make sure it’s for everyone.”

  • Bringing the social and ethical responsibilities of computing to the forefront

    There has been a remarkable surge in the use of algorithms and artificial intelligence to address a wide range of problems and challenges. While their adoption, particularly with the rise of AI, is reshaping nearly every industry sector, discipline, and area of research, such innovations often expose unexpected consequences that demand new norms, new expectations, and new rules and laws.

    To facilitate deeper understanding, the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative in the MIT Schwarzman College of Computing, recently brought together social scientists and humanists with computer scientists, engineers, and other computing faculty for an exploration of the ways in which the broad applicability of algorithms and AI has presented both opportunities and challenges in many aspects of society.

    “The very nature of our reality is changing. AI has the ability to do things that until recently were solely the realm of human intelligence — things that can challenge our understanding of what it means to be human,” remarked Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing, in his opening address at the inaugural SERC Symposium. “This poses philosophical, conceptual, and practical questions on a scale not experienced since the start of the Enlightenment. In the face of such profound change, we need new conceptual maps for navigating the change.”

    The symposium offered a glimpse into the vision and activities of SERC in both research and education. “We believe our responsibility with SERC is to educate and equip our students and enable our faculty to contribute to responsible technology development and deployment,” said Georgia Perakis, the William F. Pounds Professor of Management in the MIT Sloan School of Management, co-associate dean of SERC, and the lead organizer of the symposium. “We’re drawing from the many strengths and diversity of disciplines across MIT and beyond and bringing them together to gain multiple viewpoints.”

    Through a succession of panels and sessions, the symposium delved into a variety of topics related to the societal and ethical dimensions of computing. In addition, 37 undergraduate and graduate students from a range of majors, including urban studies and planning, political science, mathematics, biology, electrical engineering and computer science, and brain and cognitive sciences, participated in a poster session to exhibit their research in this space, covering such topics as quantum ethics, AI collusion in storage markets, computing waste, and empowering users on social platforms for better content credibility.

    Showcasing a diversity of work

    In three sessions devoted to themes of beneficent and fair computing, equitable and personalized health, and algorithms and humans, the SERC Symposium showcased work by 12 faculty members across these domains.

    One such project from a multidisciplinary team of archaeologists, architects, digital artists, and computational social scientists aimed to preserve endangered heritage sites in Afghanistan with digital twins. The project team produced highly detailed interrogable 3D models of the heritage sites, in addition to extended reality and virtual reality experiences, as learning resources for audiences that cannot access these sites.

    In a project for the United Network for Organ Sharing, researchers showed how they used applied analytics to optimize various facets of an organ allocation system in the United States that is currently undergoing a major overhaul in order to make it more efficient, equitable, and inclusive for different racial, age, and gender groups, among others.

    Another talk discussed an area that has not yet received adequate public attention: the broader implications for equity that biased sensor data holds for the next generation of models in computing and health care.

    A talk on bias in algorithms considered both human bias and algorithmic bias, and the potential for improving results by taking into account differences in the nature of the two kinds of bias.

    Other highlighted research included the interaction between online platforms and human psychology; a study on whether decision-makers make systematic prediction mistakes based on the available information; and an illustration of how advanced analytics and computation can be leveraged to inform supply chain management, operations, and regulatory work in the food and pharmaceutical industries.

    Improving the algorithms of tomorrow

    “Algorithms are, without question, impacting every aspect of our lives,” said Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science, in kicking off a panel she moderated on the implications of data and algorithms.

    “Whether it’s in the context of social media, online commerce, automated tasks, and now a much wider range of creative interactions with the advent of generative AI tools and large language models, there’s little doubt that much more is to come,” Ozdaglar said. “While the promise is evident to all of us, there’s a lot to be concerned about as well. This is very much the time for imaginative thinking and careful deliberation to improve the algorithms of tomorrow.”

    Turning to the panel, Ozdaglar asked experts from computing, social science, and data science for insights on how to understand what is to come and shape it to enrich outcomes for the majority of humanity.

    Sarah Williams, associate professor of technology and urban planning at MIT, emphasized the critical importance of comprehending how datasets are assembled, as data are the foundation for all models. She also stressed the need for research to address the potential implications of biases in algorithms, which often find their way in through their creators and the data used in development. “It’s up to us to think about our own ethical solutions to these problems,” she said. “Just as it’s important to progress with the technology, we need to start the field of looking at these questions: What biases are in the algorithms? What biases are in the data, or in that data’s journey?”

    Shifting focus to generative models and whether the development and use of these technologies should be regulated, the panelists — which also included MIT’s Srini Devadas, professor of electrical engineering and computer science, John Horton, professor of information technology, and Simon Johnson, professor of entrepreneurship — all concurred that regulating open-source algorithms, which are publicly accessible, would be difficult given that regulators are still catching up and struggling to even set guardrails for technology that is now 20 years old.

    Returning to the question of how to effectively regulate the use of these technologies, Johnson proposed a progressive corporate tax system as a potential solution. He recommends basing companies’ tax payments on their profits, especially for large corporations whose massive earnings go largely untaxed due to offshore banking. Johnson said this approach could serve as a regulatory mechanism, imposing disincentives that discourage companies from trying to “own the entire world.”

    The role of ethics in computing education

    As computing continues to advance with no signs of slowing down, it is critical to educate students to be intentional about the social impact of the technologies they will be developing and deploying into the world. But can one actually be taught such things? If so, how?

    Caspar Hare, professor of philosophy at MIT and co-associate dean of SERC, posed this looming question to faculty on a panel he moderated on the role of ethics in computing education. All experienced in teaching ethics and thinking about the social implications of computing, each panelist shared their perspective and approach.

    A strong advocate for the importance of learning from history, Eden Medina, associate professor of science, technology, and society at MIT, said that “often the way we frame computing is that everything is new. One of the things that I do in my teaching is look at how people have confronted these issues in the past and try to draw from them as a way to think about possible ways forward.” Medina regularly uses case studies in her classes. As an example of how decisions around technology and data can grow out of very specific contexts, she referred to a paper by Yale University science historian Joanna Radin on the Pima Indian Diabetes Dataset, which raised ethical issues about the history of that particular collection of data that many don’t consider.

    Milo Phillips-Brown, associate professor of philosophy at Oxford University, talked about the Ethical Computing Protocol that he co-created while he was a SERC postdoc at MIT. The protocol, a four-step approach to building technology responsibly, is designed to train computer science students to think more carefully and accurately about the social implications of technology by breaking the process down into more manageable steps. “The basic approach that we take very much draws on the fields of value-sensitive design, responsible research and innovation, and participatory design as guiding insights, and then is also fundamentally interdisciplinary,” he said.

    Fields such as biomedicine and law have an ethics ecosystem that distributes the function of ethical reasoning in these areas. Oversight and regulation are provided to guide front-line stakeholders and decision-makers when issues arise, as are training programs and access to interdisciplinary expertise that they can draw from. “In this space, we have none of that,” said John Basl, associate professor of philosophy at Northeastern University. “For current generations of computer scientists and other decision-makers, we’re actually making them do the ethical reasoning on their own.” Basl commented further that teaching core ethical reasoning skills across the curriculum, not just in philosophy classes, is essential, and that the goal shouldn’t be for every computer scientist to be a professional ethicist, but for them to know enough of the landscape to be able to ask the right questions and seek out the relevant expertise and resources that exist.

    After the final session, interdisciplinary groups of faculty, students, and researchers engaged in animated discussions related to the issues covered throughout the day during a reception that marked the conclusion of the symposium.

  • Learner in Afghanistan reaches beyond barriers to pursue career in data science

    Tahmina S. was a junior studying computer engineering at a top university in Afghanistan when a new government policy banned women from pursuing education. In August 2021, the Taliban prohibited girls from attending school beyond the sixth grade. While women were initially allowed to continue to attend universities, by October 2021, an order from the Ministry of Higher Education declared that all women in Afghanistan were suspended from attending public and private centers of higher education.

    Determined to continue her studies and pursue her ambitions, Tahmina found the MIT Refugee Action Hub (ReACT) and was accepted to its Certificate in Computer and Data Science program in 2022.

    “ReACT helped me realize that I can do big things and be a part of big things,” she says.

    MIT ReACT provides education and professional opportunities to learners from refugee and forcibly displaced communities worldwide. ReACT’s core pillars include academic development, human skills development, employment pathways, and network building. Since 2017, ReACT has offered its Certificate in Computer and Data Science (CDS) program free of cost to learners wherever they live. In 2022, ReACT welcomed its largest and most diverse cohort to date — 136 learners from 29 countries — including 25 learners from Afghanistan, more than half of whom are women.

    Tahmina was able to select her classes in the program, and especially valued learning Python — which has led to her studying other programming languages and gaining more skills in data science. She’s continuing to take online courses in hopes of completing her undergraduate degree, someday pursuing a master’s degree in computer science, and becoming a data scientist.

    “It’s an important and fun career. I really love data,” she says. “If this is my only time for this experience, I will bring to the table what I have, and do my best.”

    In addition to the education ban, Tahmina also faced the challenge of accessing an internet connection, which is expensive where she lives. But she regularly studies between 12 and 14 hours a day to achieve her dreams.

    The ReACT program offers a blend of asynchronous and synchronous learning. Learners complete a curated series of online, rigorous MIT coursework through MITx with the support of teaching assistants and collaborators, and also participate in a series of interactive online workshops in interpersonal skills that are critical to success in education and careers.

    ReACT learners engage with MIT’s global network of experts including MIT staff, faculty, and alumni — as well as collaborators across technology, humanitarian, and government sectors.

    “I loved that experience a lot; it was a huge achievement,” Tahmina says. “I’m grateful ReACT gave me a chance to be a part of that team of amazing people. I’m amazed I completed that program, because it was really challenging.”

    Theory into practice

    Tahmina was one of 10 students from the ReACT cohort accepted to the highly competitive MIT Innovation Leadership Bootcamp program. She worked on a team of five people who initiated a business proposal and took the project through each phase of the development process. Her team’s project was an app for finance management for users aged 23 to 51 — including all the graphic elements and a final presentation. One valuable aspect of the boot camp, Tahmina says, was presenting their project to real investors who then provided business insights and actionable feedback.

    As part of this ReACT cohort, Tahmina also participated in the Global Apprenticeship Program (GAP) pilot, an initiative led by Talanta with MIT Open Learning participating as curriculum provider. The GAP initiative focuses on improving job preparedness among diverse emerging talent and exploring how companies can successfully recruit, onboard, and retain this talent through remote, paid internships. Through the GAP pilot, Tahmina received training in professional skills, resume and interview preparation, and was matched with a financial sector firm for a four-month remote internship in data science.

    To prepare Tahmina and other learners for these professional experiences, ReACT trains its cohorts to work with people who have diverse backgrounds, experiences, and challenges. The nonprofit Na’amal offered workshops covering areas such as problem-solving, innovation and ideation, goal-setting, communication, teamwork, and infrastructure and info security. Tahmina was able to access English classes and learn valuable career skills, such as writing a resume.

    “This was an amazing part for me. There’s a huge difference going from theoretical to practical,” she says. “Not only do you have to have the theoretical experience, you have to have soft skills. You have to communicate everything you learn to other people, because other people in the business might not have that knowledge, so you have to tell the story in a way that they can understand.”

    ReACT wanted the women in the program to be mentored by women who were not only leaders in the tech field, but working in the same geographic region as learners. At the start of the internship, Na’amal connected Tahmina with a mentor, Maha Gad, who is head of talent development at Talabat and lives in Dubai. Tahmina met with Gad at the beginning and end of each month, giving her the opportunity to ask expansive questions. Tahmina says Gad encouraged her to research and plan first, and then worked with her to explore new tools, like Trello.

    Wanting to put her skills to use locally, Tahmina volunteered at the nonprofit Rumie, a community for Afghan women and girls, working as a learning designer, translator, team leader, and social media manager. She currently volunteers at Correspondents of the World as a story ambassador, helping Afghan people share stories, community, and culture — especially telling the stories of Afghan women and the changes they’ve made in the world.

    “It’s been the most beautiful journey of my life that I will never forget,” says Tahmina. “I found ReACT at a time when I had nothing, and I found the most valuable thing.”

  • Festival of Learning 2023 underscores importance of well-designed learning environments

    During its first in-person gathering since 2020, MIT’s Festival of Learning 2023 explored how the learning sciences can inform the Institute on how to best support students. Co-sponsored by MIT Open Learning and the Office of the Vice Chancellor (OVC), this annual event celebrates teaching and learning innovations with MIT instructors, students, and staff.

    Bror Saxberg SM ’85, PhD ’89, founder of LearningForge LLC and former chief learning officer at Kaplan, Inc., was invited as keynote speaker, with opening remarks by MIT Chancellor Melissa Nobles and Vice President for Open Learning Eric Grimson, and discussion moderated by Senior Associate Dean of Open Learning Christopher Capozzola. This year’s festival focused on how creating well-designed learning environments using learning engineering can increase learning success.

    Video: 2023 Festival of Learning: Highlights

    Well-designed learning environments are key

    In his keynote speech “Learning Engineering: What We Know, What We Can Do,” Saxberg defined “learning engineering” as the practical application of learning sciences to real-world problems at scale. He said, “High levels can be reached by all learners, given access to well-designed instruction and motivation for enough practice opportunities.”

    Informed by decades of empirical evidence from the field of learning science, his own research, and insights from Kaplan, Inc., Saxberg finds that a hands-on strategy he calls “prepare, practice, perform” delivers better learning outcomes than a traditional “read, write, discuss” approach. He recommends educators devote at least 60 percent of learning time to hands-on approaches, such as producing, creating, and engaging. Only 20-30 percent of learning time should be spent in the more passive “knowledge acquisition” modes of listening and reading.

    “Here at MIT, a place that relies on data to make informed decisions, learning engineering can provide a framework for us to center in on the learner to identify the challenges associated with learning, and to apply the learning sciences in data-driven ways to improve instructional approaches,” said Nobles. During their opening remarks, Nobles and Grimson both emphasized how learning engineering at MIT is informed by the Institute’s commitment to educating the whole student, which encompasses student well-being and belonging in addition to academic rigor. “What lessons can we take away to change the way we think about education moving forward? This is a chance to iterate,” said Grimson.

    Well-designed learning environments are informed by understanding motivation, considering the connection between long-term and working memory, identifying the range of learners’ prior experience, grounding practice in authentic contexts (i.e., work environments), and using data-driven instructional approaches to iterate and improve.

    Video: 2023 Festival of Learning: Keynote by Bror Saxberg

    Understand learner motivation

    Saxberg asserted that before developing course structures and teaching approaches known to encourage learning, educators must first examine learner motivation. Motivation doesn’t require enjoyment of the subject or task to spur engagement. Similar to how a well-designed physical training program can change your muscle cells, if a learner starts, persists, and exerts mental effort in a well-designed learning environment, they can change their neurons — they learn. Saxberg described four main barriers to learner motivation, and solutions for each:

    The learner doesn’t see the value of the lesson. Ways to address this include helping the learners find value; leveraging the learner’s expertise in another area to better understand the topic at hand; and making the activity itself enjoyable. “Finding value” could be as simple as explaining the practical applications of this knowledge in their future work in the field, or how this lesson prepares learners for their advanced level courses. 
    The learner doesn’t think they’re capable. To build self-efficacy, educators can point to parallel experiences with similar goals that students may have already achieved in another context. Alternatively, educators can share stories of professionals who have successfully transitioned from one area of expertise to another.
    “Something” in the learner’s way, such as not having the time, space, or correct materials. This is an opportunity to demonstrate how a learner can use problem-solving skills to find a solution to their perceived problem. As with the barrier of self-efficacy, educators can assure learners that they are in control of the situation by sharing similar stories of those who’ve encountered the same problem and the solution they devised.
    The learner’s emotional state. This is no small barrier to motivation. If a learner is angry, depressed, scared, or grieving, it will be challenging for them to switch their mindset into learning mode. A wide array of emotions requires a wide array of possible solutions, from structured conversation techniques to recommending professional help.

    Consider the cognitive load

    Saxberg has found that learning occurs when we use working memory to problem-solve, but our working memory can only process three to five verbal or conscious thoughts at a time. Long-term memory stores knowledge that can be accessed non-verbally and non-consciously, which is why experts appear to remember information effortlessly. Until a learner develops that expertise, extraneous information in a lesson will occupy space in their working memory, running the risk of distracting the learner from the desired learning outcome.

    To accommodate learners’ finite cognitive load, Saxberg suggested the solution of reevaluating which material is essential, then simplifying the exercise or removing unnecessary material accordingly. “That notion of, ‘what do we really need students to be able to do?’ helps you focus,” said Saxberg.

    Another solution is to leverage the knowledge, skills, and interests learners already bring to the course — these long-term memories can scaffold the new material. “What do you have in your head already, what do you love, what’s easy to draw from long-term memory? That would be the starting point for challenging new skills. It’s not the ending point because you want to use your new skills to then find out new things,” Saxberg said.

    Finally, consider how your course engages with the syllabus. Do you explain the reasoning behind the course structure? Do you show how the exercises or material will be applied to future courses or the field? Do you share best practices for engaging working memory and learning? By acknowledging and empathizing with the practical challenges that learners face, you can remove a barrier from their cognitive load.

    Ground practice in authentic contexts

    Saxberg stated that few experts read textbooks to learn new information — they discover what they need to know while working in the field, using those relevant facts in context. As such, students will have an easier time remembering facts if they’re practicing in relevant or similar environments to their future work.

    If students can practice classifying problems in real work contexts rather than theoretical practice problems, they can build a framework to classify what’s important. That helps students recognize the type of problem they’re trying to solve before trying to solve the problem itself. With enough hands-on practice and examples of how experts use processes and identify which principles are relevant, learners can holistically learn entire procedures. And that learning continues once learners graduate to the workforce: professionals often meet to exchange knowledge at conferences, charrettes, and other gatherings.

    Enhancing teaching at MIT

    The Festival of Learning furthers the Office of the Vice Chancellor’s mission to advance academic innovation that will foster the growth of MIT students. The festival also aligns with MIT Open Learning’s Residential Education team’s goal of making MIT education more effective and efficient. Throughout the year, the team offers continuous support to MIT faculty and instructors using digital technologies to augment and transform how they teach.

    “We are doubling down on our commitment to continuous growth in how we teach,” said Nobles.

  • Democratizing education: Bringing MIT excellence to the masses

    How do you quantify the value of education or measure success? For the team behind the MIT Institute for Data, Systems, and Society’s (IDSS) MicroMasters Program in Statistics and Data Science (SDS), providing over 1,000 individuals from around the globe with access to MIT-level programming feels like a pretty good place to start. 

    Thanks to the MIT-conceived MicroMasters-style format, SDS faculty director Professor Devavrat Shah and his colleagues have eliminated the physical restrictions of a traditional brick-and-mortar education, giving 1,178 learners (and counting) from 89 countries access to an MIT education.

    “Taking classes from a Nobel Prize winner doesn’t happen every day,” says Oscar Vele, a strategic development worker for the town of Cuenca, Ecuador. “My dream has always been to study at MIT. I knew it was not easy — now, through this program, my dream came true.”

    “With an online forum, in principle, admission is no longer the gate — the merit is a gate,” says Shah. “If you take a class that is MIT-level, and if you perform at MIT-level, then you should get MIT-level credentials.”

    The MM SDS program, delivered in collaboration with MIT Open Learning, plays a key role in the IDSS mission of advancing education in data science, and supports MIT’s overarching belief that everyone should be able to access a quality education no matter what their life circumstances may be.

    “Getting a program like this up and running to the point where it has credentials and credibility across the globe, is an important milestone for us,” says Shah. “Basically, for us, it says we are here to stay, and we are just getting started.”

    Since the program launched in 2018, Shah says he and his team have seen learners from all walks of life, from high-schoolers looking for a challenge to late-in-life learners looking to either evolve or refresh their knowledge.

    “Then there are individuals who want to prove to themselves that they can achieve serious knowledge and build a career,” Shah says. “Circumstances throughout their lives, whether it’s the country or socioeconomic conditions they’re born in, they have never had the opportunity to do something like this, and now they have an MIT-level education and credentials, which is a huge deal for them.”

    Many learners overcome challenges to complete the program, from financial hardships to balancing work, home life, and coursework, and finding private, internet-enabled space for learning — not to mention the added complications of a global pandemic. One Ukrainian learner even finished the program after fleeing her apartment for a bomb shelter.

    Remapping the way to a graduate degree

    For Diogo da Silva Branco Magalhaes, a 44-year-old lifelong learner, curiosity and the desire to evolve within his current profession brought him to the MicroMasters program. Having spent 15 years working in the public transport sector, da Silva Branco Magalhaes had a very specific challenge at the front of his mind: artificial intelligence.

    “It’s not science fiction; it’s already here,” he says. “Think about autonomous vehicles, on-demand transportation, mobility as a service — AI and data, in particular, are the driving force of a number of disruptions that will affect my industry.”

    When he signed up for the MicroMasters Program in Statistics and Data Science, da Silva Branco Magalhaes said he had no long-term plans, but was taking a first step. “I just wanted to have a first contact with this reality, understand the basics, and then let’s see how it goes,” he recalls.

    Now, after earning his credentials in 2021, he finds himself a few weeks into an accelerated master’s program at Northwestern University, one of several graduate pathways supported by the MM SDS program.

    “I was really looking to gain some basic background knowledge; I didn’t expect the level of quality and depth they were able to provide in an online lecture format,” he says. “Having access to this kind of content — it’s a privilege, and now that we have it, we have to make the most of it.”

    A refreshing investment

    As an applied mathematician with 15 years of experience in the U.S. defense sector, Celia Wilson says she felt comfortable with her knowledge, though not 100 percent confident that her math skills could stand up against the next generation.

    “I felt I was getting left behind,” she says. “So I decided to take some time out and invest in myself, and this program was a great opportunity to systematize and refresh my knowledge of statistics and data science.”

    Since completing the course, Wilson says she has secured a new job as a director of data and analytics, where she is confident in her ability to manage a team of the “new breed of data scientists.” It turns out, however, that completing the program has given her an even greater gift than self-confidence.

    “Most importantly,” she adds, “it’s inspired my daughters to tell anyone who will listen that math is definitely for girls.”

    Connecting an engaged community

    Each course is connected to an online forum that allows learners to enhance their experience through real-time conversations with others in their cohort.

    “We have worked hard to provide a scalable version of the traditional teaching assistant support system that you would get in a usual on-campus class, with a great online forum for people to connect with each other as learners,” Shah says.

    David Khachatrian, a data scientist working on improving the drug discovery pipeline, says that leveraging the community to hone his ability to “think clearly and communicate effectively with others” mattered more than anything.

    “Take the opportunity to engage with your community of fellow learners and facilitators — answer questions for others to give back to the community, solidify your own understanding, and practice your ability to explain clearly,” Khachatrian says. “These skills and behaviors will help you to succeed not just in SDS, but wherever you go in the future.”

    “There were a lot of active contributions from a lot of learners and I felt it was really a very strong component of the course,” da Silva Branco Magalhaes adds. “I had some offline contact with other students who are connections that I’ve kept up with to this day.”

    A solid path forward

    “We have a dedicated team supporting the MM SDS community on the MIT side,” Shah says, citing the contributions of Karene Chu, MM SDS assistant director of education; Susana Kevorkova, the MM SDS program manager; and Jeremy Rossen, MM program coordinator. “They’ve done so much to ensure the success of the program and our learners, and they are constantly adding value to the program — like identifying real-time supplementary opportunities for learners to participate in, including the IDSS Policy Hackathon.”

    The program now holds online “graduation” ceremonies, where credential holders from all over the world share their experiences. Says Shah, who looks forward to celebrating the next 1,000 learners: “Every time I think about it, I feel emotional. It feels great, and it keeps us going.”

  • 3 Questions: Leo Anthony Celi on ChatGPT and medicine

    Launched in November 2022, ChatGPT is a chatbot that can not only engage in human-like conversation, but also provide accurate answers to questions in a wide range of knowledge domains. The chatbot, created by the firm OpenAI, is based on a family of “large language models” — algorithms that can recognize, predict, and generate text based on patterns they identify in datasets containing hundreds of millions of words.

    In a study appearing in PLOS Digital Health this week, researchers report that ChatGPT performed at or near the passing threshold of the U.S. Medical Licensing Exam (USMLE) — a comprehensive, three-part exam that doctors must pass before practicing medicine in the United States. In an editorial accompanying the paper, Leo Anthony Celi, a principal research scientist at MIT’s Institute for Medical Engineering and Science, a practicing physician at Beth Israel Deaconess Medical Center, and an associate professor at Harvard Medical School, and his co-authors argue that ChatGPT’s success on this exam should be a wake-up call for the medical community.

    Q: What do you think the success of ChatGPT on the USMLE reveals about the nature of medical education and the evaluation of students?

    A: The framing of medical knowledge as something that can be encapsulated into multiple choice questions creates a cognitive frame of false certainty. Medical knowledge is often taught as fixed model representations of health and disease. Treatment effects are presented as stable over time despite constantly changing practice patterns. Mechanistic models are passed on from teachers to students with little emphasis on how robustly those models were derived, the uncertainties that persist around them, and how they must be recalibrated to reflect advances worthy of incorporation into practice.

    ChatGPT passed an examination that rewards memorizing the components of a system rather than analyzing how it works, how it fails, how it was created, and how it is maintained. Its success demonstrates some of the shortcomings in how we train and evaluate medical students. Critical thinking requires appreciating that ground truths in medicine continually shift, and more importantly, understanding how and why they shift.

    Q: What steps do you think the medical community should take to modify how students are taught and evaluated?  

    A: Learning is about leveraging the current body of knowledge, understanding its gaps, and seeking to fill those gaps. It requires being comfortable with and being able to probe the uncertainties. We fail as teachers by not teaching students how to understand the gaps in the current body of knowledge. We fail them when we preach certainty over curiosity, and hubris over humility.  

    Medical education also requires being aware of the biases in the way medical knowledge is created and validated. These biases are best addressed by optimizing the cognitive diversity within the community. More than ever, there is a need to inspire cross-disciplinary collaborative learning and problem-solving. Medical students need data science skills that will allow every clinician to contribute to, continually assess, and recalibrate medical knowledge.

    Q: Do you see any upside to ChatGPT’s success in this exam? Are there beneficial ways that ChatGPT and other forms of AI can contribute to the practice of medicine? 

    A: There is no question that large language models (LLMs) such as ChatGPT are very powerful tools in sifting through content beyond the capabilities of experts, or even groups of experts, and extracting knowledge. However, we will need to address the problem of data bias before we can leverage LLMs and other artificial intelligence technologies. The body of knowledge that LLMs train on, both medical and beyond, is dominated by content and research from well-funded institutions in high-income countries. It is not representative of most of the world.

    We have also learned that even mechanistic models of health and disease may be biased. These inputs are fed to encoders and transformers that are oblivious to these biases. Ground truths in medicine are continuously shifting, and currently, there is no way to determine when ground truths have drifted. LLMs do not evaluate the quality and the bias of the content they are being trained on. Neither do they provide the level of uncertainty around their output. But the perfect should not be the enemy of the good. There is tremendous opportunity to improve the way health care providers currently make clinical decisions, which we know are tainted with unconscious bias. I have no doubt AI will deliver its promise once we have optimized the data input.