More stories

  • Understanding viral justice

    In the wake of the Covid-19 pandemic, the word “viral” has a new resonance, and it’s not necessarily positive. Ruha Benjamin, a scholar who investigates the social dimensions of science, medicine, and technology, advocates a shift in perspective. She thinks justice can also be contagious. That’s the premise of Benjamin’s award-winning book “Viral Justice: How We Grow the World We Want,” as she shared with MIT Libraries staff on a June 14 visit. 

    “If this pandemic has taught us anything, it’s that something almost undetectable can be deadly, and that we can transmit it without even knowing,” said Benjamin, professor of African American studies at Princeton University. “Doesn’t this imply that small things, seemingly minor actions, decisions, or habits, could have exponential effects in the other direction, tipping the scales towards justice?” 

    To seek a more just world, Benjamin exhorted library staff to notice the ways exclusion is built into our daily lives, showing examples of park benches with armrests at regular intervals. On the surface they appear welcoming, but they also make lying down — or sleeping — impossible. This idea is taken to the extreme with “Pay and Sit,” an art installation by Fabian Brunsing in the form of a bench that deploys sharp spikes on the seat if the user doesn’t pay a meter. It serves as a powerful metaphor for discriminatory design. 

    “Dr. Benjamin’s keynote was seriously mind-blowing,” said Cherry Ibrahim, human resources generalist in the MIT Libraries. “One part that really grabbed my attention was when she talked about benches purposely designed to prevent unhoused people from sleeping on them. There are these hidden spikes in our community that we might not even realize because they don’t directly impact us.” 

    Benjamin urged the audience to look for those “spikes,” which new technologies can make even more insidious — gender and racial bias in facial recognition, the use of racial data in software used to predict student success, algorithmic bias in health care — often in the guise of progress. She coined the term “the New Jim Code” to describe the combination of coded bias and the imagined objectivity we ascribe to technology. 

    “At the MIT Libraries, we’re deeply concerned with combating inequities through our work, whether it’s democratizing access to data or investigating ways disparate communities can participate in scholarship with minimal bias or barriers,” says Director of Libraries Chris Bourg. “It’s our mission to remove the ‘spikes’ in the systems through which we create, use, and share knowledge.”

    Calling out the harms encoded into our digital world is critical, argues Benjamin, but we must also create alternatives. This is where the collective power of individuals can be transformative. Benjamin shared examples of those who are “re-imagining the default settings of technology and society,” citing initiatives like the Data for Black Lives movement and the Detroit Community Technology Project. “I’m interested in the way that everyday people are changing the digital ecosystem and demanding different kinds of rights and responsibilities and protections,” she said.

    In 2020, Benjamin founded the Ida B. Wells Just Data Lab with a goal of bringing together students, educators, activists, and artists to develop a critical and creative approach to data conception, production, and circulation. Its projects have examined different aspects of data and racial inequality: assessing the impact of Covid-19 on student learning; providing resources that confront the experience of Black mourning, grief, and mental health; and developing a playbook for Black maternal mental health. Through the lab’s student-led projects, Benjamin sees the next generation re-imagining technology in ways that respond to the needs of marginalized people.

    “If inequity is woven into the very fabric of our society — we see it from policing to education to health care to work — then each twist, coil, and code is a chance for us to weave new patterns, practices, and politics,” she said. “The vastness of the problems that we’re up against will be their undoing.”

  • A new way to look at data privacy

    Imagine that a team of scientists has developed a machine-learning model that can predict whether a patient has cancer from lung scan images. They want to share this model with hospitals around the world so clinicians can start using it in diagnosis.

    But there’s a problem. To teach their model how to predict cancer, they showed it millions of real lung scan images, a process called training. Those sensitive data, which are now encoded into the inner workings of the model, could potentially be extracted by a malicious agent. The scientists can prevent this by adding noise, or generic randomness, to the model, making it harder for an adversary to guess the original data. However, this perturbation reduces a model’s accuracy, so the less noise one needs to add, the better.
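
    The idea of perturbing a trained model can be sketched in a few lines. This is a toy illustration of the general approach described above, not the researchers’ actual method; the function name and parameter values are hypothetical.

```python
import numpy as np

def perturb_parameters(params: np.ndarray, noise_scale: float, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise to trained model parameters.

    A larger noise_scale gives stronger protection against recovering
    the training data, but degrades the model's accuracy.
    """
    rng = np.random.default_rng(seed)
    return params + rng.normal(0.0, noise_scale, size=params.shape)

weights = np.array([0.8, -1.2, 0.5])       # toy "trained" parameters
private = perturb_parameters(weights, noise_scale=0.1)
```

    The tension the researchers address is visible even here: a larger `noise_scale` hides more, but moves the released parameters further from the trained ones.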

    MIT researchers have developed a technique that enables the user to add the smallest possible amount of noise while still ensuring the sensitive data are protected.

    The researchers created a new privacy metric, which they call Probably Approximately Correct (PAC) Privacy, and built a framework based on this metric that can automatically determine the minimal amount of noise that needs to be added. Moreover, this framework does not need knowledge of the inner workings of a model or its training process, which makes it easier to use for different types of models and applications.

    In several cases, the researchers show that the amount of noise required to protect sensitive data from adversaries is far less with PAC Privacy than with other approaches. This could help engineers create machine-learning models that provably hide training data, while maintaining accuracy in real-world settings.

    “PAC Privacy exploits the uncertainty or entropy of the sensitive data in a meaningful way,  and this allows us to add, in many cases, an order of magnitude less noise. This framework allows us to understand the characteristics of arbitrary data processing and privatize it automatically without artificial modifications. While we are in the early days and we are doing simple examples, we are excited about the promise of this technique,” says Srini Devadas, the Edwin Sibley Webster Professor of Electrical Engineering and co-author of a new paper on PAC Privacy.

    Devadas wrote the paper with lead author Hanshen Xiao, an electrical engineering and computer science graduate student. The research will be presented at the International Cryptology Conference (Crypto 2023).

    Defining privacy

    A fundamental question in data privacy is: How much sensitive data could an adversary recover from a machine-learning model with noise added to it?

    Differential Privacy, one popular privacy definition, says privacy is achieved if an adversary who observes the released model cannot infer whether an arbitrary individual’s data was used in the training process. But provably preventing an adversary from distinguishing data usage often requires large amounts of noise to obscure it. This noise reduces the model’s accuracy.
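
    For a concrete sense of how differential privacy trades noise for privacy, here is a minimal sketch of the standard Laplace mechanism applied to a counting query. The function and the numbers are illustrative, not from the paper.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float, seed: int = 0) -> float:
    """Release a noisy answer with Laplace noise of scale sensitivity/epsilon.

    A smaller epsilon means a stronger privacy guarantee, and therefore
    a larger noise scale and a less accurate released answer.
    """
    rng = np.random.default_rng(seed)
    return true_value + rng.laplace(0.0, sensitivity / epsilon)

# Counting query: adding or removing one person changes the count by at most 1.
noisy_count = laplace_mechanism(true_value=128, sensitivity=1.0, epsilon=0.5)
```

    The accuracy cost mentioned above shows up directly: halving epsilon doubles the typical error of the released count.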

    PAC Privacy looks at the problem a bit differently. It characterizes how hard it would be for an adversary to reconstruct any part of randomly sampled or generated sensitive data after noise has been added, rather than only focusing on the distinguishability problem.

    For instance, if the sensitive data are images of human faces, differential privacy would focus on whether the adversary can tell if someone’s face was in the dataset. PAC Privacy, on the other hand, could look at whether an adversary could extract a silhouette — an approximation — that someone could recognize as a particular individual’s face.

    Once they established the definition of PAC Privacy, the researchers created an algorithm that automatically tells the user how much noise to add to a model to prevent an adversary from confidently reconstructing a close approximation of the sensitive data. This algorithm guarantees privacy even if the adversary has infinite computing power, Xiao says.

    To find the optimal amount of noise, the PAC Privacy algorithm relies on the uncertainty, or entropy, in the original data from the viewpoint of the adversary.

    This automatic technique takes samples randomly from a data distribution or a large data pool and runs the user’s machine-learning training algorithm on that subsampled data to produce an output learned model. It does this many times on different subsamplings and compares the variance across all outputs. This variance determines how much noise one must add — a smaller variance means less noise is needed.
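
    That subsample-train-and-compare loop can be sketched as follows. Here `train_fn` stands in for the user’s training algorithm, and the toy “training” is just a mean; all names and values are illustrative, not the researchers’ implementation.

```python
import numpy as np

def estimate_output_variance(data_pool, train_fn, subsample_size, trials=100, seed=0):
    """Repeatedly subsample the pool, run training, and measure how much
    the learned outputs vary across subsamplings. Smaller variance
    suggests less noise is needed to hide any single subsample."""
    rng = np.random.default_rng(seed)
    outputs = []
    for _ in range(trials):
        idx = rng.choice(len(data_pool), size=subsample_size, replace=False)
        outputs.append(train_fn(data_pool[idx]))
    return np.var(np.stack(outputs), axis=0)

# Toy example: the "model" is just the per-feature mean of the subsampled data.
pool = np.random.default_rng(1).normal(size=(1000, 3))
variance = estimate_output_variance(pool, lambda d: d.mean(axis=0), subsample_size=200)
```

    Note that nothing here inspects the model’s internals: the loop only needs to run training as a black box, which is what makes the approach applicable to many kinds of models.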

    Algorithm advantages

    Unlike other privacy approaches, the PAC Privacy algorithm does not need knowledge of the inner workings of a model or its training process.

    When implementing PAC Privacy, a user can specify their desired level of confidence at the outset. For instance, perhaps the user wants a guarantee that an adversary will not be more than 1 percent confident that they have successfully reconstructed the sensitive data to within 5 percent of its actual value. The PAC Privacy algorithm automatically tells the user the optimal amount of noise that needs to be added to the output model before it is shared publicly, in order to achieve those goals.

    “The noise is optimal, in the sense that if you add less than we tell you, all bets could be off. But the effect of adding noise to neural network parameters is complicated, and we are making no promises on the utility drop the model may experience with the added noise,” Xiao says.

    This points to one limitation of PAC Privacy — the technique does not tell the user how much accuracy the model will lose once the noise is added. PAC Privacy also involves repeatedly training a machine-learning model on many subsamplings of data, so it can be computationally expensive.  

    To improve PAC Privacy, one approach is to modify a user’s machine-learning training process so it is more stable, meaning that the output model it produces does not change very much when the input data is subsampled from a data pool.  This stability would create smaller variances between subsample outputs, so not only would the PAC Privacy algorithm need to be run fewer times to identify the optimal amount of noise, but it would also need to add less noise.

    An added benefit of stabler models is that they often have less generalization error, which means they can make more accurate predictions on previously unseen data, a win-win situation between machine learning and privacy, Devadas adds.

    “In the next few years, we would love to look a little deeper into this relationship between stability and privacy, and the relationship between privacy and generalization error. We are knocking on a door here, but it is not clear yet where the door leads,” he says.

    “Obfuscating the usage of an individual’s data in a model is paramount to protecting their privacy. However, to do so can come at the cost of the data’s, and therefore the model’s, utility,” says Jeremy Goodsitt, senior machine learning engineer at Capital One, who was not involved with this research. “PAC provides an empirical, black-box solution, which can reduce the added noise compared to current practices while maintaining equivalent privacy guarantees. In addition, its empirical approach broadens its reach to more data-consuming applications.”

    This research is funded, in part, by DSTA Singapore, Cisco Systems, Capital One, and a MathWorks Fellowship.

  • Making sense of all things data

    Data, and more specifically using data, is not a new concept, but it remains an elusive one. It comes with terms like “the internet of things” (IoT) and “the cloud,” and no matter how often those are explained, smart people can still be confused. And then there’s the sheer amount of information available and the speed with which it arrives. Software is omnipresent. It’s in coffeemakers and watches, gathering data every second. The question becomes how to harness all this new technology and take advantage of its potential insights and analytics. It’s not a small ask.

    “Putting our arms around what digital transformation is can be difficult to do,” says Abel Sanchez. But as the executive director and research director of MIT’s Geospatial Data Center, that’s exactly what he does with his work in helping industries and executives shift their operations in order to make sense of their data and be able to use it to help their bottom lines.

    Handling the pace

    Data can lead to better business decisions. That’s not a new or surprising insight, but as Sanchez says, people still tend to work off of intuition. Part of the problem is that they don’t know what to do with their available data, and there’s usually plenty of it. Another part is that so much information is being produced from so many sources. As soon as a person wakes up and turns on their phone or starts their car, software is running. The data comes in fast, but because it’s also complex, “it outperforms people,” he says.

    Uber offers an example: once a person opens the app to request a ride, predictive models start firing at the rate of 1 million predictions per second. It’s all in order to optimize the trip, taking into account factors such as school schedules, roadway conditions, traffic, and a driver’s availability. It’s helpful for the task, but it’s something that “no human would be able to do,” he says.

    The solution requires a few components. One is a new way to store data. In the past, the classic was creating the “perfect library,” which was too structured. The response to that was to create a “data lake,” where all the information would go in and somehow people would make sense of it. “This also failed,” Sanchez says.

    Data storage needs to be re-imagined, with greater accessibility as a key element. In most corporations, only 10-20 percent of employees have the access and technical skill to work with the data. The rest have to go through a centralized resource and get into a queue, an inefficient system. The goal, Sanchez says, is to democratize the information by moving to a modern stack, which would convert what he calls “dormant data” into “active data.” The result? Better decisions could be made.

    The first big step companies need to take is the will to make the change. Part of it is an investment of money, but it’s also an attitude shift. Corporations can have an embedded culture where things have always been done a certain way and deviating from that is resisted because it’s different. But when it comes to data, a new approach is needed. Managing and curating the information can no longer rest in the hands of one person with the institutional memory. It’s not possible. It’s also not practical, because companies are losing out on efficiency and productivity; with technology, “What used to take years to do, now you can do in days,” Sanchez says.

    The new player

    The above exemplifies what’s been involved with coordinating data along four intertwined components: IoT, AI, the cloud, and security. The first two create the information, which then gets stored in the cloud, but it’s all for naught without robust security. But one relative newcomer has come into the picture. It’s blockchain technology, a term that is often said but still not fully understood, adding further to the confusion.

    Sanchez says that information has been handled and organized a certain way with the World Wide Web. Blockchain is an opportunity to be more nimble and productive by offering the chance to have an accepted identity, currency, and logic that works on a global scale. The holdup has always been that there’s never been any agreement on those three components on a global scale. It leads to people being shut out, inefficiency, and lost business.

    One example of blockchain’s potential, Sanchez says, is with hospitals. In the United States, they’re private, and information has to be constantly integrated from doctors, insurance companies, labs, government regulators, and pharmaceutical companies. That leads to repeated steps to do something as simple as recognizing a patient’s identity, which often can’t be agreed upon. With blockchain, these various entities can create a consortium using open-source code with no barriers to access, and because the members have already agreed on how to identify a patient, they can do so quickly and easily and “remove that level of effort.” It’s an incremental step, but one that can be built upon, reducing cost and risk.
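
    The tamper-evidence underlying such a consortium comes from hash chaining: each record commits to the one before it, so altering any earlier entry invalidates everything after. A minimal sketch, with hypothetical record fields and a fixed timestamp for reproducibility:

```python
import hashlib
import json

def make_block(data: dict, prev_hash: str) -> dict:
    """Create a block whose hash covers its contents and the previous
    block's hash, so altering any earlier record breaks the chain."""
    block = {"data": data, "prev_hash": prev_hash, "timestamp": 0}  # fixed timestamp for determinism
    payload = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

genesis = make_block({"patient_id": "ABC-123"}, prev_hash="0" * 64)
second = make_block({"lab_result": "negative"}, prev_hash=genesis["hash"])
```

    Because `second` embeds the hash of `genesis`, any party in the consortium can detect a retroactive change to the earlier record without trusting the party that holds it.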

    Another example — “one of the best examples,” Sanchez says — is what was done in Indonesia. Most of the rice, corn, and wheat that comes from this area is produced on smallholder farms. For the people making loans, it’s expensive to understand the risk of cultivating these plots of land. Compounding the problem, these farmers don’t have state-issued identities or credit records, so, “They don’t exist in the modern economic sense,” he says. They don’t have access to loans, and banks are losing out on potentially good customers.

    With this project, blockchain allowed local people to gather information about the farms on their smartphones. Banks could acquire the information and compensate the people with tokens, thereby incentivizing the work. The bank would see the creditworthiness of the farms, and farmers could end up getting fair loans.

    In the end, it creates a beneficial circle for the banks, farmers, and community, but it also represents what can be done with digital transformation by allowing businesses to optimize their processes, make better decisions, and ultimately profit.

    “It’s a tremendous new platform,” Sanchez says. “This is the promise.”

  • 3 Questions: Honing robot perception and mapping

    Walking to a friend’s house or browsing the aisles of a grocery store might feel like simple tasks, but they in fact require sophisticated capabilities. That’s because humans are able to effortlessly understand their surroundings and detect complex information about patterns, objects, and their own location in the environment.

    What if robots could perceive their environment in a similar way? That question is on the minds of MIT Laboratory for Information and Decision Systems (LIDS) researchers Luca Carlone and Jonathan How. In 2020, a team led by Carlone released the first iteration of Kimera, an open-source library that enables a single robot to construct a three-dimensional map of its environment in real time, while labeling different objects in view. Last year, Carlone’s and How’s research groups (SPARK Lab and Aerospace Controls Lab) introduced Kimera-Multi, an updated system in which multiple robots communicate among themselves in order to create a unified map. A 2022 paper associated with the project recently received the IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award, given to the best paper published in the journal in 2022.

    Carlone, who is the Leonardo Career Development Associate Professor of Aeronautics and Astronautics, and How, the Richard Cockburn Maclaurin Professor in Aeronautics and Astronautics, spoke to LIDS about Kimera-Multi and the future of how robots might perceive and interact with their environment.

    Q: Currently your labs are focused on increasing the number of robots that can work together in order to generate 3D maps of the environment. What are some potential advantages to scaling this system?

    How: The key benefit hinges on consistency, in the sense that a robot can create an independent map, and that map is self-consistent but not globally consistent. We’re aiming for the team to have a consistent map of the world; that’s the key difference in trying to form a consensus between robots as opposed to mapping independently.

    Carlone: In many scenarios it’s also good to have a bit of redundancy. For example, if we deploy a single robot in a search-and-rescue mission, and something happens to that robot, it would fail to find the survivors. If multiple robots are doing the exploring, there’s a much better chance of success. Scaling up the team of robots also means that any given task may be completed in a shorter amount of time.

    Q: What are some of the lessons you’ve learned from recent experiments, and challenges you’ve had to overcome while designing these systems?

    Carlone: Recently we did a big mapping experiment on the MIT campus, in which eight robots traversed up to 8 kilometers in total. The robots have no prior knowledge of the campus, and no GPS. Their main tasks are to estimate their own trajectory and build a map around it. You want the robots to understand the environment as humans do; humans not only understand the shape of obstacles, to get around them without hitting them, but also understand that an object is a chair, a desk, and so on. There’s the semantics part.

    The interesting thing is that when the robots meet each other, they exchange information to improve their map of the environment. For instance, if robots connect, they can leverage information to correct their own trajectory. The challenge is that if you want to reach a consensus between robots, you don’t have the bandwidth to exchange too much data. One of the key contributions of our 2022 paper is to deploy a distributed protocol, in which robots exchange limited information but can still agree on how the map looks. They don’t send camera images back and forth but only exchange specific 3D coordinates and clues extracted from the sensor data. As they continue to exchange such data, they can form a consensus.
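
    The “agree without exchanging raw sensor data” idea can be illustrated with a simple distributed-averaging step, in which each robot nudges its estimate of a landmark’s 3D position toward its neighbors’ estimates. This is a generic consensus sketch under made-up numbers, not the Kimera-Multi protocol:

```python
import numpy as np

def consensus_step(estimates, neighbors, weight=0.5):
    """One synchronous round: each robot blends its own estimate with the
    mean of its neighbors' estimates, exchanging only 3D coordinates."""
    updated = []
    for i, est in enumerate(estimates):
        neighbor_mean = np.mean([estimates[j] for j in neighbors[i]], axis=0)
        updated.append((1 - weight) * est + weight * neighbor_mean)
    return updated

# Three robots with slightly different estimates of the same landmark.
est = [np.array([1.0, 0.0, 0.0]), np.array([1.1, 0.1, 0.0]), np.array([0.9, -0.1, 0.0])]
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(20):
    est = consensus_step(est, nbrs)  # estimates converge toward a common value
```

    Each message here is three floats, not a camera image, which is the bandwidth point made above.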

    Right now we are building color-coded 3D meshes or maps, in which the color contains some semantic information, like “green” corresponds to grass, and “magenta” to a building. But as humans, we have a much more sophisticated understanding of reality, and we have a lot of prior knowledge about relationships between objects. For instance, if I was looking for a bed, I would go to the bedroom instead of exploring the entire house. If you start to understand the complex relationships between things, you can be much smarter about what the robot can do in the environment. We’re trying to move from capturing just one layer of semantics, to a more hierarchical representation in which the robots understand rooms, buildings, and other concepts.

    Q: What kinds of applications might Kimera and similar technologies lead to in the future?

    How: Autonomous vehicle companies are doing a lot of mapping of the world and learning from the environments they’re in. The holy grail would be if these vehicles could communicate with each other and share information, then they could improve models and maps that much quicker. The current solutions out there are individualized. If a truck pulls up next to you, you can’t see in a certain direction. Could another vehicle provide a field of view that your vehicle otherwise doesn’t have? This is a futuristic idea because it requires vehicles to communicate in new ways, and there are privacy issues to overcome. But if we could resolve those issues, you could imagine a significantly improved safety situation, where you have access to data from multiple perspectives, not only your field of view.

    Carlone: These technologies will have a lot of applications. Earlier I mentioned search and rescue. Imagine that you want to explore a forest and look for survivors, or map buildings after an earthquake in a way that can help first responders access people who are trapped. Another setting where these technologies could be applied is in factories. Currently, robots that are deployed in factories are very rigid. They follow patterns on the floor, and are not really able to understand their surroundings. But if you’re thinking about much more flexible factories in the future, robots will have to cooperate with humans and exist in a much less structured environment.

  • Learning the language of molecules to predict their properties

    Discovering new materials and drugs typically involves a manual, trial-and-error process that can take decades and cost millions of dollars. To streamline this process, scientists often use machine learning to predict molecular properties and narrow down the molecules they need to synthesize and test in the lab.

    Researchers from MIT and the MIT-IBM Watson AI Lab have developed a new, unified framework that can simultaneously predict molecular properties and generate new molecules much more efficiently than popular deep-learning approaches.

    To teach a machine-learning model to predict a molecule’s biological or mechanical properties, researchers must show it millions of labeled molecular structures — a process known as training. Due to the expense of discovering molecules and the challenges of hand-labeling millions of structures, large training datasets are often hard to come by, which limits the effectiveness of machine-learning approaches.

    By contrast, the system created by the MIT researchers can effectively predict molecular properties using only a small amount of data. Their system has an underlying understanding of the rules that dictate how building blocks combine to produce valid molecules. These rules capture the similarities between molecular structures, which helps the system generate new molecules and predict their properties in a data-efficient manner.

    This method outperformed other machine-learning approaches on both small and large datasets, and was able to accurately predict molecular properties and generate viable molecules when given a dataset with fewer than 100 samples.

    “Our goal with this project is to use some data-driven methods to speed up the discovery of new molecules, so you can train a model to do the prediction without all of these cost-heavy experiments,” says lead author Minghao Guo, an electrical engineering and computer science (EECS) graduate student.

    Guo’s co-authors include MIT-IBM Watson AI Lab research staff members Veronika Thost, Payel Das, and Jie Chen; recent MIT graduates Samuel Song ’23 and Adithya Balachandran ’23; and senior author Wojciech Matusik, a professor of electrical engineering and computer science and a member of the MIT-IBM Watson AI Lab, who leads the Computational Design and Fabrication Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Machine Learning.

    Learning the language of molecules

    To achieve the best results with machine-learning models, scientists need training datasets with millions of molecules that have similar properties to those they hope to discover. In reality, these domain-specific datasets are usually very small. So, researchers use models that have been pretrained on large datasets of general molecules, which they apply to a much smaller, targeted dataset. However, because these models haven’t acquired much domain-specific knowledge, they tend to perform poorly.

    The MIT team took a different approach. They created a machine-learning system that automatically learns the “language” of molecules — what is known as a molecular grammar — using only a small, domain-specific dataset. It uses this grammar to construct viable molecules and predict their properties.

    In language theory, one generates words, sentences, or paragraphs based on a set of grammar rules. You can think of a molecular grammar the same way. It is a set of production rules that dictate how to generate molecules or polymers by combining atoms and substructures.

    Just like a language grammar, which can generate a plethora of sentences using the same rules, one molecular grammar can represent a vast number of molecules. Molecules with similar structures use the same grammar production rules, and the system learns to understand these similarities.
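
    In miniature, a production-rule grammar looks like this. The single rule below generates linear carbon chains and is a hypothetical stand-in for the much richer learned grammars described here; the names and symbols are illustrative only.

```python
import random

# Toy grammar: a CHAIN is a carbon followed by a chain, or a single carbon.
RULES = {"CHAIN": [["C", "CHAIN"], ["C"]]}

def generate(symbol="CHAIN", rng=None, max_depth=10):
    """Expand production rules to generate a molecule-like token sequence."""
    rng = rng or random.Random(0)
    if symbol not in RULES:
        return [symbol]                      # terminal symbol (an atom)
    options = RULES[symbol]
    rule = rng.choice(options) if max_depth > 0 else options[-1]  # force termination
    tokens = []
    for s in rule:
        tokens.extend(generate(s, rng, max_depth - 1))
    return tokens

molecule = generate()   # a short chain of "C" tokens
```

    Every string this grammar can produce is a valid chain by construction, which is the sense in which grammar rules guarantee viable molecules and encode structural similarity.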

    Since structurally similar molecules often have similar properties, the system uses its underlying knowledge of molecular similarity to predict properties of new molecules more efficiently. 

    “Once we have this grammar as a representation for all the different molecules, we can use it to boost the process of property prediction,” Guo says.

    The system learns the production rules for a molecular grammar using reinforcement learning — a trial-and-error process where the model is rewarded for behavior that gets it closer to achieving a goal.

    But because there could be billions of ways to combine atoms and substructures, the process to learn grammar production rules would be too computationally expensive for anything but the tiniest dataset.

    The researchers decoupled the molecular grammar into two parts. The first part, called a metagrammar, is a general, widely applicable grammar they design manually and give the system at the outset. Then it only needs to learn a much smaller, molecule-specific grammar from the domain dataset. This hierarchical approach speeds up the learning process.

    Big results, small datasets

    In experiments, the researchers’ new system simultaneously generated viable molecules and polymers, and predicted their properties more accurately than several popular machine-learning approaches, even when the domain-specific datasets had only a few hundred samples. Some other methods also required a costly pretraining step that the new system avoids.

    The technique was especially effective at predicting physical properties of polymers, such as the glass transition temperature, the temperature at which a material transitions from a hard, glassy state to a soft, rubbery one. Obtaining this information manually is often extremely costly because the experiments require very high temperatures and pressures.

    To push their approach further, the researchers cut one training set down by more than half — to just 94 samples. Their model still achieved results that were on par with methods trained using the entire dataset.

    “This grammar-based representation is very powerful. And because the grammar itself is a very general representation, it can be deployed to different kinds of graph-form data. We are trying to identify other applications beyond chemistry or material science,” Guo says.

    In the future, they also want to extend their current molecular grammar to include the 3D geometry of molecules and polymers, which is key to understanding the interactions between polymer chains. They are also developing an interface that would show a user the learned grammar production rules and solicit feedback to correct rules that may be wrong, boosting the accuracy of the system.

    This work is funded, in part, by the MIT-IBM Watson AI Lab and its member company, Evonik.

  • Educating national security leaders on artificial intelligence

    Understanding artificial intelligence and how it relates to matters of national security has become a top priority for military and government leaders in recent years. A new three-day custom program entitled “Artificial Intelligence for National Security Leaders” — AI4NSL for short — aims to educate leaders who may not have a technical background on the basics of AI, machine learning, and data science, and how these topics intersect with national security.

    “National security fundamentally is about two things: getting information out of sensors and processing that information. These are two things that AI excels at. The AI4NSL class engages national security leaders in understanding how to navigate the benefits and opportunities that AI affords, while also understanding its potential negative consequences,” says Aleksander Madry, the Cadence Design Systems Professor at MIT and one of the course’s faculty directors.

    Organized jointly by MIT’s School of Engineering, MIT Stephen A. Schwarzman College of Computing, and MIT Sloan Executive Education, AI4NSL wrapped up its fifth cohort in April. The course brings leaders from every branch of the U.S. military, as well as some foreign military leaders from NATO, to MIT’s campus, where they learn from faculty experts on a variety of technical topics in AI, as well as how to navigate organizational challenges that arise in this context.

    Video: AI for National Security Leaders | MIT Sloan Executive Education

    “We set out to put together a real executive education class on AI for senior national security leaders,” says Madry. “For three days, we are teaching these leaders not only an understanding of what this technology is about, but also how to best adopt these technologies organizationally.”

    The original idea sprang from discussions with senior U.S. Air Force (USAF) leaders and members of the Department of the Air Force (DAF)-MIT AI Accelerator in 2019.

    According to Major John Radovan, deputy director of the DAF-MIT AI Accelerator, in recent years it has become clear that national security leaders needed a deeper understanding of AI technologies and their implications for security, warfare, and military operations. In February 2020, Radovan and his team at the DAF-MIT AI Accelerator started building a custom course to help guide senior leaders in their discussions about AI.

    “This is the only course out there that is focused on AI specifically for national security,” says Radovan. “We didn’t want to make this course just for members of the Air Force — it had to be for all branches of the military. If we are going to operate as a joint force, we need to have the same vocabulary and the same mental models about how to use this technology.”

    After a pilot program in collaboration with MIT Open Learning and the MIT Computer Science and Artificial Intelligence Laboratory, Radovan connected with faculty at the School of Engineering and MIT Schwarzman College of Computing, including Madry, to develop the course further. They enlisted colleagues and faculty at MIT Sloan Executive Education to refine the curriculum and tailor the content to its audience. The result of this cross-school collaboration was a new iteration of AI4NSL, which launched last summer.

    In addition to providing participants with a basic overview of AI technologies, the course places a heavy emphasis on organizational planning and implementation.

    “What we wanted to do was to create smart consumers at the command level. The idea was to present this content at a higher level so that people could understand the key frameworks, which will guide their thinking around the use and adoption of this material,” says Roberto Fernandez, the William F. Pounds Professor of Management, an AI4NSL instructor, and the course’s other faculty director.

    During the three-day course, instructors from MIT’s Department of Electrical Engineering and Computer Science, Department of Aeronautics and Astronautics, and MIT Sloan School of Management cover a wide range of topics.

    The first half of the course starts with a basic overview of concepts including AI, machine learning, deep learning, and the role of data. Instructors also present the problems and pitfalls of using AI technologies, including the potential for adversarial manipulation of machine learning systems, privacy challenges, and ethical considerations.

    In the middle of day two, the course shifts to examine the organizational perspective, encouraging participants to consider how to effectively implement these technologies in their own units.

    “What’s exciting about this course is the way it is formatted first in terms of understanding AI, machine learning, what data is, and how data feeds AI, and then giving participants a framework to go back to their units and build a strategy to make this work,” says Colonel Michelle Goyette, director of the Army Strategic Education Program at the Army War College and an AI4NSL participant.

    Throughout the course, breakout sessions provide participants with an opportunity to collaborate and problem-solve on an exercise together. These breakout sessions build upon one another as the participants are exposed to new concepts related to AI.

    “The breakout sessions have been distinctive because they force you to establish relationships with people you don’t know, so the networking aspect is key. Any time you can do more than receive information and actually get into the application of what you were taught, that really enhances the learning environment,” says Lieutenant General Brian Robinson, the commander of Air Education and Training Command for the USAF and an AI4NSL participant.

    This spirit of teamwork, collaboration, and bringing together individuals from different backgrounds permeates the three-day program. The AI4NSL classroom not only brings together national security leaders from all branches of the military, it also brings together faculty from three schools across MIT.

    “One of the things that’s most exciting about this program is the kind of overarching theme of collaboration,” says Rob Dietel, director of executive programs at Sloan School of Management. “We’re not drawing just from the MIT Sloan faculty, we’re bringing in top faculty from the Schwarzman College of Computing and the School of Engineering. It’s wonderful to be able to tap into those resources that are here on MIT’s campus to really make it the most impactful program that we can.”

    As new developments in generative AI, such as ChatGPT, and machine learning alter the national security landscape, the organizers at AI4NSL will continue to update the curriculum to ensure it is preparing leaders to understand the implications for their respective units.

    “The rate of change for AI and national security is so fast right now that it’s challenging to keep up, and that’s part of the reason we’ve designed this program. We’ve brought in some of our world-class faculty from different parts of MIT to really address the changing dynamic of AI,” adds Dietel.

  • Researchers teach an AI to write better chart captions

    Chart captions that explain complex trends and patterns are important for improving a reader’s ability to comprehend and retain the data being presented. And for people with visual disabilities, the information in a caption often provides their only means of understanding the chart.

    But writing effective, detailed captions is a labor-intensive process. While autocaptioning techniques can alleviate this burden, they often struggle to describe cognitive features that provide additional context.

    To help people author high-quality chart captions, MIT researchers have developed a dataset to improve automatic captioning systems. Using this tool, researchers could teach a machine-learning model to vary the level of complexity and type of content included in a chart caption based on the needs of users.

    The MIT researchers found that machine-learning models trained for autocaptioning with their dataset consistently generated captions that were precise, semantically rich, and described data trends and complex patterns. Quantitative and qualitative analyses revealed that their models captioned charts more effectively than other autocaptioning systems.  

    The team’s goal is to provide the dataset, called VisText, as a tool researchers can use as they work on the thorny problem of chart autocaptioning. These automatic systems could help provide captions for uncaptioned online charts and improve accessibility for people with visual disabilities, says co-lead author Angie Boggust, a graduate student in electrical engineering and computer science at MIT and member of the Visualization Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

    “We’ve tried to embed a lot of human values into our dataset so that when we and other researchers are building automatic chart-captioning systems, we don’t end up with models that aren’t what people want or need,” she says.

    Boggust is joined on the paper by co-lead author and fellow graduate student Benny J. Tang and senior author Arvind Satyanarayan, associate professor of computer science at MIT who leads the Visualization Group in CSAIL. The research will be presented at the Annual Meeting of the Association for Computational Linguistics.

    Human-centered analysis

    The researchers were inspired to develop VisText from prior work in the Visualization Group that explored what makes a good chart caption. In that study, researchers found that sighted users and blind or low-vision users had different preferences for the complexity of semantic content in a caption. 

    The group wanted to bring that human-centered analysis into autocaptioning research. To do that, they developed VisText, a dataset of charts and associated captions that could be used to train machine-learning models to generate accurate, semantically rich, customizable captions.

    Developing effective autocaptioning systems is no easy task. Existing machine-learning methods often try to caption charts the way they would an image, but people and models interpret natural images differently from how we read charts. Other techniques skip the visual content entirely and caption a chart using its underlying data table. However, such data tables are often not available after charts are published.

    Given the shortfalls of using images and data tables, VisText also represents charts as scene graphs. Scene graphs, which can be extracted from a chart image, contain all the chart data but also include additional image context.

    “A scene graph is like the best of both worlds — it contains almost all the information present in an image while being easier to extract from images than data tables. As it’s also text, we can leverage advances in modern large language models for captioning,” Tang explains.

    They compiled a dataset that contains more than 12,000 charts — each represented as a data table, image, and scene graph — as well as associated captions. Each chart has two separate captions: a low-level caption that describes the chart’s construction (like its axis ranges) and a higher-level caption that describes statistics, relationships in the data, and complex trends.
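As a concrete illustration, a single VisText-style record might bundle the three representations with its two caption levels. The field names and the scene-graph serialization below are illustrative assumptions, not the dataset's actual schema:

```python
# A minimal sketch of one chart record with three representations and two
# caption levels. Field names and formats are hypothetical, for illustration.
record = {
    "image_path": "charts/0001.png",
    "data_table": [
        {"year": 2019, "value": 12.4},
        {"year": 2020, "value": 15.1},
        {"year": 2021, "value": 18.9},
    ],
    # Scene graph: a textual hierarchy extracted from the chart image that
    # retains the chart data plus visual context (marks, axes, title).
    "scene_graph": (
        "chart > title 'Revenue' | "
        "x-axis 'year' [2019..2021] | "
        "y-axis 'value' [0..20] | "
        "marks: bar(2019,12.4) bar(2020,15.1) bar(2021,18.9)"
    ),
    # Two captions per chart: low-level construction vs. higher-level trends.
    "caption_low": "A bar chart of value by year; the y-axis ranges from 0 to 20.",
    "caption_high": "Revenue rises steadily from 12.4 in 2019 to 18.9 in 2021.",
}

def to_model_input(rec, use="scene_graph"):
    """Serialize the chosen chart representation to text for a language model."""
    if use == "scene_graph":
        return rec["scene_graph"]
    rows = "; ".join(f"{r['year']}: {r['value']}" for r in rec["data_table"])
    return f"table: {rows}"

print(to_model_input(record))
```

Because the scene graph is already text, it can be fed directly to a text-to-text model, which is the advantage Tang describes above.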

    The researchers generated low-level captions using an automated system and crowdsourced higher-level captions from human workers.

    “Our captions were informed by two key pieces of prior research: existing guidelines on accessible descriptions of visual media and a conceptual model from our group for categorizing semantic content. This ensured that our captions featured important low-level chart elements like axes, scales, and units for readers with visual disabilities, while retaining human variability in how captions can be written,” says Tang.

    Translating charts

    Once they had gathered chart images and captions, the researchers used VisText to train five machine-learning models for autocaptioning. They wanted to see how each representation — image, data table, and scene graph — and combinations of the representations affected the quality of the caption.

    “You can think about a chart captioning model like a model for language translation. But instead of saying, translate this German text to English, we are saying translate this ‘chart language’ to English,” Boggust says.

    Their results showed that models trained with scene graphs performed as well or better than those trained using data tables. Since scene graphs are easier to extract from existing charts, the researchers argue that they might be a more useful representation.

    They also trained models with low-level and high-level captions separately. This technique, known as semantic prefix tuning, enabled them to teach the model to vary the complexity of the caption’s content.
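A toy sketch of this idea, assuming illustrative prefix tokens rather than the paper's actual ones: a control prefix prepended to the serialized chart tells the model which caption level to produce, so each chart contributes one training pair per level.

```python
# Sketch of semantic prefix tuning for caption-level control.
# The prefix tokens "<L1>"/"<L2L3>" are hypothetical placeholders.

def make_training_pair(chart_text, caption, level):
    """Build one (input, target) pair; the prefix conditions the caption level."""
    prefix = {"low": "<L1>", "high": "<L2L3>"}[level]
    return f"{prefix} {chart_text}", caption

chart = "x-axis 'year'; y-axis 'value'; bars 2019-2021"

# One chart, two training pairs — one per caption level:
inp_low, tgt_low = make_training_pair(
    chart, "A bar chart of value by year.", level="low")
inp_high, tgt_high = make_training_pair(
    chart, "Values rise steadily from 2019 to 2021.", level="high")

# At inference time, the user picks the prefix to control caption complexity.
print(inp_low)
print(inp_high)
```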

    In addition, they conducted a qualitative examination of captions produced by their best-performing method and categorized six types of common errors. For instance, a directional error occurs if a model says a trend is decreasing when it is actually increasing.

    This fine-grained, robust qualitative evaluation was important for understanding how the model was making its errors. For example, using quantitative methods, a directional error might incur the same penalty as a repetition error, where the model repeats the same word or phrase. But a directional error could be more misleading to a user than a repetition error. The qualitative analysis helped them understand these types of subtleties, Boggust says.
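A directional error of this kind could in principle be flagged automatically by comparing the trend a caption claims against the actual slope of the data. The keyword-matching sketch below is a hypothetical illustration, not the researchers' evaluation method:

```python
# Hypothetical check for one of the six error categories: directional errors.

def data_direction(values):
    """Direction of the underlying data series (first point vs. last)."""
    return "increasing" if values[-1] > values[0] else "decreasing"

def stated_direction(caption):
    """Very rough keyword match for the trend a caption claims, if any."""
    text = caption.lower()
    if "increas" in text or "ris" in text:
        return "increasing"
    if "decreas" in text or "fall" in text:
        return "decreasing"
    return None  # caption makes no directional claim

def has_directional_error(caption, values):
    stated = stated_direction(caption)
    return stated is not None and stated != data_direction(values)

values = [12.4, 15.1, 18.9]  # an increasing series
print(has_directional_error("Values fall over time.", values))   # directional error
print(has_directional_error("Values rise steadily.", values))    # consistent caption
```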

    These sorts of errors also expose limitations of current models and raise ethical considerations that researchers must consider as they work to develop autocaptioning systems, she adds.

    Generative machine-learning models, such as those that power ChatGPT, have been shown to hallucinate or give incorrect information that can be misleading. While there is a clear benefit to using these models for autocaptioning existing charts, it could lead to the spread of misinformation if charts are captioned incorrectly.

    “Maybe this means that we don’t just caption everything in sight with AI. Instead, perhaps we provide these autocaptioning systems as authorship tools for people to edit. It is important to think about these ethical implications throughout the research process, not just at the end when we have a model to deploy,” she says.

    Boggust, Tang, and their colleagues want to continue optimizing the models to reduce some common errors. They also want to expand the VisText dataset to include more charts, and more complex charts, such as those with stacked bars or multiple lines. And they would also like to gain insights into what these autocaptioning models are actually learning about chart data.

    This research was supported, in part, by a Google Research Scholar Award, the National Science Foundation, the MLA@CSAIL Initiative, and the United States Air Force Research Laboratory.

  • Day of AI curriculum meets the moment

    MIT Responsible AI for Social Empowerment and Education (RAISE) recently celebrated the second annual Day of AI with two flagship local events. The Edward M. Kennedy Institute for the U.S. Senate in Boston hosted a human rights and data policy-focused event that was streamed worldwide. Dearborn STEM Academy in Roxbury, Massachusetts, hosted a student workshop in collaboration with Amazon Future Engineer. With over 8,000 registrations across all 50 U.S. states and 108 countries in 2023, participation in Day of AI has more than doubled since its inaugural year.

    Day of AI is a free curriculum of lessons and hands-on activities designed to teach kids of all ages and backgrounds the basics and responsible use of artificial intelligence, designed by researchers at MIT RAISE. This year, resources were available for educators to run at any time and in any increments they chose. The curriculum included five new modules to address timely topics like ChatGPT in School, Teachable Machines, AI and Social Media, Data Science and Me, and more. A collaboration with the International Society for Technology in Education also introduced modules for early elementary students. Educators across the world shared photos, videos, and stories of their students’ engagement, expressing excitement and even relief over the accessible lessons.

    Professor Cynthia Breazeal, director of RAISE, dean for digital learning at MIT, and head of the MIT Media Lab’s Personal Robots research group, said, “It’s been a year of extraordinary advancements in AI, and with that comes necessary conversations and concerns about who and what this technology is for. With our Day of AI events, we want to celebrate the teachers and students who are putting in the work to make sure that AI is for everyone.”

    Reflecting community values and protecting digital citizens

    On May 18, 2023, MIT RAISE hosted a global Day of AI celebration featuring a flagship local event focused on human rights and data policy at the Edward M. Kennedy Institute for the U.S. Senate. Students from the Warren Prescott Middle School and New Mission High School heard from speakers from the City of Boston, Liberty Mutual, and MIT about the many benefits and challenges of artificial intelligence education. Video: MIT Open Learning

    MIT President Sally Kornbluth welcomed students from Warren Prescott Middle School and New Mission High School to the Day of AI program at the Edward M. Kennedy Institute. Kornbluth reflected on the exciting potential of AI, along with the ethical considerations society needs to be responsible for.

    “AI has the potential to do all kinds of fantastic things, including driving a car, helping us with the climate crisis, improving health care, and designing apps that we can’t even imagine yet. But what we have to make sure it doesn’t do is cause harm to individuals, to communities, to us — society as a whole,” she said.

    This theme resonated with each of the event speakers, whose jobs spanned the sectors of education, government, and business. Yo Deshpande, technologist for the public realm, and Michael Lawrence Evans, program director of New Urban Mechanics in the Boston Mayor’s Office, shared how Boston thinks about using AI to improve city life in ways that are “equitable, accessible, and delightful.” Deshpande said, “We have the opportunity to explore not only how AI works, but how using AI can line up with our values, the way we want to be in the world, and the way we want to be in our community.”

    Adam L’Italien, chief innovation officer at Liberty Mutual Insurance (one of Day of AI’s founding sponsors), compared our present moment with AI technologies to the early days of personal computers and internet connection. “Exposure to emerging technologies can accelerate progress in the world and in your own lives,” L’Italien said, while recognizing that the AI development process needs to be inclusive and mitigate biases.

    Human policies for artificial intelligence

    So how does society address these human rights concerns about AI? Marc Aidinoff ’21, former White House Office of Science and Technology Policy chief of staff, led a discussion on how government policy can influence the parameters of how technology is developed and used, like the Blueprint for an AI Bill of Rights. Aidinoff said, “The work of building the world you want to see is far harder than building the technical AI system … How do you work with other people and create a collective vision for what we want to do?” Warren Prescott Middle School students described how AI could be used to solve problems that humans couldn’t. But they also shared their concerns that AI could affect data privacy, learning deficits, social media addiction, job displacement, and propaganda.

    In a mock U.S. Senate trial activity designed by Daniella DiPaola, PhD student at the MIT Media Lab, the middle schoolers investigated what rights might be undermined by AI in schools, hospitals, law enforcement, and corporations. Meanwhile, New Mission High School students workshopped the ideas behind bill S.2314, the Social Media Addiction Reduction Technology (SMART) Act, in an activity designed by Raechel Walker, graduate research assistant in the Personal Robots Group, and Matt Taylor, research assistant at the Media Lab. They discussed what level of control could or should be introduced at the parental, educational, and governmental levels to reduce the risks of internet addiction.

    “Alexa, how do I program AI?”

    The 2023 Day of AI celebration featured a flagship local event at the Dearborn STEM Academy in Roxbury in collaboration with Amazon Future Engineer. Students participated in a hands-on activity using MIT App Inventor as part of Day of AI’s Alexa lesson. Video: MIT Open Learning

    At Dearborn STEM Academy, Amazon Future Engineer helped students work through the Intro to Voice AI curriculum module in real time. Students used MIT App Inventor to code basic commands for Alexa. In an interview with WCVB, Principal Darlene Marcano said, “It’s important that we expose our students to as many different experiences as possible. The students that are participating are on track to be future computer scientists and engineers.”

    Breazeal told Dearborn students, “We want you to have an informed voice about how you want AI to be used in society. We want you to feel empowered that you can shape the world. You can make things with AI to help make a better world and a better community.”

    Rohit Prasad ’08, senior vice president and head scientist for Alexa at Amazon, and Victor Reinoso ’97, global director of philanthropic education initiatives at Amazon, also joined the event. “Amazon and MIT share a commitment to helping students discover a world of possibilities through STEM and AI education,” said Reinoso. “There’s a lot of current excitement around the technological revolution with generative AI and large language models, so we’re excited to help students explore careers of the future and navigate the pathways available to them.” To highlight their continued investment in the local community and the school program, Amazon donated a $25,000 Innovation and Early College Pathways Program Grant to the Boston Public School system.

    Day of AI down under

    Not only was the Day of AI program widely adopted across the globe, Australian educators were inspired to adapt their own regionally specific curriculum. An estimated 161,000 AI professionals will be needed in Australia by 2030, according to the National Artificial Intelligence Center in the Commonwealth Scientific and Industrial Research Organization (CSIRO), an Australian government agency and Day of AI Australia project partner. CSIRO worked with the University of New South Wales to develop supplementary educational resources on AI ethics and machine learning. Day of AI Australia reached 85,000 students at 400-plus secondary schools this year, sparking curiosity in the next generation of AI experts.

    The interest in AI is accelerating as fast as the technology is being developed. Day of AI offers a unique opportunity for K-12 students to shape our world’s digital future and their own.

    “I hope that some of you will decide to be part of this bigger effort to help us figure out the best possible answers to questions that are raised by AI,” Kornbluth told students at the Edward M. Kennedy Institute. “We’re counting on you, the next generation, to learn how AI works and help make sure it’s for everyone.”