More stories

  • Investigating at the interface of data science and computing

    A visual model of Guy Bresler’s research would probably look something like a Venn diagram. He works at the four-way intersection where theoretical computer science, statistics, probability, and information theory collide.

    “There are always new things to be done at the interface. There are always opportunities for entirely new questions to ask,” says Bresler, an associate professor who recently earned tenure in MIT’s Department of Electrical Engineering and Computer Science (EECS).

    A theoretician, he aims to understand the delicate interplay between structure in data, the complexity of models, and the amount of computation needed to learn those models. Recently, his biggest focus has been trying to unveil fundamental phenomena that are broadly responsible for determining the computational complexity of statistics problems — and finding the “sweet spot” where available data and computation resources enable researchers to effectively solve a problem.

    When trying to solve a complex statistics problem, there is often a tug-of-war between data and computation. Without enough data, the computation needed to solve a statistical problem can be intractable, or at least consume a staggering amount of resources. But get just enough data and suddenly the intractable becomes solvable; the amount of computation needed to come up with a solution drops dramatically.

    The majority of modern statistical problems exhibit this sort of trade-off between computation and data, with applications ranging from drug development to weather prediction. Another well-studied and practically important example is cryo-electron microscopy, Bresler says. With this technique, researchers use an electron microscope to take images of molecules in different orientations. The central challenge is how to solve the inverse problem — determining the molecule’s structure given the noisy data. Many statistical problems can be formulated as inverse problems of this sort.

    One aim of Bresler’s work is to elucidate relationships between the wide variety of different statistics problems currently being studied. The dream is to classify statistical problems into equivalence classes, as has been done for other types of computational problems in the field of computational complexity. Showing these sorts of relationships means that, instead of trying to understand each problem in isolation, researchers can transfer their understanding from a well-studied problem to a poorly understood one, he says.

    Adopting a theoretical approach

    For Bresler, a desire to theoretically understand various basic phenomena inspired him to follow a path into academia.

    Both of his parents worked as professors and showed how fulfilling academia can be, he says. His earliest introduction to the theoretical side of engineering came from his father, who is an electrical engineer and theoretician studying signal processing. Bresler was inspired by his work from an early age. As an undergraduate at the University of Illinois at Urbana-Champaign, he bounced between physics, math, and computer science courses. But no matter the topic, he gravitated toward the theoretical viewpoint.

    In graduate school at the University of California at Berkeley, Bresler enjoyed the opportunity to work on a wide variety of topics spanning probability, theoretical computer science, and mathematics. His driving motivator was a love of learning new things.

    “Working at the interface of multiple fields with new questions, there is a feeling that one had better learn as much as possible if one is to have any chance of finding the right tools to answer those questions,” he says.

    That curiosity led him to MIT for a postdoc in the Laboratory for Information and Decision Systems (LIDS) in 2013, and then he joined the faculty two years later as an assistant professor in EECS. He was named an associate professor in 2019.

    Bresler says he was drawn to the intellectual atmosphere at MIT, as well as the supportive environment for launching bold research quests and trying to make progress in new areas of study.

    Opportunities for collaboration

    “What really struck me was how vibrant and energetic and collaborative MIT is. I have this mental list of more than 20 people here who I would love to have lunch with every single week and collaborate with on research. So just based on sheer numbers, joining MIT was a clear win,” he says.

    He’s especially enjoyed collaborating with his students, who continually teach him new things and ask deep questions that drive exciting research projects. One such student, Matthew Brennan, who was one of Bresler’s closest collaborators, tragically and unexpectedly passed away in January, 2021.

    The shock from Brennan’s death is still raw for Bresler, and it derailed his research for a time.

    “Beyond his own prodigious capabilities and creativity, he had this amazing ability to listen to an idea of mine that was almost completely wrong, extract from it a useful piece, and then pass the ball back,” he says. “We had the same vision for what we wanted to achieve in the work, and we were driven to try to tell a certain story. At the time, almost nobody was pursuing this particular line of work, and it was in a way kind of lonely. But he trusted me, and we encouraged one another to keep at it when things seemed bleak.”

    Those lessons in perseverance fuel Bresler as he and his students continue exploring questions that, by their nature, are difficult to answer.

    One area he’s worked in on and off for over a decade involves learning graphical models from data. Models of certain types of data, such as time-series data consisting of temperature readings, are often constructed by domain experts who have relevant knowledge and can build a reasonable model, he explains.

    But for many types of data with complex dependencies, such as social network or biological data, it is not at all clear what structure a model should take. Bresler’s work seeks to estimate a structured model from data, which could then be used for downstream applications like making recommendations or better predicting the weather.

    The basic question of identifying good models, whether algorithmically in a complex setting or analytically by specifying a useful toy model for theoretical analysis, connects the abstract work with engineering practice, he says.

    “In general, modeling is an art. Real life is complicated and if you write down some super-complicated model that tries to capture every feature of a problem, it is doomed,” says Bresler. “You have to think about the problem and understand the practical side of things on some level to identify the correct features of the problem to be modeled, so that you can hope to actually solve it and gain insight into what one should do in practice.”

    Outside the lab, Bresler often finds himself solving very different kinds of problems. He is an avid rock climber and spends much of his free time bouldering throughout New England.

    “I really love it. It is a good excuse to get outside and get sucked into a whole different world. Even though there is problem solving involved, and there are similarities at the philosophical level, it is totally orthogonal to sitting down and doing math,” he says.

  • Costis Daskalakis appointed inaugural Avanessians Professor in the MIT Schwarzman College of Computing

    The MIT Stephen A. Schwarzman College of Computing has named Costis Daskalakis as the inaugural holder of the Avanessians Professorship. His chair began on July 1.

    Daskalakis is the first person appointed to this position generously endowed by Armen Avanessians ’81. Established in the MIT Schwarzman College of Computing, the new chair provides Daskalakis with additional support to pursue his research and develop his career.

    “I’m delighted to recognize Costis for his scholarship and extraordinary achievements with this distinguished professorship,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

    A professor in the MIT Department of Electrical Engineering and Computer Science, Daskalakis is a theoretical computer scientist who works at the interface of game theory, economics, probability theory, statistics, and machine learning. He has resolved long-standing open problems about the computational complexity of the Nash equilibrium, the mathematical structure and computational complexity of multi-item auctions, and the behavior of machine-learning methods such as the expectation-maximization algorithm. He has obtained computationally and statistically efficient methods for statistical hypothesis testing and learning in high-dimensional settings, as well as results characterizing the structure and concentration properties of high-dimensional distributions. His current work focuses on multi-agent learning, learning from biased and dependent data, causal inference, and econometrics.

    A native of Greece, Daskalakis joined the MIT faculty in 2009. He is a member of the Computer Science and Artificial Intelligence Laboratory and is affiliated with the Laboratory for Information and Decision Systems and the Operations Research Center. He is also an investigator in the Foundations of Data Science Institute.

    He has previously received such honors as the 2018 Nevanlinna Prize from the International Mathematical Union, the 2018 ACM Grace Murray Hopper Award, the Kalai Game Theory and Computer Science Prize from the Game Theory Society, and the 2008 ACM Doctoral Dissertation Award.

  • Building explainability into the components of machine-learning models

    Explanation methods that help users understand and trust machine-learning models often describe how much certain features used in the model contribute to its prediction. For example, if a model predicts a patient’s risk of developing cardiac disease, a physician might want to know how strongly the patient’s heart rate data influences that prediction.

    But if those features are so complex or convoluted that the user can’t understand them, does the explanation method do any good?

    MIT researchers are striving to improve the interpretability of features so decision makers will be more comfortable using the outputs of machine-learning models. Drawing on years of field work, they developed a taxonomy to help developers craft features that will be easier for their target audience to understand.

    “We found that out in the real world, even though we were using state-of-the-art ways of explaining machine-learning models, there is still a lot of confusion stemming from the features, not from the model itself,” says Alexandra Zytek, an electrical engineering and computer science PhD student and lead author of a paper introducing the taxonomy.

    To build the taxonomy, the researchers defined properties that make features interpretable for five types of users, from artificial intelligence experts to the people affected by a machine-learning model’s prediction. They also offer instructions for how model creators can transform features into formats that will be easier for a layperson to comprehend.

    They hope their work will inspire model builders to consider using interpretable features from the beginning of the development process, rather than trying to work backward and focus on explainability after the fact.

    MIT co-authors include Dongyu Liu, a postdoc; visiting professor Laure Berti-Équille, research director at IRD; and senior author Kalyan Veeramachaneni, principal research scientist in the Laboratory for Information and Decision Systems (LIDS) and leader of the Data to AI group. They are joined by Ignacio Arnaldo, a principal data scientist at Corelight. The research is published in the June edition of the Association for Computing Machinery Special Interest Group on Knowledge Discovery and Data Mining’s peer-reviewed Explorations Newsletter.

    Real-world lessons

    Features are input variables that are fed to machine-learning models; they are usually drawn from the columns in a dataset. Data scientists typically select and handcraft features for the model, and they mainly focus on ensuring features are developed to improve model accuracy, not on whether a decision-maker can understand them, Veeramachaneni explains.

    For several years, he and his team have worked with decision makers to identify machine-learning usability challenges. These domain experts, most of whom lack machine-learning knowledge, often don’t trust models because they don’t understand the features that influence predictions.

    For one project, they partnered with clinicians in a hospital ICU who used machine learning to predict the risk a patient will face complications after cardiac surgery. Some features were presented as aggregated values, like the trend of a patient’s heart rate over time. While features coded this way were “model ready” (the model could process the data), clinicians didn’t understand how they were computed. They would rather see how these aggregated features relate to original values, so they could identify anomalies in a patient’s heart rate, Liu says.

    By contrast, a group of learning scientists preferred features that were aggregated. Instead of having a feature like “number of posts a student made on discussion forums,” they would rather have related features grouped together and labeled with terms they understood, like “participation.”

    “With interpretability, one size doesn’t fit all. When you go from area to area, there are different needs. And interpretability itself has many levels,” Veeramachaneni says.

    The idea that one size doesn’t fit all is key to the researchers’ taxonomy. They define properties that can make features more or less interpretable for different decision makers and outline which properties are likely most important to specific users.

    For instance, machine-learning developers might focus on having features that are compatible with the model and predictive, meaning they are expected to improve the model’s performance.

    On the other hand, decision makers with no machine-learning experience might be better served by features that are human-worded, meaning they are described in a way that is natural for users, and understandable, meaning they refer to real-world metrics users can reason about.

    “The taxonomy says, if you are making interpretable features, to what level are they interpretable? You may not need all levels, depending on the type of domain experts you are working with,” Zytek says.

    Putting interpretability first

    The researchers also outline feature engineering techniques a developer can employ to make features more interpretable for a specific audience.

    Feature engineering is a process in which data scientists transform data into a format machine-learning models can process, using techniques like aggregating data or normalizing values. Most models also can’t process categorical data unless they are converted to a numerical code. These transformations are often nearly impossible for laypeople to unpack.

    Creating interpretable features might involve undoing some of that encoding, Zytek says. For instance, a common feature engineering technique organizes spans of data so they all contain the same number of years. To make these features more interpretable, one could group age ranges using human terms, like infant, toddler, child, and teen. Or rather than using a transformed feature like average pulse rate, an interpretable feature might simply be the actual pulse rate data, Liu adds.
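
    As a concrete illustration of that kind of transformation, here is a minimal sketch that bins ages into human-readable groups; the bin edges, the labels, and the use of pandas are assumptions for illustration, not details from the paper.

    ```python
    import pandas as pd

    # Illustrative sketch: bin edges and labels are assumptions, not from the paper.
    ages = pd.Series([0.5, 2, 7, 14, 1, 9], name="age_years")

    # "Model ready" but opaque: equal-width numeric bins a model can consume directly.
    model_ready = pd.cut(ages, bins=4, labels=False)

    # Interpretable: the same information expressed in terms a clinician or screener
    # already uses.
    interpretable = pd.cut(ages, bins=[0, 1, 3, 12, 19],
                           labels=["infant", "toddler", "child", "teen"])

    print(pd.DataFrame({"age": ages, "model_ready": model_ready,
                        "interpretable": interpretable}))
    ```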

    “In a lot of domains, the tradeoff between interpretable features and model accuracy is actually very small. When we were working with child welfare screeners, for example, we retrained the model using only features that met our definitions for interpretability, and the performance decrease was almost negligible,” Zytek says.

    Building off this work, the researchers are developing a system that enables a model developer to handle complicated feature transformations in a more efficient manner, to create human-centered explanations for machine-learning models. This new system will also convert algorithms designed to explain model-ready datasets into formats that can be understood by decision makers.

  • Living better with algorithms

    Laboratory for Information and Decision Systems (LIDS) student Sarah Cen remembers the lecture that sent her down the track to an upstream question.

    At a talk on ethical artificial intelligence, the speaker brought up a variation on the famous trolley problem, which outlines a philosophical choice between two undesirable outcomes.

    The speaker’s scenario: Say a self-driving car is traveling down a narrow alley with an elderly woman walking on one side and a small child on the other, and no way to thread between both without a fatality. Who should the car hit?

    Then the speaker said: Let’s take a step back. Is this the question we should even be asking?

    That’s when things clicked for Cen. Instead of considering the point of impact, a self-driving car could have avoided choosing between two bad outcomes by making a decision earlier on — the speaker pointed out that, when entering the alley, the car could have determined that the space was narrow and slowed to a speed that would keep everyone safe.

    Recognizing that today’s AI safety approaches often resemble the trolley problem, focusing on downstream regulation such as liability after someone is left with no good choices, Cen wondered: What if we could design better upstream and downstream safeguards against such problems? This question has informed much of Cen’s work.

    “Engineering systems are not divorced from the social systems on which they intervene,” Cen says. Ignoring this fact risks creating tools that fail to be useful when deployed or, more worryingly, that are harmful.

    Cen arrived at LIDS in 2018 via a slightly roundabout route. She first got a taste for research during her undergraduate degree at Princeton University, where she majored in mechanical engineering. For her master’s degree, she changed course, working on radar solutions in mobile robotics (primarily for self-driving cars) at Oxford University. There, she developed an interest in AI algorithms, curious about when and why they misbehave. So, she came to MIT and LIDS for her doctoral research, working with Professor Devavrat Shah in the Department of Electrical Engineering and Computer Science, for a stronger theoretical grounding in information systems.

    Auditing social media algorithms

    Together with Shah and other collaborators, Cen has worked on a wide range of projects during her time at LIDS, many of which tie directly to her interest in the interactions between humans and computational systems. In one such project, Cen studies options for regulating social media. Her recent work provides a method for translating human-readable regulations into implementable audits.

    To get a sense of what this means, suppose that regulators require that any public health content — for example, on vaccines — not be vastly different for politically left- and right-leaning users. How should auditors check that a social media platform complies with this regulation? Can a platform be made to comply with the regulation without damaging its bottom line? And how does compliance affect the actual content that users do see?

    Designing an auditing procedure is difficult in large part because there are so many stakeholders when it comes to social media. Auditors have to inspect the algorithm without accessing sensitive user data. They also have to work around tricky trade secrets, which can prevent them from getting a close look at the very algorithm that they are auditing because these algorithms are legally protected. Other considerations come into play as well, such as balancing the removal of misinformation with the protection of free speech.

    To meet these challenges, Cen and Shah developed an auditing procedure that does not need more than black-box access to the social media algorithm (which respects trade secrets), does not remove content (which avoids issues of censorship), and does not require access to users (which preserves users’ privacy).
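
    To make that concrete, the toy sketch below shows the flavor of such an audit: it queries a stand-in black-box recommender with synthetic left- and right-leaning profiles and compares their exposure to one content category. The recommender stub, the content category, and the pass/fail threshold are all invented for illustration; this is not the procedure from Cen and Shah’s paper.

    ```python
    import random

    random.seed(1)

    # Toy stand-in for the platform's recommender: the auditor can only call it,
    # never inspect it, and never touches real user data. The disparity is invented.
    def platform_feed(profile, n_items):
        p_health = 0.30 if profile == "left" else 0.10
        return ["vaccine_info" if random.random() < p_health else "other"
                for _ in range(n_items)]

    def audit(feed_fn, threshold=0.05, n_items=5000):
        """Compare exposure to a content category across synthetic user profiles."""
        share = {}
        for profile in ("left", "right"):
            feed = feed_fn(profile, n_items)
            share[profile] = feed.count("vaccine_info") / len(feed)
        gap = abs(share["left"] - share["right"])
        return gap <= threshold, share, gap

    passed, share, gap = audit(platform_feed)
    print(share, f"gap={gap:.3f}", "PASS" if passed else "FAIL")
    ```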

    In their design process, the team also analyzed the properties of their auditing procedure, finding that it ensures a desirable property they call decision robustness. As good news for the platform, they show that a platform can pass the audit without sacrificing profits. Interestingly, they also found the audit naturally incentivizes the platform to show users diverse content, which is known to help reduce the spread of misinformation, counteract echo chambers, and more.

    Who gets good outcomes and who gets bad ones?

    In another line of research, Cen looks at whether people can receive good long-term outcomes when they not only compete for resources, but also don’t know upfront what resources are best for them.

    Some platforms, such as job-search platforms or ride-sharing apps, are part of what is called a matching market, which uses an algorithm to match one set of individuals (such as workers or riders) with another (such as employers or drivers). In many cases, individuals have matching preferences that they learn through trial and error. In labor markets, for example, workers learn their preferences about what kinds of jobs they want, and employers learn their preferences about the qualifications they seek from workers.

    But learning can be disrupted by competition. If workers with a particular background are repeatedly denied jobs in tech because of high competition for tech jobs, for instance, they may never get the knowledge they need to make an informed decision about whether they want to work in tech. Similarly, tech employers may never see and learn what these workers could do if they were hired.
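
    The toy simulation below illustrates how that can happen under simplifying assumptions (two workers, one competitive “tech” slot, epsilon-greedy learning); it is not the model from Cen’s work.

    ```python
    import random

    random.seed(0)

    # Toy illustration: two workers repeatedly choose between a competitive "tech"
    # job with one slot and an uncontested "other" job, learning their own payoffs
    # only from the jobs they actually get.
    true_payoff = {"alice": {"tech": 0.9, "other": 0.5},
                   "bob":   {"tech": 0.8, "other": 0.5}}   # bob would in fact prefer tech
    priority = ["alice", "bob"]                            # the employer prefers alice

    est = {w: {"tech": 0.0, "other": 0.0} for w in true_payoff}
    counts = {w: {"tech": 0, "other": 0} for w in true_payoff}

    def choose(worker, eps=0.1):
        if random.random() < eps:                          # occasional exploration
            return random.choice(["tech", "other"])
        return max(est[worker], key=est[worker].get)       # otherwise exploit estimates

    for _ in range(5000):
        applications = {w: choose(w) for w in true_payoff}
        tech_applicants = [w for w in priority if applications[w] == "tech"]
        tech_winner = tech_applicants[0] if tech_applicants else None
        for w, job in applications.items():
            # Workers who lose the tech slot fall back to "other" and learn nothing about tech.
            realized = job if (job != "tech" or w == tech_winner) else "other"
            reward = true_payoff[w][realized] + random.gauss(0, 0.1)
            counts[w][realized] += 1
            est[w][realized] += (reward - est[w][realized]) / counts[w][realized]

    print(counts)   # bob gets only a handful of tech samples
    print(est)      # so his estimate of the tech payoff stays noisy, and competition
                    # keeps him out even once tech looks attractive to him
    ```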

    Cen’s work examines this interaction between learning and competition, studying whether it is possible for individuals on both sides of the matching market to walk away happy.

    Modeling such matching markets, Cen and Shah found that it is indeed possible to get to a stable outcome (workers aren’t incentivized to leave the matching market), with low regret (workers are happy with their long-term outcomes), fairness (happiness is evenly distributed), and high social welfare.

    Interestingly, it’s not obvious that it’s possible to get stability, low regret, fairness, and high social welfare simultaneously.  So another important aspect of the research was uncovering when it is possible to achieve all four criteria at once and exploring the implications of those conditions.

    What is the effect of X on Y?

    For the next few years, though, Cen plans to work on a new project, studying how to quantify the effect of an action X on an outcome Y when it’s expensive — or impossible — to measure this effect, focusing in particular on systems that have complex social behaviors.

    For instance, when Covid-19 cases surged in the pandemic, many cities had to decide what restrictions to adopt, such as mask mandates, business closures, or stay-home orders. They had to act fast and balance public health with community and business needs, public spending, and a host of other considerations.

    Typically, in order to estimate the effect of restrictions on the rate of infection, one might compare the rates of infection in areas that underwent different interventions. If one county has a mask mandate while its neighboring county does not, one might think comparing the counties’ infection rates would reveal the effectiveness of mask mandates. 

    But of course, no county exists in a vacuum. If, for instance, people from both counties gather every week to watch a football game in the maskless county, the two populations mix. These complex interactions matter, and Cen plans to study questions of cause and effect in such settings.
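
    A tiny numerical sketch of why such mixing muddies the naive comparison (all numbers are invented for illustration):

    ```python
    # Illustrative numbers only: a naive county-to-county comparison understates the
    # true effect of a mandate when residents of the two counties mix.
    base_risk = 0.10          # weekly infection risk with no mandate
    mandate_effect = 0.04     # true reduction in weekly risk from the mandate
    mixing = 0.5              # fraction of exposure county A residents get in county B

    risk_A_isolated = base_risk - mandate_effect   # A's rate if there were no mixing
    risk_B = base_risk

    # Part of county A's exposure happens under county B's maskless conditions,
    # pulling A's observed rate toward B's.
    observed_risk_A = (1 - mixing) * risk_A_isolated + mixing * risk_B

    naive_estimate = risk_B - observed_risk_A      # what comparing the counties shows
    print(f"true effect:    {mandate_effect:.3f}")
    print(f"naive estimate: {naive_estimate:.3f}")  # attenuated to half by the mixing
    ```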

    “We’re interested in how decisions or interventions affect an outcome of interest, such as how criminal justice reform affects incarceration rates or how an ad campaign might change the public’s behaviors,” Cen says.

    Cen has also applied the principles of promoting inclusivity to her work in the MIT community.

    As one of three co-presidents of the Graduate Women in MIT EECS student group, she helped organize the inaugural GW6 research summit featuring the research of women graduate students — not only to showcase positive role models to students, but also to highlight the many successful graduate women at MIT who are not to be underestimated.

    Whether in computing or in the community, a system taking steps to address bias is one that enjoys legitimacy and trust, Cen says. “Accountability, legitimacy, trust — these principles play crucial roles in society and, ultimately, will determine which systems endure with time.”

  • On the road to cleaner, greener, and faster driving

    No one likes sitting at a red light. But signalized intersections aren’t just a minor nuisance for drivers; vehicles consume fuel and emit greenhouse gases while waiting for the light to change.

    What if motorists could time their trips so they arrive at the intersection when the light is green? While that might be just a lucky break for a human driver, it could be achieved more consistently by an autonomous vehicle that uses artificial intelligence to control its speed.

    In a new study, MIT researchers demonstrate a machine-learning approach that can learn to control a fleet of autonomous vehicles as they approach and travel through a signalized intersection in a way that keeps traffic flowing smoothly.

    Using simulations, they found that their approach reduces fuel consumption and emissions while improving average vehicle speed. The technique gets the best results if all cars on the road are autonomous, but even if only 25 percent use their control algorithm, it still leads to substantial fuel and emissions benefits.

    “This is a really interesting place to intervene. No one’s life is better because they were stuck at an intersection. With a lot of other climate change interventions, there is a quality-of-life difference that is expected, so there is a barrier to entry there. Here, the barrier is much lower,” says senior author Cathy Wu, the Gilbert W. Winslow Career Development Assistant Professor in the Department of Civil and Environmental Engineering and a member of the Institute for Data, Systems, and Society (IDSS) and the Laboratory for Information and Decision Systems (LIDS).

    The lead author of the study is Vindula Jayawardana, a graduate student in LIDS and the Department of Electrical Engineering and Computer Science. The research will be presented at the European Control Conference.

    Intersection intricacies

    While humans may drive past a green light without giving it much thought, intersections can present billions of different scenarios depending on the number of lanes, how the signals operate, the number of vehicles and their speeds, the presence of pedestrians and cyclists, etc.

    Typical approaches for tackling intersection control problems use mathematical models to solve one simple, ideal intersection. That looks good on paper, but likely won’t hold up in the real world, where traffic patterns are often about as messy as they come.

    Wu and Jayawardana shifted gears and approached the problem using a model-free technique known as deep reinforcement learning. Reinforcement learning is a trial-and-error method where the control algorithm learns to make a sequence of decisions. It is rewarded when it finds a good sequence. With deep reinforcement learning, the algorithm leverages assumptions learned by a neural network to find shortcuts to good sequences, even if there are billions of possibilities.

    This is useful for solving a long-horizon problem like this; the control algorithm must issue upwards of 500 acceleration instructions to a vehicle over an extended time period, Wu explains.

    “And we have to get the sequence right before we know that we have done a good job of mitigating emissions and getting to the intersection at a good speed,” she adds.

    But there’s an additional wrinkle. The researchers want the system to learn a strategy that reduces fuel consumption and limits the impact on travel time. These goals can be conflicting.

    “To reduce travel time, we want the car to go fast, but to reduce emissions, we want the car to slow down or not move at all. Those competing rewards can be very confusing to the learning agent,” Wu says.

    While it is challenging to solve this problem in its full generality, the researchers employed a workaround using a technique known as reward shaping. With reward shaping, they give the system some domain knowledge it is unable to learn on its own. In this case, they penalized the system whenever the vehicle came to a complete stop, so it would learn to avoid that action.
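
    A minimal sketch of what such a shaped reward could look like; the signal names, weights, and penalty value are assumptions for illustration, not the researchers’ actual reward function.

    ```python
    # Hedged sketch: the weights and the stop penalty are invented for illustration.
    def shaped_reward(speed_mps, fuel_rate_gps, stopped,
                      w_speed=1.0, w_fuel=5.0, stop_penalty=2.0):
        """Score one control step for an eco-driving agent.

        speed_mps     -- current speed (m/s); higher speed means less travel time
        fuel_rate_gps -- instantaneous fuel use (g/s); a proxy for emissions
        stopped       -- True if the vehicle has come to a complete stop
        """
        reward = w_speed * speed_mps - w_fuel * fuel_rate_gps
        if stopped:
            # Reward shaping injects the domain knowledge that idling at the light
            # wastes fuel: a complete stop is explicitly penalized.
            reward -= stop_penalty
        return reward

    # Creeping toward the green beats waiting at the line.
    print(shaped_reward(speed_mps=3.0, fuel_rate_gps=0.3, stopped=False))  #  1.5
    print(shaped_reward(speed_mps=0.0, fuel_rate_gps=0.2, stopped=True))   # -3.0
    ```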

    Traffic tests

    Once they developed an effective control algorithm, they evaluated it using a traffic simulation platform with a single intersection. The control algorithm is applied to a fleet of connected autonomous vehicles, which can communicate with upcoming traffic lights to receive signal phase and timing information and observe their immediate surroundings. The control algorithm tells each vehicle how to accelerate and decelerate.

    Their system didn’t create any stop-and-go traffic as vehicles approached the intersection. (Stop-and-go traffic occurs when cars are forced to come to a complete stop due to stopped traffic ahead.) In simulations, more cars made it through in a single green phase, which outperformed a model that simulates human drivers. When compared to other optimization methods also designed to avoid stop-and-go traffic, their technique resulted in larger reductions in fuel consumption and emissions. If every vehicle on the road is autonomous, their control system can reduce fuel consumption by 18 percent and carbon dioxide emissions by 25 percent, while boosting travel speeds by 20 percent.

    “A single intervention having 20 to 25 percent reduction in fuel or emissions is really incredible. But what I find interesting, and was really hoping to see, is this non-linear scaling. If we only control 25 percent of vehicles, that gives us 50 percent of the benefits in terms of fuel and emissions reduction. That means we don’t have to wait until we get to 100 percent autonomous vehicles to get benefits from this approach,” she says.

    Down the road, the researchers want to study interaction effects between multiple intersections. They also plan to explore how different intersection set-ups (number of lanes, signals, timings, etc.) can influence travel time, emissions, and fuel consumption. In addition, they intend to study how their control system could impact safety when autonomous vehicles and human drivers share the road. For instance, even though autonomous vehicles may drive differently than human drivers, slower roadways and roadways with more consistent speeds could improve safety, Wu says.

    While this work is still in its early stages, Wu sees this approach as one that could be more feasibly implemented in the near-term.

    “The aim in this work is to move the needle in sustainable mobility. We want to dream, as well, but these systems are big monsters of inertia. Identifying points of intervention that are small changes to the system but have significant impact is something that gets me up in the morning,” she says.  

    This work was supported, in part, by the MIT-IBM Watson AI Lab.

  • System helps severely motor-impaired individuals type more quickly and accurately

    In 1995, French fashion magazine editor Jean-Dominique Bauby suffered a seizure while driving a car, which left him with a condition known as locked-in syndrome, a neurological disease in which the patient is completely paralyzed and can only move muscles that control the eyes.

    Bauby, who had signed a book contract shortly before his accident, wrote the memoir “The Diving Bell and the Butterfly” using a dictation system in which his speech therapist recited the alphabet and he would blink when she said the correct letter. They wrote the 130-page book one blink at a time.

    Technology has come a long way since Bauby’s accident. Many individuals with severe motor impairments caused by locked-in syndrome, cerebral palsy, amyotrophic lateral sclerosis, or other conditions can communicate using computer interfaces where they select letters or words in an onscreen grid by activating a single switch, often by pressing a button, releasing a puff of air, or blinking.

    But these row-column scanning systems are very rigid, and, similar to the technique used by Bauby’s speech therapist, they highlight each option one at a time, making them frustratingly slow for some users. And they are not suitable for tasks where options can’t be arranged in a grid, like drawing, browsing the web, or gaming.

    A more flexible system being developed by researchers at MIT places individual selection indicators next to each option on a computer screen. The indicators can be placed anywhere — next to anything someone might click with a mouse — so a user does not need to cycle through a grid of choices to make selections. The system, called Nomon, incorporates probabilistic reasoning to learn how users make selections, and then adjusts the interface to improve their speed and accuracy.

    Participants in a user study were able to type faster using Nomon than with a row-column scanning system. The users also performed better on a picture selection task, demonstrating how Nomon could be used for more than typing.

    “It is so cool and exciting to be able to develop software that has the potential to really help people. Being able to find those signals and turn them into communication as we are used to it is a really interesting problem,” says senior author Tamara Broderick, an associate professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a member of the Laboratory for Information and Decision Systems and the Institute for Data, Systems, and Society.

    Joining Broderick on the paper are lead author Nicholas Bonaker, an EECS graduate student; Emli-Mari Nel, head of innovation and machine learning at Averly and a visiting lecturer at the University of Witwatersrand in South Africa; and Keith Vertanen, an associate professor at Michigan Tech. The research is being presented at the ACM Conference on Human Factors in Computing Systems.

    On the clock

    In the Nomon interface, a small analog clock is placed next to every option the user can select. (A gnomon is the part of a sundial that casts a shadow.) The user looks at one option and then clicks their switch when that clock’s hand passes a red “noon” line. After each click, the system changes the phases of the clocks to separate the most probable next targets. The user clicks repeatedly until their target is selected.

    When used as a keyboard, Nomon’s machine-learning algorithms try to guess the next word based on previous words and each new letter as the user makes selections.

    Broderick developed a simplified version of Nomon several years ago but decided to revisit it to make the system easier for motor-impaired individuals to use. She enlisted the help of then-undergraduate Bonaker to redesign the interface.

    They first consulted nonprofit organizations that work with motor-impaired individuals, as well as a motor-impaired switch user, to gather feedback on the Nomon design.

    Then they designed a user study that would better represent the abilities of motor-impaired individuals. They wanted to make sure to thoroughly vet the system before using much of the valuable time of motor-impaired users, so they first tested on non-switch users, Broderick explains.

    Switching up the switch

    To gather more representative data, Bonaker devised a webcam-based switch that was harder to use than simply clicking a key. The non-switch users had to lean their bodies to one side of the screen and then back to the other side to register a click.

    “And they have to do this at precisely the right time, so it really slows them down. We did some empirical studies which showed that they were much closer to the response times of motor-impaired individuals,” Broderick says.

    They ran a 10-session user study with 13 non-switch participants and one single-switch user with an advanced form of spinal muscular dystrophy. In the first nine sessions, participants used Nomon and a row-column scanning interface for 20 minutes each to perform text entry, and in the 10th session they used the two systems for a picture selection task.

    Non-switch users typed 15 percent faster using Nomon, while the motor-impaired user typed even faster than the non-switch users. When typing unfamiliar words, the users were 20 percent faster overall and made half as many errors. In their final session, they were able to complete the picture selection task 36 percent faster using Nomon.

    “Nomon is much more forgiving than row-column scanning. With row-column scanning, even if you are just slightly off, now you’ve chosen B instead of A and that’s an error,” Broderick says.

    Adapting to noisy clicks

    With its probabilistic reasoning, Nomon incorporates everything it knows about where a user is likely to click to make the process faster, easier, and less error-prone. For instance, if the user selects “Q,” Nomon will make it as easy as possible for the user to select “U” next.

    Nomon also learns how a user clicks. So, if the user always clicks a little after the clock’s hand strikes noon, the system adapts to that in real time. It also adapts to noisiness. If a user’s click is often off the mark, the system requires extra clicks to ensure accuracy.
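
    A toy sketch of that probabilistic reasoning, assuming a Gaussian model of click timing; the actual Nomon model and its parameters may differ.

    ```python
    import math

    class ClickModel:
        """Learns how a user's clicks land relative to a clock's 'noon' line."""
        def __init__(self):
            self.offsets = [0.0]             # start with a pseudo-observation at noon

        def update(self, offset):
            self.offsets.append(offset)      # record each confirmed click offset

        def likelihood(self, offset):
            # Gaussian likelihood under the user's learned timing behavior.
            n = len(self.offsets)
            mean = sum(self.offsets) / n
            var = sum((o - mean) ** 2 for o in self.offsets) / n + 0.01
            return math.exp(-(offset - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

    def update_beliefs(priors, clock_phases, click_time, model):
        """Bayes update: which on-screen option was the click most likely aimed at?"""
        scores = {opt: p * model.likelihood(click_time - clock_phases[opt])
                  for opt, p in priors.items()}
        total = sum(scores.values())
        return {opt: s / total for opt, s in scores.items()}

    model = ClickModel()
    priors = {"A": 0.40, "B": 0.35, "C": 0.25}   # e.g., from a language model
    phases = {"A": 0.00, "B": 0.40, "C": 0.80}   # when each clock's hand hits noon (s)
    print(update_beliefs(priors, phases, click_time=0.45, model=model))
    # Nomon keeps asking for clicks until one option's posterior clears a threshold;
    # a noisier click history widens the Gaussian, so more clicks are needed.
    ```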

    This probabilistic reasoning makes Nomon powerful but also requires a higher click-load than row-column scanning systems. Clicking multiple times can be a trying task for severely motor-impaired users.

    Broderick hopes to reduce the click-load by incorporating gaze tracking into Nomon, which would give the system more robust information about what a user might choose next based on which part of the screen they are looking at. The researchers also want to find a better way to automatically adjust the clock speeds to help users be more accurate and efficient.

    They are working on a new series of studies in which they plan to partner with more motor-impaired users.

    “So far, the feedback from motor-impaired users has been invaluable to us; we’re very grateful to the motor-impaired user who commented on our initial interface and the separate motor-impaired user who participated in our study. We’re currently extending our study to work with a bigger and more diverse group of our target population. With their help, we’re already making further improvements to our interface and working to better understand the performance of Nomon,” she says.

    “Nonspeaking individuals with motor disabilities are currently not provided with efficient communication solutions for interacting with either speaking partners or computer systems. This ‘communication gap’ is a known unresolved problem in human-computer interaction, and so far there are no good solutions. This paper demonstrates that a highly creative approach underpinned by a statistical model can provide tangible performance gains to the users who need it the most: nonspeaking individuals reliant on a single switch to communicate,” says Per Ola Kristensson, professor of interactive systems engineering at Cambridge University, who was not involved with this research. “The paper also demonstrates the value of complementing insights from computational experiments with the involvement of end-users and other stakeholders in the design process. I find this a highly creative and important paper in an area where it is notoriously difficult to make significant progress.”

    This research was supported, in part, by the Seth Teller Memorial Fund to Advanced Technology for People with Disabilities, a Peter J. Eloranta Summer Undergraduate Research Fellowship, the MIT Quest for Intelligence, and the National Science Foundation.

  • A tool for predicting the future

    Whether someone is trying to predict tomorrow’s weather, forecast future stock prices, identify missed opportunities for sales in retail, or estimate a patient’s risk of developing a disease, they will likely need to interpret time-series data, which are a collection of observations recorded over time.

    Making predictions using time-series data typically requires several data-processing steps and the use of complex machine-learning algorithms, which have such a steep learning curve they aren’t readily accessible to nonexperts.

    To make these powerful tools more user-friendly, MIT researchers developed a system that directly integrates prediction functionality on top of an existing time-series database. Their simplified interface, which they call tspDB (time series predict database), does all the complex modeling behind the scenes so a nonexpert can easily generate a prediction in only a few seconds.

    The new system is more accurate and more efficient than state-of-the-art deep learning methods when performing two tasks: predicting future values and filling in missing data points.

    One reason tspDB is so successful is that it incorporates a novel time-series-prediction algorithm, explains electrical engineering and computer science (EECS) graduate student Abdullah Alomar, an author of a recent research paper in which he and his co-authors describe the algorithm. This algorithm is especially effective at making predictions on multivariate time-series data, which are data that have more than one time-dependent variable. In a weather database, for instance, temperature, dew point, and cloud cover each depend on their past values.

    The algorithm also estimates the volatility of a multivariate time series to provide the user with a confidence level for its predictions.

    “Even as the time-series data becomes more and more complex, this algorithm can effectively capture any time-series structure out there. It feels like we have found the right lens to look at the model complexity of time-series data,” says senior author Devavrat Shah, the Andrew and Erna Viterbi Professor in EECS and a member of the Institute for Data, Systems, and Society and of the Laboratory for Information and Decision Systems.

    Joining Alomar and Shah on the paper is lead author Anish Agrawal, a former EECS graduate student who is currently a postdoc at the Simons Institute at the University of California at Berkeley. The research will be presented at the ACM SIGMETRICS conference.

    Adapting a new algorithm

    Shah and his collaborators have been working on the problem of interpreting time-series data for years, adapting different algorithms and integrating them into tspDB as they built the interface.

    About four years ago, they learned about a particularly powerful classical algorithm, called singular spectrum analysis (SSA), that imputes and forecasts single time series. Imputation is the process of replacing missing values or correcting past values. While this algorithm required manual parameter selection, the researchers suspected it could enable their interface to make effective predictions using time-series data. In earlier work, they removed the need for this manual intervention.

    The single time-series algorithm transforms the series into a matrix and applies matrix estimation procedures. The key intellectual challenge was how to adapt it to utilize multiple time series. After a few years of struggle, they realized the answer was something very simple: “stack” the matrices for each individual time series, treat the result as one big matrix, and then apply the single time-series algorithm to it.

    This utilizes information across multiple time series naturally — both across the time series and across time, which they describe in their new paper.
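
    A much-simplified sketch of that stacking idea appears below; the window size, the rank, and the plain SVD-based estimation step are assumptions for illustration, and the actual mSSA algorithm also handles missing entries and adds a forecasting step.

    ```python
    import numpy as np

    def page_matrix(series, L):
        """Reshape one series into an L x (T/L) matrix of non-overlapping windows."""
        T = (len(series) // L) * L
        return series[:T].reshape(-1, L).T

    def mssa_denoise(series_list, L=25, rank=3):
        # 1. Stack the per-series matrices side by side into one big matrix.
        big = np.hstack([page_matrix(s, L) for s in series_list])
        # 2. Matrix estimation via truncated SVD: keep only the top singular values.
        U, s, Vt = np.linalg.svd(big, full_matrices=False)
        low_rank = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
        # 3. Unstack back into one denoised estimate per input series.
        cols = big.shape[1] // len(series_list)
        return [low_rank[:, i * cols:(i + 1) * cols].T.reshape(-1)
                for i in range(len(series_list))]

    # Two noisy, related series (say, temperature and dew point) share structure.
    t = np.arange(500)
    temp = 10 * np.sin(2 * np.pi * t / 50) + np.random.randn(500)
    dew = 6 * np.sin(2 * np.pi * t / 50 + 0.3) + np.random.randn(500)
    temp_hat, dew_hat = mssa_denoise([temp, dew])
    print(np.corrcoef(temp_hat, 10 * np.sin(2 * np.pi * t / 50))[0, 1])  # close to 1
    ```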

    This recent publication also discusses interesting alternatives, where instead of transforming the multivariate time series into a big matrix, it is viewed as a three-dimensional tensor. A tensor is a multi-dimensional array, or grid, of numbers. This established a promising connection between the classical field of time series analysis and the growing field of tensor estimation, Alomar says.

    “The variant of mSSA that we introduced actually captures all of that beautifully. So, not only does it provide the most likely estimation, but a time-varying confidence interval, as well,” Shah says.

    The simpler, the better

    They tested the adapted mSSA against other state-of-the-art algorithms, including deep-learning methods, on real-world time-series datasets with inputs drawn from the electricity grid, traffic patterns, and financial markets.

    Their algorithm outperformed all the others on imputation and it outperformed all but one of the other algorithms when it came to forecasting future values. The researchers also demonstrated that their tweaked version of mSSA can be applied to any kind of time-series data.

    “One reason I think this works so well is that the model captures a lot of time series dynamics, but at the end of the day, it is still a simple model. When you are working with something simple like this, instead of a neural network that can easily overfit the data, you can actually perform better,” Alomar says.

    The impressive performance of mSSA is what makes tspDB so effective, Shah explains. Now, their goal is to make this algorithm accessible to everyone.

    Once a user installs tspDB on top of an existing database, they can run a prediction query with just a few keystrokes in about 0.9 milliseconds, as compared to 0.5 milliseconds for a standard search query. The confidence intervals are also designed to help nonexperts make more informed decisions by incorporating the degree of uncertainty of the predictions into their decision making.

    For instance, the system could enable a nonexpert to predict future stock prices with high accuracy in just a few minutes, even if the time-series dataset contains missing values.

    Now that the researchers have shown why mSSA works so well, they are targeting new algorithms that can be incorporated into tspDB. One of these algorithms utilizes the same model to automatically enable change point detection, so if the user believes their time series will change its behavior at some point, the system will automatically detect that change and incorporate that into its predictions.

    They also want to continue gathering feedback from current tspDB users to see how they can improve the system’s functionality and user-friendliness, Shah says.

    “Our interest at the highest level is to make tspDB a success in the form of a broadly utilizable, open-source system. Time-series data are very important, and this is a beautiful concept of actually building prediction functionalities directly into the database. It has never been done before, and so we want to make sure the world uses it,” he says.

    “This work is very interesting for a number of reasons. It provides a practical variant of mSSA which requires no hand tuning, they provide the first known analysis of mSSA, and the authors demonstrate the real-world value of their algorithm by being competitive with or out-performing several known algorithms for imputations and predictions in (multivariate) time series for several real-world data sets,” says Vishal Misra, a professor of computer science at Columbia University who was not involved with this research. “At the heart of it all is the beautiful modeling work where they cleverly exploit correlations across time (within a time series) and space (across time series) to create a low-rank spatiotemporal factor representation of a multivariate time series. Importantly this model connects the field of time series analysis to that of the rapidly evolving topic of tensor completion, and I expect a lot of follow-on research spurred by this paper.”

  • Systems scientists find clues to why false news snowballs on social media

    The spread of misinformation on social media is a pressing societal problem that tech companies and policymakers continue to grapple with, yet those who study this issue still don’t have a deep understanding of why and how false news spreads.

    To shed some light on this murky topic, researchers at MIT developed a theoretical model of a Twitter-like social network to study how news is shared and explore situations where a non-credible news item will spread more widely than the truth. Agents in the model are driven by a desire to persuade others to take on their point of view: The key assumption in the model is that people bother to share something with their followers if they think it is persuasive and likely to move others closer to their mindset. Otherwise they won’t share.

    The researchers found that in such a setting, when a network is highly connected or the views of its members are sharply polarized, news that is likely to be false will spread more widely and travel deeper into the network than news with higher credibility.

    This theoretical work could inform empirical studies of the relationship between news credibility and the size of its spread, which might help social media companies adapt networks to limit the spread of false information.

    “We show that, even if people are rational in how they decide to share the news, this could still lead to the amplification of information with low credibility. With this persuasion motive, no matter how extreme my beliefs are — given that the more extreme they are the more I gain by moving others’ opinions — there is always someone who would amplify [the information],” says senior author Ali Jadbabaie, professor and head of the Department of Civil and Environmental Engineering and a core faculty member of the Institute for Data, Systems, and Society (IDSS) and a principal investigator in the Laboratory for Information and Decision Systems (LIDS).

    Joining Jadbabaie on the paper are first author Chin-Chia Hsu, a graduate student in the Social and Engineering Systems program in IDSS, and Amir Ajorlou, a LIDS research scientist. The research will be presented this week at the IEEE Conference on Decision and Control.

    Pondering persuasion

    This research draws on a 2018 study by Sinan Aral, the David Austin Professor of Management at the MIT Sloan School of Management; Deb Roy, an associate professor of media arts and sciences at the Media Lab; and former postdoc Soroush Vosoughi (now an assistant professor of computer science at Dartmouth College). Their empirical study of data from Twitter found that false news spreads wider, faster, and deeper than real news.

    Jadbabaie and his collaborators wanted to drill down on why this occurs.

    They hypothesized that persuasion might be a strong motive for sharing news — perhaps agents in the network want to persuade others to take on their point of view — and decided to build a theoretical model that would let them explore this possibility.

    In their model, agents have some prior belief about a policy, and their goal is to persuade followers to move their beliefs closer to the agent’s side of the spectrum.

    A news item is initially released to a small, random subgroup of agents, who must decide whether to share this news with their followers. An agent weighs the newsworthiness of the item and its credibility, and updates its belief based on how surprising or convincing the news is.

    “They will make a cost-benefit analysis to see if, on average, this piece of news will move people closer to what they think or move them away. And we include a nominal cost for sharing. For instance, taking some action, if you are scrolling on social media, you have to stop to do that. Think of that as a cost. Or a reputation cost might come if I share something that is embarrassing. Everyone has this cost, so the more extreme and the more interesting the news is, the more you want to share it,” Jadbabaie says.

    If the news affirms the agent’s perspective and has persuasive power that outweighs the nominal cost, the agent will always share the news. But if an agent thinks the news item is something others may have already seen, the agent is disincentivized to share it.

    Since an agent’s willingness to share news is a product of its perspective and how persuasive the news is, the more extreme an agent’s perspective or the more surprising the news, the more likely the agent will share it.
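
    A stylized sketch of that cost-benefit sharing rule; the functional form and the numbers are illustrative, not the exact utility in the paper.

    ```python
    # Beliefs and the news "position" live on a left-right spectrum in [-1, 1];
    # credibility in [0, 1] scales how far a follower would move. All illustrative.
    def will_share(agent_belief, follower_belief, news_position, credibility,
                   prob_already_seen=0.2, cost=0.1):
        """Share if the expected persuasive benefit outweighs the nominal cost."""
        # Expected shift of the follower toward the news position, discounted by
        # credibility and by the chance the follower has already seen the item.
        shift = credibility * (1 - prob_already_seen) * (news_position - follower_belief)
        # The agent only values movement toward its own side of the spectrum.
        benefit = shift if (agent_belief - follower_belief) * shift > 0 else 0.0
        return abs(benefit) > cost

    # A surprising item far to one side clears the bar for an agent with extreme
    # views even at modest credibility, while a mild item is not worth the cost.
    print(will_share(agent_belief=0.9, follower_belief=0.0,
                     news_position=0.8, credibility=0.4))   # True
    print(will_share(agent_belief=0.2, follower_belief=0.0,
                     news_position=0.1, credibility=0.9))   # False
    ```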

    The researchers used this model to study how information spreads during a news cascade, which is an unbroken sharing chain that rapidly permeates the network.

    Connectivity and polarization

    The team found that when a network has high connectivity and the news is surprising, the credibility threshold for starting a news cascade is lower. High connectivity means that there are multiple connections between many users in the network.

    Likewise, when the network is largely polarized, there are plenty of agents with extreme views who want to share the news item, starting a news cascade. In both these instances, news with low credibility creates the largest cascades.

    “For any piece of news, there is a natural network speed limit, a range of connectivity, that facilitates good transmission of information where the size of the cascade is maximized by true news. But if you exceed that speed limit, you will get into situations where inaccurate news or news with low credibility has a larger cascade size,” Jadbabaie says.

    If the views of users in the network become more diverse, it is less likely that a piece of news with low credibility will spread more widely than the truth.

    Jadbabaie and his colleagues designed the agents in the network to behave rationally, so the model would better capture actions real humans might take if they want to persuade others.

    “Someone might say that is not why people share, and that is valid. Why people do certain things is a subject of intense debate in cognitive science, social psychology, neuroscience, economics, and political science,” he says. “Depending on your assumptions, you end up getting different results. But I feel like this assumption of persuasion being the motive is a natural assumption.”

    Their model also shows how costs can be manipulated to reduce the spread of false information. Agents make a cost-benefit analysis and won’t share news if the cost to do so outweighs the benefit of sharing.

    “We don’t make any policy prescriptions, but one thing this work suggests is that, perhaps, having some cost associated with sharing news is not a bad idea. The reason you get lots of these cascades is because the cost of sharing the news is actually very low,” he says.

    This work was supported by an Army Research Office Multidisciplinary University Research Initiative grant and a Vannevar Bush Fellowship from the Office of the Secretary of Defense.