More stories

  • AI copilot enhances human precision for safer aviation

    Imagine you’re in an airplane with two pilots, one human and one computer. Both have their “hands” on the controllers, but they’re always looking out for different things. If they’re both paying attention to the same thing, the human gets to steer. But if the human gets distracted or misses something, the computer quickly takes over.

    Meet the Air-Guardian, a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). As modern pilots grapple with an onslaught of information from multiple monitors, especially during critical moments, Air-Guardian acts as a proactive copilot: a partnership between human and machine, rooted in understanding attention.

    But how does it determine attention, exactly? For humans, it uses eye-tracking, and for the neural system, it relies on something called “saliency maps,” which pinpoint where attention is directed. The maps serve as visual guides highlighting key regions within an image, aiding in grasping and deciphering the behavior of intricate algorithms. Air-Guardian identifies early signs of potential risks through these attention markers, instead of only intervening during safety breaches like traditional autopilot systems. 
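
    To make the attention-comparison idea concrete, here is a minimal sketch (not Air-Guardian’s actual cooperative layer): compare a human gaze map against the network’s saliency map and shift control authority toward the machine as the two diverge. The map sizes, the cosine-similarity measure, and the hand-off threshold are illustrative assumptions.

    ```python
    import numpy as np

    def attention_overlap(human_map: np.ndarray, machine_map: np.ndarray) -> float:
        """Cosine similarity between two attention (saliency) maps."""
        h = human_map.ravel() / (np.linalg.norm(human_map) + 1e-8)
        m = machine_map.ravel() / (np.linalg.norm(machine_map) + 1e-8)
        return float(h @ m)

    def blend_controls(human_cmd, machine_cmd, human_map, machine_map, threshold=0.5):
        """Give the human full authority when both attend to the same region;
        shift authority toward the machine as their attention diverges."""
        overlap = attention_overlap(human_map, machine_map)
        alpha = 1.0 if overlap >= threshold else overlap / threshold  # human authority in [0, 1]
        return alpha * np.asarray(human_cmd) + (1.0 - alpha) * np.asarray(machine_cmd)

    # Toy example: random maps stand in for an eye-tracking heat map and a saliency map.
    rng = np.random.default_rng(0)
    cmd = blend_controls(human_cmd=[0.2, -0.1], machine_cmd=[0.0, 0.3],
                         human_map=rng.random((32, 32)), machine_map=rng.random((32, 32)))
    print(cmd)
    ```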

    The broader implications of this system reach beyond aviation. Similar cooperative control mechanisms could one day be used in cars, drones, and a wider spectrum of robotics.

    “An exciting feature of our method is its differentiability,” says MIT CSAIL postdoc Lianhao Yin, a lead author on a new paper about Air-Guardian. “Our cooperative layer and the entire end-to-end process can be trained. We specifically chose the causal continuous-depth neural network model because of its dynamic features in mapping attention. Another unique aspect is adaptability. The Air-Guardian system isn’t rigid; it can be adjusted based on the situation’s demands, ensuring a balanced partnership between human and machine.”

    In field tests, both the pilot and the system made decisions based on the same raw images when navigating to the target waypoint. Air-Guardian’s success was gauged by the cumulative rewards earned during flight and by the length of the path to the waypoint. The guardian reduced the risk level of flights and increased the success rate of navigating to target points.

    “This system represents the innovative approach of human-centric AI-enabled aviation,” adds Ramin Hasani, MIT CSAIL research affiliate and inventor of liquid neural networks. “Our use of liquid neural networks provides a dynamic, adaptive approach, ensuring that the AI doesn’t merely replace human judgment but complements it, leading to enhanced safety and collaboration in the skies.”

    The true strength of Air-Guardian lies in its foundational technology. An optimization-based cooperative layer combines visual attention from human and machine, and liquid closed-form continuous-time (CfC) neural networks, known for their prowess in deciphering cause-and-effect relationships, analyze incoming images for vital information. Complementing this is the VisualBackProp algorithm, which identifies the system’s focal points within an image, ensuring clear understanding of its attention maps.

    For future mass adoption, there’s a need to refine the human-machine interface. Feedback suggests an indicator, like a bar, might be more intuitive to signify when the guardian system takes control.

    Air-Guardian heralds a new age of safer skies, offering a reliable safety net for those moments when human attention wavers.

    “The Air-Guardian system highlights the synergy between human expertise and machine learning, furthering the objective of using machine learning to augment pilots in challenging scenarios and reduce operational errors,” says Daniela Rus, the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, director of CSAIL, and senior author on the paper.

    “One of the most interesting outcomes of using a visual attention metric in this work is the potential for allowing earlier interventions and greater interpretability by human pilots,” says Stephanie Gil, assistant professor of computer science at Harvard University, who was not involved in the work. “This showcases a great example of how AI can be used to work with a human, lowering the barrier for achieving trust by using natural communication mechanisms between the human and the AI system.”

    This research was partially funded by the U.S. Air Force (USAF) Research Laboratory, the USAF Artificial Intelligence Accelerator, the Boeing Co., and the Office of Naval Research. The findings don’t necessarily reflect the views of the U.S. government or the USAF.

  • Why big changes early in life can help later on

    Imagine moving from state to state while growing up in the U.S., transferring between high schools, and eventually attending college out of state. The first two events might seem disruptive, and the third involves departing a local community. And yet, these things may be exactly what helps some people thrive later in life.

    That’s one implication of a newly published study about social networks co-authored by an MIT professor, which finds that so-called long ties — connections between people who otherwise lack any mutual contacts — are highly associated with greater economic success in life. Those long ties are fostered partly by turning points such as moving between states and switching schools.

    The study, based on a large quantity of Facebook data, both illuminates how productive social networks are structured and identifies specific life events that significantly shape people’s networks.

    “People who have more long ties [on Facebook], and who have stronger long ties, have better economic indicators,” says Dean Eckles, an MIT professor and co-author of a new paper detailing the study’s findings.

    “Our hope is that the study provides better evidence of this really strong relationship, at the scale of the entire U.S,” Eckles says. “There hasn’t really been this sort of investigation into those types of disruptive life events.”

    The paper, “Long ties, disruptive life events, and economic prosperity,” appears in open-access form in Proceedings of the National Academy of Sciences. The authors are Eaman Jahani PhD ’21, a postdoc and lecturer at the University of California at Berkeley, who received his doctorate from MIT’s Institute for Data, Systems, and Society, and the Statistics and Data Science Center; Samuel P. Fraiberger, a data scientist at the World Bank; Michael Bailey, an economist and research scientist manager at Meta Platforms (which operates Facebook); and Eckles, an associate professor of marketing at MIT Sloan School of Management. Jahani, who worked at Meta when the study was conducted, performed the initial research, and the aggregate data analysis protected the privacy of individuals in compliance with regulations.

    On the move

    In recent decades, scholars have often analyzed social networks while building on a 1973 study by Stanford University’s Mark Granovetter, “The Strength of Weak Ties,” one of the 10 most-cited social science papers of all time. In it, Granovetter postulated that a network’s “weak ties” — the people you know less well — are vital. Your best friends may have networks quite similar to your own, but your “weak ties” provide additional connections useful for employment, and more. Granovetter also edited this current paper for PNAS.

    To conduct the study, the scholars mapped all reciprocal interactions among U.S.-based Facebook accounts from December 2020 to June 2021, to build a data-rich picture of social networks in action. The researchers maintain a distinction between “long” and “short” ties; in this definition, long ties have no other mutual connections at all, while short ties have some.
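
    As a small illustration of that definition (not the study’s Facebook-scale pipeline), the sketch below labels each edge of a toy graph as a long or short tie by checking for mutual contacts; the networkx library and the toy names are used purely for convenience.

    ```python
    import networkx as nx

    def classify_ties(G: nx.Graph):
        """Split each edge into 'long' (no mutual contacts) or 'short' (at least one)."""
        long_ties, short_ties = [], []
        for u, v in G.edges():
            mutual = (set(G[u]) & set(G[v])) - {u, v}
            (long_ties if not mutual else short_ties).append((u, v))
        return long_ties, short_ties

    # Toy network: a tight triangle of friends plus one bridge to an outside contact.
    G = nx.Graph([("ana", "ben"), ("ben", "cat"), ("cat", "ana"), ("ana", "dev")])
    long_ties, short_ties = classify_ties(G)
    print("long ties:", long_ties)    # the ana-dev bridge: no shared friends
    print("short ties:", short_ties)  # the triangle edges all share a mutual contact
    ```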

    Ultimately the scholars found that, when assessing everyone who has lived in the same state since 2012, those who had previously moved among U.S. states had 13 percent more long ties on Facebook than those who had not. Similarly, people who had switched high schools had 10 percent more long ties than people who had not.

    Facebook does not have income data for its users, so the scholars used a series of proxy measures to evaluate financial success. People with more long ties tend to live in higher-income areas, have more internet-connected devices, use more expensive mobile phones, and make more donations to charitable causes, compared to people with fewer long ties.

    Additionally, the research evaluates whether or not moving among states, or switching schools, is itself what causes people to have more long ties. After all, it could be the case that families who move more often have qualities that lead family members to be more proactive about forging ties with people.

    To examine this, the research team analyzed a subgroup of Facebook users who had switched high schools only when their first high school closed — meaning it was not their choice to change. Those people had 6 percent more long ties than those who had attended the same high schools but not been forced to switch; given this common pool of school attendees forced into divergent circumstances, the evidence suggests that making the school change itself “shapes the proclivity to connect with different communities,” as the scholars write in the paper. 

    “It’s a plausibly random nudge,” Eckles says, “and we find the people who were exposed to these high school closures end up with more long ties. I think that is one of the compelling elements pointing toward a causal story here.”

    Three types of events, same trend

    As the scholars acknowledge in the paper, there are some limitations to the study. Because it focuses on Facebook interactions, the research does not account for offline activities that may sustain social networks. It is also likely that economic success itself shapes people’s social networks, and not just that networks help shape success. Some people may have opportunities to maintain long ties, through professional work or travel, that others do not.

    On the other hand, the study does uncover long-term social network ties that had not been evaluated before, and, as the authors write, “having three different types of events — involving different processes by which people are selected into the disruption — pointing to the same conclusions makes for a more robust and notable pattern.”

    Other scholars in the field believe the study is a notable piece of research. In a commentary on the paper also published in PNAS, Michael Macy, a sociology professor at Cornell University, writes that “the authors demonstrate the importance of contributing to cumulative knowledge by confirming hypotheses derived from foundational theory while at the same time elaborating on what was previously known by digging deeper into the underlying causal mechanisms. In short, the paper is must reading not only for area specialists but for social scientists across the disciplines.”

    For his part, Eckles emphasizes that the researchers are releasing anonymized data from the study, so that other scholars can build on it, and develop additional insights about social network structure, while complying with all privacy regulations.

    “We’ve released [that] data and made it public, and we’re really happy to be doing that,” Eckles says. “We want to make as much of this as possible open to others. That’s one of the things that I’m hoping is part of the broader impact of the paper.”

    Jahani worked as a contractor at Meta Platforms, which operates Facebook, while conducting the research. Eckles has received past funding from Meta, as well as conference sponsorship, and previously worked there, before joining MIT.

  • Researchers create a tool for accurately simulating complex systems

    Researchers often use simulations when designing new algorithms, since testing ideas in the real world can be both costly and risky. But since it’s impossible to capture every detail of a complex system in a simulation, they typically collect a small amount of real data that they replay while simulating the components they want to study.

    Known as trace-driven simulation (the small pieces of real data are called traces), this method sometimes results in biased outcomes. This means researchers might unknowingly choose an algorithm that is not the best one they evaluated, and that will perform worse on real data than the simulation predicted.

    MIT researchers have developed a new method that eliminates this source of bias in trace-driven simulation. By enabling unbiased trace-driven simulations, the new technique could help researchers design better algorithms for a variety of applications, including improving video quality on the internet and increasing the performance of data processing systems.

    The researchers’ machine-learning algorithm draws on the principles of causality to learn how the data traces were affected by the behavior of the system. In this way, they can replay the correct, unbiased version of the trace during the simulation.

    When compared to a previously developed trace-driven simulator, the researchers’ simulation method correctly predicted which newly designed algorithm would be best for video streaming — meaning the one that led to less rebuffering and higher visual quality. Existing simulators that do not account for bias would have pointed researchers to a worse-performing algorithm.

    “Data are not the only thing that matter. The story behind how the data are generated and collected is also important. If you want to answer a counterfactual question, you need to know the underlying data generation story so you only intervene on those things that you really want to simulate,” says Arash Nasr-Esfahany, an electrical engineering and computer science (EECS) graduate student and co-lead author of a paper on this new technique.

    He is joined on the paper by co-lead authors and fellow EECS graduate students Abdullah Alomar and Pouya Hamadanian; recent graduate student Anish Agarwal PhD ’21; and senior authors Mohammad Alizadeh, an associate professor of electrical engineering and computer science; and Devavrat Shah, the Andrew and Erna Viterbi Professor in EECS and a member of the Institute for Data, Systems, and Society and of the Laboratory for Information and Decision Systems. The research was recently presented at the USENIX Symposium on Networked Systems Design and Implementation.

    Specious simulations

    The MIT researchers studied trace-driven simulation in the context of video streaming applications.

    In video streaming, an adaptive bitrate algorithm continually decides the video quality, or bitrate, to transfer to a device based on real-time data on the user’s bandwidth. To test how different adaptive bitrate algorithms impact network performance, researchers can collect real data from users during a video stream for a trace-driven simulation.

    They use these traces to simulate what would have happened to network performance had the platform used a different adaptive bitrate algorithm in the same underlying conditions.

    Researchers have traditionally assumed that trace data are exogenous, meaning they aren’t affected by factors that are changed during the simulation. They would assume that, during the period when they collected the network performance data, the choices the bitrate adaptation algorithm made did not affect those data.
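
    To make that conventional setup concrete, here is a minimal trace-driven simulator for adaptive bitrate video that replays a recorded throughput trace unchanged under a new policy; that is, it treats the trace as exogenous. The bitrate ladder, the toy policy, and the stall accounting are illustrative assumptions, not the paper’s.

    ```python
    # Minimal sketch of a conventional trace-driven simulator for adaptive bitrate (ABR)
    # video: a recorded per-chunk throughput trace is replayed unchanged under a new
    # policy, i.e., it is treated as exogenous.
    BITRATES_MBPS = [1.0, 2.5, 5.0, 8.0]   # candidate video qualities (illustrative)
    CHUNK_SECONDS = 4.0                    # seconds of video per chunk

    def simple_abr(throughput_estimate_mbps: float) -> float:
        """Toy rate-based policy: pick the highest bitrate below the current estimate."""
        feasible = [b for b in BITRATES_MBPS if b <= throughput_estimate_mbps]
        return feasible[-1] if feasible else BITRATES_MBPS[0]

    def replay_trace(throughput_trace_mbps, policy):
        """Replay a recorded throughput trace under a new policy and tally stalls."""
        rebuffer, estimate = 0.0, throughput_trace_mbps[0]
        for observed in throughput_trace_mbps:
            bitrate = policy(estimate)
            download_time = bitrate * CHUNK_SECONDS / observed   # seconds to fetch the chunk
            rebuffer += max(0.0, download_time - CHUNK_SECONDS)  # crude stall accounting
            estimate = 0.8 * estimate + 0.2 * observed           # smoothed throughput estimate
        return rebuffer

    trace = [3.1, 2.8, 1.2, 4.0, 5.5]  # Mbps, collected while some *other* ABR policy ran
    print("rebuffering (s):", replay_trace(trace, simple_abr))
    ```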

    But this is often a false assumption that results in biases about the behavior of new algorithms, making the simulation invalid, Alizadeh explains.

    “We recognized, and others have recognized, that this way of doing simulation can induce errors. But I don’t think people necessarily knew how significant those errors could be,” he says.

    To develop a solution, Alizadeh and his collaborators framed the issue as a causal inference problem. To collect an unbiased trace, one must understand the different causes that affect the observed data. Some causes are intrinsic to a system, while others are affected by the actions being taken.

    In the video streaming example, network performance is affected by the choices the bitrate adaptation algorithm made — but it’s also affected by intrinsic elements, like network capacity.

    “Our task is to disentangle these two effects, to try to understand what aspects of the behavior we are seeing are intrinsic to the system and how much of what we are observing is based on the actions that were taken. If we can disentangle these two effects, then we can do unbiased simulations,” he says.

    Learning from data

    But researchers often cannot directly observe intrinsic properties. This is where the new tool, called CausalSim, comes in. The algorithm can learn the underlying characteristics of a system using only the trace data.

    CausalSim takes trace data that were collected through a randomized control trial, and estimates the underlying functions that produced those data. The model tells the researchers, under the exact same underlying conditions that a user experienced, how a new algorithm would change the outcome.

    With a typical trace-driven simulator, bias might lead a researcher to select a worse-performing algorithm, even though the simulation indicates it should be better. CausalSim helps researchers select the best algorithm that was tested.

    The MIT researchers observed this in practice. When they used CausalSim to design an improved bitrate adaptation algorithm, it led them to select a new variant that had a stall rate that was nearly 1.4 times lower than a well-accepted competing algorithm, while achieving the same video quality. The stall rate is the amount of time a user spent rebuffering the video.

    By contrast, an expert-designed trace-driven simulator predicted the opposite. It indicated that this new variant should cause a stall rate that was nearly 1.3 times higher. The researchers tested the algorithm on real-world video streaming and confirmed that CausalSim was correct.

    “The gains we were getting in the new variant were very close to CausalSim’s prediction, while the expert simulator was way off. This is really exciting because this expert-designed simulator has been used in research for the past decade. If CausalSim can so clearly be better than this, who knows what we can do with it?” says Hamadanian.

    During a 10-month experiment, CausalSim consistently improved simulation accuracy, resulting in algorithms that made about half as many errors as those designed using baseline methods.

    In the future, the researchers want to apply CausalSim to situations where randomized control trial data are not available or where it is especially difficult to recover the causal dynamics of the system. They also want to explore how to design and monitor systems to make them more amenable to causal analysis.

  • Drones navigate unseen environments with liquid neural networks

    In the vast, expansive skies where birds once ruled supreme, a new crop of aviators is taking flight. These pioneers of the air are not living creatures, but rather a product of deliberate innovation: drones. But these aren’t your typical flying bots, humming around like mechanical bees. Rather, they’re avian-inspired marvels that soar through the sky, guided by liquid neural networks to navigate ever-changing and unseen environments with precision and ease.

    Inspired by the adaptable nature of organic brains, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have introduced a method for robust flight navigation agents to master vision-based fly-to-target tasks in intricate, unfamiliar environments. The liquid neural networks, which can continuously adapt to new data inputs, showed prowess in making reliable decisions in unknown domains like forests, urban landscapes, and environments with added noise, rotation, and occlusion. These adaptable models, which outperformed many state-of-the-art counterparts in navigation tasks, could enable potential real-world drone applications like search and rescue, delivery, and wildlife monitoring.

    The researchers’ recent study, published today in Science Robotics, details how this new breed of agents can adapt to significant distribution shifts, a long-standing challenge in the field. The team’s new class of machine-learning algorithms, however, captures the causal structure of tasks from high-dimensional, unstructured data, such as pixel inputs from a drone-mounted camera. These networks can then extract crucial aspects of a task (i.e., understand the task at hand) and ignore irrelevant features, allowing acquired navigation skills to transfer targets seamlessly to new environments.

    “We are thrilled by the immense potential of our learning-based control approach for robots, as it lays the groundwork for solving problems that arise when training in one environment and deploying in a completely distinct environment without additional training,” says Daniela Rus, CSAIL director and the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT. “Our experiments demonstrate that we can effectively teach a drone to locate an object in a forest during summer, and then deploy the model in winter, with vastly different surroundings, or even in urban settings, with varied tasks such as seeking and following. This adaptability is made possible by the causal underpinnings of our solutions. These flexible algorithms could one day aid in decision-making based on data streams that change over time, such as medical diagnosis and autonomous driving applications.”

    A daunting challenge was at the forefront: Do machine-learning systems understand the task they are given from data when flying drones to an unlabeled object? And, would they be able to transfer their learned skill and task to new environments with drastic changes in scenery, such as flying from a forest to an urban landscape? What’s more, unlike the remarkable abilities of our biological brains, deep learning systems struggle with capturing causality, frequently over-fitting their training data and failing to adapt to new environments or changing conditions. This is especially troubling for resource-limited embedded systems, like aerial drones, that need to traverse varied environments and respond to obstacles instantaneously. 

    The liquid networks, in contrast, offer promising preliminary indications of their capacity to address this crucial weakness in deep learning systems. The team’s system was first trained on data collected by a human pilot, to see how they transferred learned navigation skills to new environments under drastic changes in scenery and conditions. Unlike traditional neural networks that only learn during the training phase, the liquid neural net’s parameters can change over time, making them not only interpretable, but more resilient to unexpected or noisy data. 
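
    For intuition, here is a simplified, continuous-time-flavored recurrent cell whose state relaxes toward an input-conditioned target at an input-dependent rate, so its dynamics shift with the data it sees. This is an illustrative sketch in PyTorch, not the architecture trained in the study.

    ```python
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class TinyLiquidCell(nn.Module):
        """Simplified continuous-time-style recurrent cell: the hidden state decays
        toward an input-conditioned target at an input-dependent rate, so the cell's
        dynamics (not just its weights) depend on what it currently observes."""
        def __init__(self, input_size: int, hidden_size: int):
            super().__init__()
            self.rate = nn.Linear(input_size + hidden_size, hidden_size)    # decay-rate head
            self.target = nn.Linear(input_size + hidden_size, hidden_size)  # target-state head

        def forward(self, x, h, dt: float = 0.1):
            z = torch.cat([x, h], dim=-1)
            keep = torch.exp(-F.softplus(self.rate(z)) * dt)  # fraction of old state retained
            return keep * h + (1.0 - keep) * torch.tanh(self.target(z))

    # Roll the cell over a few dummy feature vectors (e.g., embeddings of camera frames).
    cell = TinyLiquidCell(input_size=16, hidden_size=32)
    h = torch.zeros(1, 32)
    for _ in range(5):
        h = cell(torch.randn(1, 16), h)
    print(h.shape)  # torch.Size([1, 32])
    ```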

    In a series of quadrotor closed-loop control experiments, the drones underwent range tests, stress tests, target rotation and occlusion, hiking with adversaries, triangular loops between objects, and dynamic target tracking. They tracked moving targets, and executed multi-step loops between objects in never-before-seen environments, surpassing performance of other cutting-edge counterparts. 

    The team believes that the ability to learn from limited expert data and understand a given task while generalizing to new environments could make autonomous drone deployment more efficient, cost-effective, and reliable. Liquid neural networks, they noted, could enable autonomous air mobility drones to be used for environmental monitoring, package delivery, autonomous vehicles, and robotic assistants. 

    “The experimental setup presented in our work tests the reasoning capabilities of various deep learning systems in controlled and straightforward scenarios,” says MIT CSAIL Research Affiliate Ramin Hasani. “There is still so much room left for future research and development on more complex reasoning challenges for AI systems in autonomous navigation applications, which has to be tested before we can safely deploy them in our society.”

    “Robust learning and performance in out-of-distribution tasks and scenarios are some of the key problems that machine learning and autonomous robotic systems have to conquer to make further inroads in society-critical applications,” says Alessio Lomuscio, professor of AI safety in the Department of Computing at Imperial College London. “In this context, the performance of liquid neural networks, a novel brain-inspired paradigm developed by the authors at MIT, reported in this study is remarkable. If these results are confirmed in other experiments, the paradigm here developed will contribute to making AI and robotic systems more reliable, robust, and efficient.”

    Clearly, the sky is no longer the limit, but rather a vast playground for the boundless possibilities of these airborne marvels. 

    Hasani and PhD student Makram Chahine; Patrick Kao ’22, MEng ’22; and PhD student Aaron Ray SM ’21 wrote the paper with Ryan Shubert ’20, MEng ’22; MIT postdocs Mathias Lechner and Alexander Amini; and Rus.

    This research was supported, in part, by Schmidt Futures, the U.S. Air Force Research Laboratory, the U.S. Air Force Artificial Intelligence Accelerator, and the Boeing Co.

  • A new chip for decoding data transmissions demonstrates record-breaking energy efficiency

    Imagine using an online banking app to deposit money into your account. Like all information sent over the internet, those communications could be corrupted by noise that inserts errors into the data.

    To overcome this problem, senders encode data before they are transmitted, and then a receiver uses a decoding algorithm to correct errors and recover the original message. In some instances, data are received with reliability information that helps the decoder figure out which parts of a transmission are likely errors.

    Researchers at MIT and elsewhere have developed a decoder chip that employs a new statistical model to use this reliability information in a way that is much simpler and faster than conventional techniques.

    Their chip uses a universal decoding algorithm the team previously developed, which can unravel any error-correcting code. Typically, decoding hardware can only process one particular type of code. This new, universal decoder chip has broken the record for energy-efficient decoding, performing between 10 and 100 times better than other hardware.

    This advance could enable mobile devices with fewer chips, since they would no longer need separate hardware for multiple codes. This would reduce the amount of material needed for fabrication, cutting costs and improving sustainability. By making the decoding process less energy intensive, the chip could also improve device performance and lengthen battery life. It could be especially useful for demanding applications like augmented and virtual reality and 5G networks.

    “This is the first time anyone has broken below the 1 picojoule-per-bit barrier for decoding. That is roughly the same amount of energy you need to transmit a bit inside the system. It had been a big symbolic threshold, but it also changes the balance in the receiver of what might be the most pressing part from an energy perspective — we can move that away from the decoder to other elements,” says Muriel Médard, the School of Science NEC Professor of Software Science and Engineering, a professor in the Department of Electrical Engineering and Computer Science, and a co-author of a paper presenting the new chip.

    Médard’s co-authors include lead author Arslan Riaz, a graduate student at Boston University (BU); Rabia Tugce Yazicigil, assistant professor of electrical and computer engineering at BU; and Ken R. Duffy, then director of the Hamilton Institute at Maynooth University and now a professor at Northeastern University, as well as others from MIT, BU, and Maynooth University. The work is being presented at the International Solid-State Circuits Conference.

    Smarter sorting

    Digital data are transmitted over a network in the form of bits (0s and 1s). A sender encodes data by adding an error-correcting code, which is a redundant string of 0s and 1s that can be viewed as a hash. Information about this hash is held in a specific code book. A decoding algorithm at the receiver, designed for this particular code, uses its code book and the hash structure to retrieve the original information, which may have been jumbled by noise. Since each algorithm is code-specific, and most require dedicated hardware, a device would need many chips to decode different codes.

    The researchers previously demonstrated GRAND (Guessing Random Additive Noise Decoding), a universal decoding algorithm that can crack any code. GRAND works by guessing the noise that affected the transmission, subtracting that noise pattern from the received data, and then checking what remains in a code book. It guesses a series of noise patterns in the order they are likely to occur.

    Data are often received with reliability information, also called soft information, that helps a decoder figure out which pieces are errors. The new decoding chip, called ORBGRAND (Ordered Reliability Bits GRAND), uses this reliability information to sort data based on how likely each bit is to be an error.

    But it isn’t as simple as ordering single bits. While the most unreliable bit might be the likeliest error, perhaps the third and fourth most unreliable bits together are as likely to be an error as the seventh-most unreliable bit. ORBGRAND uses a new statistical model that can sort bits in this fashion, considering that multiple bits together are as likely to be an error as some single bits.
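
    One way to realize that kind of ordering, sketched below under the common assumption that each bit’s reliability rank summarizes its error likelihood, is to rank bits from least to most reliable and enumerate flip patterns in increasing total rank, so flipping the third and fourth least reliable bits (3 + 4 = 7) is scheduled alongside flipping the seventh least reliable bit alone. This illustrates the ordering idea only, not the chip’s implementation.

    ```python
    from itertools import combinations

    def patterns_by_rank_weight(reliability, max_weight):
        """Yield candidate error patterns (sets of bit indices) in increasing total
        reliability rank of the flipped bits (rank 1 = least reliable bit)."""
        order = sorted(range(len(reliability)), key=lambda i: reliability[i])  # least reliable first
        n = len(order)
        for weight in range(1, max_weight + 1):
            for k in range(1, n + 1):
                if k * (k + 1) // 2 > weight:   # smallest sum of k distinct ranks exceeds weight
                    break
                for ranks in combinations(range(1, min(weight, n) + 1), k):
                    if sum(ranks) == weight:
                        yield frozenset(order[r - 1] for r in ranks)

    # Soft reliabilities for an 8-bit block (smaller = less trustworthy), illustrative only.
    reliability = [0.9, 0.2, 0.7, 0.4, 0.95, 0.1, 0.6, 0.8]
    for i, pattern in enumerate(patterns_by_rank_weight(reliability, max_weight=5)):
        print(i, sorted(pattern))  # a full decoder would flip these bits and check the codebook
    ```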

    “If your car isn’t working, soft information might tell you that it is probably the battery. But if it isn’t the battery alone, maybe it is the battery and the alternator together that are causing the problem. This is how a rational person would troubleshoot — you’d say that it could actually be these two things together before going down the list to something that is much less likely,” Médard says.

    This is a much more efficient approach than that of traditional decoders, which instead rely on the code structure and are generally designed for worst-case performance.

    “With a traditional decoder, you’d pull out the blueprint of the car and examine each and every piece. You’ll find the problem, but it will take you a long time and you’ll get very frustrated,” Médard explains.

    ORBGRAND stops sorting as soon as a code word is found, which is often very soon. The chip also employs parallelization, generating and testing multiple noise patterns simultaneously so it finds the code word faster. Because the decoder stops working once it finds the code word, its energy consumption stays low even though it runs multiple processes simultaneously.

    Record-breaking efficiency

    When they compared their approach to other chips, ORBGRAND decoded with maximum accuracy while consuming only 0.76 picojoules of energy per bit, breaking the previous performance record. ORBGRAND consumes between 10 and 100 times less energy than other devices.

    One of the biggest challenges of developing the new chip came from this reduced energy consumption, Médard says. With ORBGRAND, generating noise sequences is now so energy-efficient that other processes the researchers hadn’t focused on before, like checking the code word in a code book, consume most of the effort.

    “Now, this checking process, which is like turning on the car to see if it works, is the hardest part. So, we need to find more efficient ways to do that,” she says.

    The team is also exploring ways to change the modulation of transmissions so they can take advantage of the improved efficiency of the ORBGRAND chip. They also plan to see how their technique could be utilized to more efficiently manage multiple transmissions that overlap.

    The research is funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and Science Foundation Ireland.

  • Researchers discover major roadblock in alleviating network congestion

    When users want to send data over the internet faster than the network can handle, congestion can occur — the same way traffic congestion snarls the morning commute into a big city.

    Computers and devices that transmit data over the internet break the data down into smaller packets and use a special algorithm to decide how fast to send those packets. These congestion control algorithms seek to fully discover and utilize available network capacity while sharing it fairly with other users who may be sharing the same network. These algorithms try to minimize delay caused by data waiting in queues in the network.

    Over the past decade, researchers in industry and academia have developed several algorithms that attempt to achieve high rates while controlling delays. Some of these, such as the BBR algorithm developed by Google, are now widely used by many websites and applications.

    But a team of MIT researchers has discovered that these algorithms can be deeply unfair. In a new study, they show there will always be a network scenario where at least one sender receives almost no bandwidth compared to other senders; in other words, a problem known as “starvation” cannot be avoided.

    “What is really surprising about this paper and the results is that when you take into account the real-world complexity of network paths and all the things they can do to data packets, it is basically impossible for delay-controlling congestion control algorithms to avoid starvation using current methods,” says Mohammad Alizadeh, associate professor of electrical engineering and computer science (EECS).

    While Alizadeh and his co-authors weren’t able to find a traditional congestion control algorithm that could avoid starvation, there may be algorithms in a different class that could prevent this problem. Their analysis also suggests that changing how these algorithms work, so that they allow for larger variations in delay, could help prevent starvation in some network situations.

    Alizadeh wrote the paper with first author and EECS graduate student Venkat Arun and senior author Hari Balakrishnan, the Fujitsu Professor of Computer Science and Artificial Intelligence. The research will be presented at the ACM Special Interest Group on Data Communications (SIGCOMM) conference.

    Controlling congestion

    Congestion control is a fundamental problem in networking that researchers have been trying to tackle since the 1980s.

    A user’s computer does not know how fast to send data packets over the network because it lacks information, such as the quality of the network connection or how many other senders are using the network. Sending packets too slowly makes poor use of the available bandwidth. But sending them too quickly can overwhelm the network, and in doing so, packets will start to get dropped. These packets must be resent, which leads to longer delays. Delays can also be caused by packets waiting in queues for a long time.

    Congestion control algorithms use packet losses and delays as signals to infer congestion and decide how fast to send data. But the internet is complicated, and packets can be delayed and lost for reasons unrelated to network congestion. For instance, data could be held up in a queue along the way and then released with a burst of other packets, or the receiver’s acknowledgement might be delayed. The authors call delays that are not caused by congestion “jitter.”

    Even if a congestion control algorithm measures delay perfectly, it can’t tell the difference between delay caused by congestion and delay caused by jitter. Delay caused by jitter is unpredictable and confuses the sender. Because of this ambiguity, users start estimating delay differently, which causes them to send packets at unequal rates. Eventually, this leads to a situation where starvation occurs and someone gets shut out completely, Arun explains.
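
    A toy simulation, not the paper’s formal model, makes the mechanism concrete: two senders share a link and back off whenever their measured delay exceeds a target, but one sender’s measurements include extra non-congestive jitter, so the two settle at very unequal rates. All constants are illustrative.

    ```python
    import random

    random.seed(1)
    CAPACITY = 100.0      # link capacity, packets per second
    TARGET_DELAY = 0.05   # queueing delay (seconds) each sender tries to stay under
    DT = 0.01             # simulation step, seconds

    def measured_delay(true_delay, jitter_max):
        """True queueing delay plus non-congestive 'jitter' the sender cannot tell apart."""
        return true_delay + random.uniform(0.0, jitter_max)

    def simulate(steps=20000, jitter=(0.0, 0.08)):
        rates, queue = [10.0, 10.0], 0.0
        for _ in range(steps):
            queue = max(0.0, queue + (sum(rates) - CAPACITY) * DT)  # fluid queue model
            true_delay = queue / CAPACITY
            for i in range(2):
                if measured_delay(true_delay, jitter[i]) > TARGET_DELAY:
                    rates[i] *= 0.95   # looks congested: back off
                else:
                    rates[i] += 0.5    # looks fine: probe for more bandwidth
        return rates

    print(simulate())  # the high-jitter sender ends up with a small fraction of the link
    ```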

    “We started the project because we lacked a theoretical understanding of congestion control behavior in the presence of jitter. To place it on a firmer theoretical footing, we built a mathematical model that was simple enough to think about, yet able to capture some of the complexities of the internet. It has been very rewarding to have math tell us things we didn’t know and that have practical relevance,” he says.

    Studying starvation

    The researchers fed their mathematical model to a computer, gave it a series of commonly used congestion control algorithms, and asked the computer to find an algorithm that could avoid starvation, using their model.

    “We couldn’t do it. We tried every algorithm that we are aware of, and some new ones we made up. Nothing worked. The computer always found a situation where some people get all the bandwidth and at least one person gets basically nothing,” Arun says.

    The researchers were surprised by this result, especially since these algorithms are widely believed to be reasonably fair. They started suspecting that it may not be possible to avoid starvation, an extreme form of unfairness. This motivated them to define a class of algorithms they call “delay-convergent algorithms” that they proved will always suffer from starvation under their network model. All existing congestion control algorithms that control delay (that the researchers are aware of) are delay-convergent.

    The fact that such simple failure modes of these widely used algorithms remained unknown for so long illustrates how difficult it is to understand algorithms through empirical testing alone, Arun adds. It underscores the importance of a solid theoretical foundation.

    But all hope is not lost. While all the algorithms they tested failed, there may be other algorithms, outside the delay-convergent class, that could avoid starvation. This suggests that one way to fix the problem might be to design congestion control algorithms that vary the delay range more widely, so the range is larger than any delay that might occur due to jitter in the network.

    “To control delays, algorithms have tried to also bound the variations in delay about a desired equilibrium, but there is nothing wrong in potentially creating greater delay variation to get better measurements of congestive delays. It is just a new design philosophy you would have to adopt,” Balakrishnan adds.

    Now, the researchers want to keep pushing to see if they can find or build an algorithm that will eliminate starvation. They also want to apply this approach of mathematical modeling and computational proofs to other thorny, unsolved problems in networked systems.

    “We are increasingly reliant on computer systems for very critical things, and we need to put their reliability on a firmer conceptual footing. We’ve shown the surprising things you can discover when you put in the time to come up with these formal specifications of what the problem actually is,” says Alizadeh.

    The NASA University Leadership Initiative (grant #80NSSC20M0163) provided funds to assist the authors with their research, but the research paper solely reflects the opinions and conclusions of its authors and not any NASA entity. This work was also partially funded by the National Science Foundation, award number 1751009.

  • Security tool guarantees privacy in surveillance footage

    Surveillance cameras have an identity problem, fueled by an inherent tension between utility and privacy. As these powerful little devices have cropped up seemingly everywhere, the use of machine learning tools has automated video content analysis at a massive scale — but with increasing mass surveillance, there are currently no legally enforceable rules to limit privacy invasions. 

    Security cameras can do a lot — they’ve become smarter and supremely more competent than their ghosts of grainy pictures past, the ofttimes “hero tool” in crime media. (“See that little blurry blue blob in the right hand corner of that densely populated corner — we got him!”) Now, video surveillance can help health officials measure the fraction of people wearing masks, enable transportation departments to monitor the density and flow of vehicles, bikes, and pedestrians, and provide businesses with a better understanding of shopping behaviors. But why has privacy remained a weak afterthought? 

    The status quo is to retrofit video with blurred faces or black boxes. Not only does this prevent analysts from asking some genuine queries (e.g., Are people wearing masks?), it also doesn’t always work; the system may miss some faces and leave them unblurred for the world to see. Dissatisfied with this status quo, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), in collaboration with other institutions, came up with a system to better guarantee privacy in video footage from surveillance cameras. Called “Privid,” the system lets analysts submit video data queries, and adds a little bit of noise (extra data) to the end result to ensure that an individual can’t be identified. The system builds on a formal definition of privacy — “differential privacy” — which allows access to aggregate statistics about private data without revealing personally identifiable information.

    Typically, analysts would just have access to the entire video to do whatever they want with it, but Privid makes sure the video isn’t a free buffet. Honest analysts can get access to the information they need, but that access is restrictive enough that malicious analysts can’t do too much with it. To enable this, rather than running the code over the entire video in one shot, Privid breaks the video into small pieces and runs processing code over each chunk. Instead of getting results back from each piece, the segments are aggregated, and that additional noise is added. (There’s also information on the error bound you’re going to get on your result — maybe a 2 percent error margin, given the extra noisy data added). 

    For example, the code might output the number of people observed in each video chunk, and the aggregation might be the “sum,” to count the total number of people wearing face coverings, or the “average” to estimate the density of crowds. 
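
    The chunk-then-aggregate-then-add-noise pattern can be sketched as follows, using the standard Laplace mechanism from differential privacy. The bound on how many chunks one person can appear in is an assumed stand-in for Privid’s duration-based accounting, and the per-chunk counting function stands in for arbitrary analyst code.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def run_query_on_chunk(chunk) -> float:
        """Stand-in for the analyst's per-chunk code (in Privid, arbitrary video
        processing); here each 'chunk' is already reduced to a pedestrian count."""
        return float(chunk)

    def private_sum(chunks, chunks_per_person_bound: int, epsilon: float) -> float:
        """Aggregate per-chunk results, then add Laplace noise scaled to how much any
        one person could influence the total (the assumed sensitivity bound)."""
        total = sum(run_query_on_chunk(c) for c in chunks)
        sensitivity = chunks_per_person_bound   # one person appears in at most this many chunks
        return total + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Ten one-minute chunks with toy per-chunk pedestrian counts.
    chunk_counts = [12, 9, 15, 11, 8, 14, 10, 13, 9, 12]
    print(private_sum(chunk_counts, chunks_per_person_bound=3, epsilon=1.0))
    ```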

    Privid allows analysts to use their own deep neural networks that are commonplace for video analytics today. This gives analysts the flexibility to ask questions that the designers of Privid did not anticipate. Across a variety of videos and queries, Privid was accurate within 79 to 99 percent of a non-private system.

    “We’re at a stage right now where cameras are practically ubiquitous. If there’s a camera on every street corner, every place you go, and if someone could actually process all of those videos in aggregate, you can imagine that entity building a very precise timeline of when and where a person has gone,” says MIT CSAIL PhD student Frank Cangialosi, the lead author on a paper about Privid. “People are already worried about location privacy with GPS — video data in aggregate could capture not only your location history, but also moods, behaviors, and more at each location.”

    Privid introduces a new notion of “duration-based privacy,” which decouples the definition of privacy from its enforcement — with obfuscation, if your privacy goal is to protect all people, the enforcement mechanism needs to do some work to find the people to protect, which it may or may not do perfectly. With this mechanism, you don’t need to fully specify everything, and you’re not hiding more information than you need to. 

    Let’s say we have a video overlooking a street. Two analysts, Alice and Bob, both claim they want to count the number of people that pass by each hour, so they submit a video processing module and ask for a sum aggregation.

    The first analyst is the city planning department, which hopes to use this information to understand footfall patterns and plan sidewalks for the city. Their model counts people and outputs this count for each video chunk.

    The other analyst is malicious. They hope to identify every time “Charlie” passes by the camera. Their model only looks for Charlie’s face, and outputs a large number if Charlie is present (i.e., the “signal” they’re trying to extract), or zero otherwise. Their hope is that the sum will be non-zero if Charlie was present. 

    From Privid’s perspective, these two queries look identical. It’s hard to reliably determine what their models might be doing internally, or what the analyst hopes to use the data for. This is where the noise comes in. Privid executes both of the queries, and adds the same amount of noise for each. In the first case, because Alice was counting all people, this noise will only have a small impact on the result, but likely won’t impact the usefulness. 

    In the second case, since Bob was looking for a specific signal (Charlie was only visible for a few chunks), the noise is enough to prevent them from knowing if Charlie was there or not. If they see a non-zero result, it might be because Charlie was actually there, or because the model outputs “zero,” but the noise made it non-zero. Privid didn’t need to know anything about when or where Charlie appeared; the system just needed a rough upper bound on how long Charlie might appear, which is easier to specify than the exact locations that prior methods rely on.

    The challenge is determining how much noise to add — Privid wants to add just enough to hide everyone, but not so much that it would be useless for analysts. Adding noise to the data and insisting on queries over time windows means that your result isn’t going to be as accurate as it could be, but the results are still useful while providing better privacy. 

    Cangialosi wrote the paper with Princeton PhD student Neil Agarwal, MIT CSAIL PhD student Venkat Arun, assistant professor at the University of Chicago Junchen Jiang, assistant professor at Rutgers University and former MIT CSAIL postdoc Srinivas Narayana, associate professor at Rutgers University Anand Sarwate, and assistant professor at Princeton University Ravi Netravali SM ’15, PhD ’18. Cangialosi will present the paper at the USENIX Symposium on Networked Systems Design and Implementation Conference in April in Renton, Washington.

    This work was partially supported by a Sloan Research Fellowship and National Science Foundation grants.

  • A universal system for decoding any type of data sent across a network

    Every piece of data that travels over the internet — from paragraphs in an email to 3D graphics in a virtual reality environment — can be altered by the noise it encounters along the way, such as electromagnetic interference from a microwave or Bluetooth device. The data are coded so that when they arrive at their destination, a decoding algorithm can undo the negative effects of that noise and retrieve the original data.

    Since the 1950s, most error-correcting codes and decoding algorithms have been designed together. Each code had a structure that corresponded with a particular, highly complex decoding algorithm, which often required the use of dedicated hardware.

    Researchers at MIT, Boston University, and Maynooth University in Ireland have now created the first silicon chip that is able to decode any code, regardless of its structure, with maximum accuracy, using a universal decoding algorithm called Guessing Random Additive Noise Decoding (GRAND). By eliminating the need for multiple, computationally complex decoders, GRAND enables increased efficiency that could have applications in augmented and virtual reality, gaming, 5G networks, and connected devices that rely on processing a high volume of data with minimal delay.

    The research at MIT is led by Muriel Médard, the Cecil H. and Ida Green Professor in the Department of Electrical Engineering and Computer Science, and was co-authored by Amit Solomon and Wei Ann, both graduate students at MIT; Rabia Tugce Yazicigil, assistant professor of electrical and computer engineering at Boston University; Arslan Riaz and Vaibhav Bansal, both graduate students at Boston University; Ken R. Duffy, director of the Hamilton Institute at the National University of Ireland at Maynooth; and Kevin Galligan, a Maynooth graduate student. The research will be presented at the European Solid-State Device Research and Circuits Conference next week.

    Focus on noise

    One way to think of these codes is as redundant hashes (in this case, a series of 1s and 0s) added to the end of the original data. The rules for the creation of that hash are stored in a specific codebook.

    As the encoded data travel over a network, they are affected by noise, or energy that disrupts the signal, which is often generated by other electronic devices. When that coded data and the noise that affected them arrive at their destination, the decoding algorithm consults its codebook and uses the structure of the hash to guess what the stored information is.

    Instead, GRAND works by guessing the noise that affected the message, and uses the noise pattern to deduce the original information. GRAND generates a series of noise sequences in the order they are likely to occur, subtracts them from the received data, and checks to see if the resulting codeword is in a codebook.

    While the noise appears random in nature, it has a probabilistic structure that allows the algorithm to guess what it might be.
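
    A minimal sketch of that guess-and-check loop appears below for a tiny binary code, assuming independent bit flips so noise patterns are tried in order of increasing Hamming weight. The (7,4) Hamming code and its parity-check matrix simply stand in for a codebook the decoder can consult; this is an illustration, not the chip’s implementation.

    ```python
    from itertools import combinations
    import numpy as np

    # Parity-check matrix of the (7,4) Hamming code; any linear code's H would do.
    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])

    def in_codebook(word: np.ndarray) -> bool:
        """A word is a codeword iff it satisfies every parity check."""
        return not np.any(H @ word % 2)

    def grand_decode(received: np.ndarray, max_weight: int = 3):
        """Guess noise patterns from most to least likely (here: by Hamming weight),
        remove each guess, and stop at the first result found in the codebook."""
        n = len(received)
        for weight in range(max_weight + 1):        # weight 0 means "no noise"
            for flips in combinations(range(n), weight):
                guess = received.copy()
                guess[list(flips)] ^= 1             # subtract the guessed noise (XOR)
                if in_codebook(guess):
                    return guess, flips
        return None, None

    sent = np.array([1, 0, 1, 1, 0, 1, 0])          # a valid Hamming codeword
    received = sent.copy()
    received[4] ^= 1                                # the channel flips one bit
    decoded, noise = grand_decode(received)
    print(decoded, "noise guess:", noise)           # recovers `sent`; noise at bit 4
    ```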

    “In a way, it is similar to troubleshooting. If someone brings their car into the shop, the mechanic doesn’t start by mapping the entire car to blueprints. Instead, they start by asking, ‘What is the most likely thing to go wrong?’ Maybe it just needs gas. If that doesn’t work, what’s next? Maybe the battery is dead?” Médard says.

    Novel hardware

    The GRAND chip uses a three-tiered structure, starting with the simplest possible solutions in the first stage and working up to longer and more complex noise patterns in the two subsequent stages. Each stage operates independently, which increases the throughput of the system and saves power.

    The device is also designed to switch seamlessly between two codebooks. It contains two static random-access memory chips: one cracks codewords while the other loads a new codebook, then switches over to decoding without any downtime.

    The researchers tested the GRAND chip and found it could effectively decode any moderate redundancy code up to 128 bits in length, with only about a microsecond of latency.

    Médard and her collaborators had previously demonstrated the success of the algorithm, but this new work showcases the effectiveness and efficiency of GRAND in hardware for the first time.

    Developing hardware for the novel decoding algorithm required the researchers to first toss aside their preconceived notions, Médard says.

    “We couldn’t go out and reuse things that had already been done. This was like a complete whiteboard. We had to really think about every single component from scratch. It was a journey of reconsideration. And I think when we do our next chip, there will be things with this first chip that we’ll realize we did out of habit or assumption that we can do better,” she says.

    A chip for the future

    Since GRAND only uses codebooks for verification, the chip not only works with legacy codes but could also be used with codes that haven’t even been introduced yet.

    In the lead-up to 5G implementation, regulators and communications companies struggled to find consensus as to which codes should be used in the new network. Regulators ultimately chose to use two types of traditional codes for 5G infrastructure in different situations. Using GRAND could eliminate the need for that rigid standardization in the future, Médard says.

    The GRAND chip could even open the field of coding to a wave of innovation.

    “For reasons I’m not quite sure of, people approach coding with awe, like it is black magic. The process is mathematically nasty, so people just use codes that already exist. I’m hoping this will recast the discussion so it is not so standards-oriented, enabling people to use codes that already exist and create new codes,” she says.

    Moving forward, Médard and her collaborators plan to tackle the problem of soft detection with a retooled version of the GRAND chip. In soft detection, the received data are less precise.

    They also plan to test the ability of GRAND to crack longer, more complex codes and adjust the structure of the silicon chip to improve its energy efficiency.

    The research was funded by the Battelle Memorial Institute and Science Foundation Ireland.