More stories

  • Data flow’s decisive role on the global stage

    In 2016, Meicen Sun came to a profound realization: “The control of digital information will lie at the heart of all the big questions and big contentions in politics.” A graduate student in her final year of study who is specializing in international security and the political economy of technology, Sun vividly recalls the emergence of the internet “as a democratizing force, an opener, an equalizer,” helping give rise to the Arab Spring. But she was also profoundly struck when nations in the Middle East and elsewhere curbed internet access to throttle citizens’ efforts to speak and mobilize freely.

    During her undergraduate and graduate studies, which came to focus on China and its expanding global role, Sun became convinced that digital constraints initially intended to prevent the free flow of ideas were also having enormous and growing economic impacts.

    “With an exceptionally high mobile internet adoption rate and the explosion of indigenous digital apps, China’s digital economy was surging, helping to drive the nation’s broader economic growth and international competitiveness,” Sun says. “Yet at the same time, the country maintained the most tightly controlled internet ecosystem in the world.”

    Sun set out to explore this apparent paradox in her dissertation. Her research to date has yielded both novel findings and troubling questions.  

    “Through its control of the internet, China has in effect provided protectionist benefits to its own data-intensive domestic sectors,” she says. “If there is a benefit to imposing internet control, given the absence of effective international regulations, does this give authoritarian states an advantage in trade and national competitiveness?” Following this thread, Sun asks, “What might this mean for the future of democracy as the world grows increasingly dependent on digital technology?”

    Protect or innovate

    Early in her graduate program, classes in capitalism and technology and public policy, says Sun, “cemented for me the idea of data as a factor of production, and the importance of cross-border information flow in making a country innovative.” This central premise serves as a springboard for Sun’s doctoral studies.

    In a series of interconnected research papers using China as her primary case, she is examining the double-edged nature of internet limits. “They accord protectionist benefits to domestic data-internet-intensive sectors, on the one hand, but on the other, act as a potential longer-term deterrent to the country’s capacity to innovate.”

    To pursue her doctoral project, advised by professor of political science Kenneth Oye, Sun is extracting data from a multitude of sources, including a website that has been routinely testing web domain accessibility from within China since 2011. This allows her to pin down when and to what degree internet control occurs. She can then compare this information to publicly available records on the expansion or contraction of data-intensive industrial sectors, enabling her to correlate internet control to a sector’s performance.

    Sun has also compiled datasets for firm-level revenue, scientific citations, and patents that permit her to measure aspects of China’s innovation culture. In analyzing her data she leverages both quantitative and qualitative methods, including one co-developed by her dissertation co-advisor, associate professor of political science In Song Kim. Her initial analysis suggests internet control prevents scholars from accessing knowledge available on foreign websites, and that if sustained, such control could take a toll on the Chinese economy over time.

    Of particular concern is the possibility that the economic success that flows from strict internet controls, as exemplified by the Chinese model, may encourage the rise of similar practices among emerging states or those in political flux.

    “The grim implication of my research is that without international regulation on information flow restrictions, democracies will be at a disadvantage against autocracies,” she says. “No matter how short-term or narrow these curbs are, they confer concrete benefits on certain economic sectors.”

    Data, politics, and economy

    Sun got a quick start as a student of China and its role in the world. She was born in Xiamen, a coastal Chinese city across from Taiwan, to academic parents who cultivated her interest in international politics. “My dad would constantly talk to me about global affairs, and he was passionate about foreign policy,” says Sun.

    Eager for education and a broader view of the world, Sun took a scholarship at 15 to attend school in Singapore. “While this experience exposed me to a variety of new ideas and social customs, I felt the itch to travel even farther away, and to meet people with different backgrounds and viewpoints from mine,” she says.

    Sun attended Princeton University where, after two years sticking to her “comfort zone” — writing and directing plays and composing music for them — she underwent a process of intellectual transition. Political science classes opened a window onto a larger world to which she had long been connected: China’s behavior as a rising power and the shifting global landscape.

    She completed her undergraduate degree in politics, and followed up with a master’s degree in international relations at the University of Pennsylvania, where she focused on China-U.S. relations and China’s participation in international institutions. She was on the path to completing a PhD at Penn when, Sun says, “I became confident in my perception that digital technology, and especially information sharing, were becoming critically important factors in international politics, and I felt a strong desire to devote my graduate studies, and even my career, to studying these topics.”

    Certain that the questions she hoped to pursue could best be addressed through an interdisciplinary approach with those working on similar issues, Sun began her doctoral program anew at MIT.

    “Doer mindset”

    Sun is hopeful that her doctoral research will prove useful to governments, policymakers, and business leaders. “There are a lot of developing states actively shopping between data governance and development models for their own countries,” she says. “My findings around the pros and cons of information flow restrictions should be of interest to leaders in these places, and to trade negotiators and others dealing with the global governance of data and what a fair playing field for digital trade would be.”

    Sun has engaged directly with policy and industry experts through her fellowships with the World Economic Forum and the Pacific Forum. And she has embraced questions that touch on policy outside of her immediate research: Sun is collaborating with her dissertation co-advisor, MIT Sloan Professor Yasheng Huang, on a study of the political economy of artificial intelligence in China for the MIT Task Force on the Work of the Future.

    This year, as she writes her dissertation papers, Sun will be based at Georgetown University, where she has a Mortara Center Global Political Economy Project Predoctoral Fellowship. In Washington, she will continue her journey to becoming a “policy-minded scholar, a thinker with a doer mindset, whose findings have bearing on things that happen in the world.”

  • How quickly do algorithms improve?

    Algorithms are sort of like a parent to a computer. They tell the computer how to make sense of information so that the machine can, in turn, make something useful out of it.

    The more efficient the algorithm, the less work the computer has to do. For all of the technological progress in computing hardware, and the much debated lifespan of Moore’s Law, computer performance is only one side of the picture.

    Behind the scenes a second trend is happening: Algorithms are being improved, so in turn less computing power is needed. While algorithmic efficiency may have less of a spotlight, you’d definitely notice if your trusty search engine suddenly became one-tenth as fast, or if moving through big datasets felt like wading through sludge.

    This led scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to ask: How quickly do algorithms improve?  

    Existing data on this question were largely anecdotal, consisting of case studies of particular algorithms that were assumed to be representative of the broader scope. Faced with this dearth of evidence, the team set off to crunch data from 57 textbooks and more than 1,110 research papers, to trace the history of when algorithms got better. Some of the research papers directly reported how good new algorithms were, and others needed to be reconstructed by the authors using “pseudocode,” shorthand versions of the algorithm that describe the basic details.

    In total, the team looked at 113 “algorithm families,” sets of algorithms solving the same problem that had been highlighted as most important by computer science textbooks. For each of the 113, the team reconstructed its history, tracking each time a new algorithm was proposed for the problem and making special note of those that were more efficient. The algorithms ranged in performance and were separated by decades, stretching from the 1940s to now; on average, the team found eight algorithms per family, of which a couple improved the family’s efficiency. To share this assembled database of knowledge, the team also created Algorithm-Wiki.org.

    The scientists charted how quickly these families had improved, focusing on the most-analyzed feature of the algorithms — how fast they could guarantee to solve the problem (in computer speak: “worst-case time complexity”). What emerged was enormous variability, but also important insights on how transformative algorithmic improvement has been for computer science.

    For large computing problems, 43 percent of algorithm families had year-on-year improvements that were equal to or larger than the much-touted gains from Moore’s Law. In 14 percent of problems, the improvements in performance from algorithms vastly outpaced those that came from improved hardware. The gains from algorithm improvement were particularly large for big-data problems, so the importance of those advancements has grown in recent decades.

    The single biggest change that the authors observed came when an algorithm family transitioned from exponential to polynomial complexity. The amount of effort it takes to solve an exponential problem is like a person trying to guess a combination on a lock. If you only have a single 10-digit dial, the task is easy. With four dials like a bicycle lock, it’s hard enough that no one steals your bike, but still conceivable that you could try every combination. With 50, it’s almost impossible — it would take too many steps. Problems that have exponential complexity are like that for computers: As they get bigger they quickly outpace the ability of the computer to handle them. Finding a polynomial algorithm often solves that, making it possible to tackle problems in a way that no amount of hardware improvement can.
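
    As a rough illustration of that gap (the problem sizes and the cubic-time stand-in below are hypothetical, not drawn from the paper), a few lines of Python show how quickly exponential step counts dwarf polynomial ones:

```python
# Rough illustration only: compare a brute-force "try every combination"
# search (exponential in the number of dials) with a hypothetical
# polynomial-time algorithm for the same problem size.

def exponential_steps(n_dials: int) -> int:
    """Brute force on a lock with n dials of 10 digits each: 10**n combinations."""
    return 10 ** n_dials

def polynomial_steps(n_dials: int) -> int:
    """A hypothetical cubic-time algorithm, standing in for a polynomial solution."""
    return n_dials ** 3

for n in (1, 4, 50):
    print(f"n={n:>2}: exponential {exponential_steps(n):.2e} steps, "
          f"polynomial {polynomial_steps(n)} steps")
```

    At one dial both counts are tiny; at 50 dials the exponential count is astronomically large while the cubic count is only 125,000, which is why no amount of hardware improvement closes that kind of gap.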

    As rumblings of Moore’s Law coming to an end rapidly permeate global conversations, the researchers say that computing users will increasingly need to turn to areas like algorithms for performance improvements. The team says the findings confirm that historically, the gains from algorithms have been enormous, so the potential is there. But if gains come from algorithms instead of hardware, they’ll look different. Hardware improvement from Moore’s Law happens smoothly over time, whereas for algorithms the gains come in steps that are usually large but infrequent.

    “This is the first paper to show how fast algorithms are improving across a broad range of examples,” says Neil Thompson, an MIT research scientist at CSAIL and the Sloan School of Management and senior author on the new paper. “Through our analysis, we were able to say how many more tasks could be done using the same amount of computing power after an algorithm improved. As problems increase to billions or trillions of data points, algorithmic improvement becomes substantially more important than hardware improvement. In an era where the environmental footprint of computing is increasingly worrisome, this is a way to improve businesses and other organizations without the downside.”

    Thompson wrote the paper alongside MIT visiting student Yash Sherry. The paper is published in the Proceedings of the IEEE. The work was funded by the Tides Foundation and the MIT Initiative on the Digital Economy.

  • Exact symbolic artificial intelligence for faster, better assessment of AI fairness

    The justice system, banks, and private companies use algorithms to make decisions that have profound impacts on people’s lives. Unfortunately, those algorithms are sometimes biased — disproportionately impacting people of color as well as individuals in lower income classes when they apply for loans or jobs, or even when courts decide what bail should be set while a person awaits trial.

    MIT researchers have developed a new artificial intelligence programming language that can assess the fairness of algorithms more exactly, and more quickly, than available alternatives.

    Their Sum-Product Probabilistic Language (SPPL) is a probabilistic programming system. Probabilistic programming is an emerging field at the intersection of programming languages and artificial intelligence that aims to make AI systems much easier to develop, with early successes in computer vision, common-sense data cleaning, and automated data modeling. Probabilistic programming languages make it much easier for programmers to define probabilistic models and carry out probabilistic inference — that is, work backward to infer probable explanations for observed data.

    “There are previous systems that can solve various fairness questions. Our system is not the first; but because our system is specialized and optimized for a certain class of models, it can deliver solutions thousands of times faster,” says Feras Saad, a PhD student in electrical engineering and computer science (EECS) and first author on a recent paper describing the work. Saad adds that the speedups are substantial: The system can be up to 3,000 times faster than previous approaches.

    SPPL gives fast, exact solutions to probabilistic inference questions such as “How likely is the model to recommend a loan to someone over age 40?” or “Generate 1,000 synthetic loan applicants, all under age 30, whose loans will be approved.” These inference results are based on SPPL programs that encode probabilistic models of what kinds of applicants are likely, a priori, and also how to classify them. Fairness questions that SPPL can answer include “Is there a difference between the probability of recommending a loan to an immigrant and nonimmigrant applicant with the same socioeconomic status?” or “What’s the probability of a hire, given that the candidate is qualified for the job and from an underrepresented group?”
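
    To make that style of query concrete, here is a minimal, hypothetical sketch in Python — not SPPL code, and with invented numbers — that answers a loan question exactly by enumerating a toy model of applicants:

```python
# Hypothetical toy model, for illustration only (not SPPL, numbers invented).
# A prior over (age bracket, income bracket) plus a small decision rule play
# the role of a probabilistic program, and the query
# P(loan recommended | applicant over 40) is answered by exact enumeration.

prior = {
    ("under_40", "low"):  0.30,
    ("under_40", "high"): 0.25,
    ("over_40",  "low"):  0.20,
    ("over_40",  "high"): 0.25,
}  # probabilities sum to 1

def recommend_loan(age_bracket: str, income_bracket: str) -> bool:
    """Stand-in classifier, e.g. a tiny decision tree."""
    return income_bracket == "high" or age_bracket == "under_40"

p_over_40 = sum(p for (age, _), p in prior.items() if age == "over_40")
p_over_40_and_loan = sum(
    p for (age, inc), p in prior.items()
    if age == "over_40" and recommend_loan(age, inc)
)
print(p_over_40_and_loan / p_over_40)  # exactly 0.25 / 0.45, about 0.556
```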

    SPPL is different from most probabilistic programming languages, as SPPL only allows users to write probabilistic programs for which it can automatically deliver exact probabilistic inference results. SPPL also makes it possible for users to check how fast inference will be, and therefore avoid writing slow programs. In contrast, other probabilistic programming languages such as Gen and Pyro allow users to write down probabilistic programs where the only known ways to do inference are approximate — that is, the results include errors whose nature and magnitude can be hard to characterize.
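
    The practical difference is easy to see with a generic example (again, not SPPL code): a Monte Carlo estimate of even a simple probability fluctuates from run to run, while the exact value is a single fixed number.

```python
# Illustration of exact vs. approximate inference on a toy query:
# P(X > 1) for a standard normal X, by closed form and by sampling.

import random
from math import erf, sqrt

exact = 0.5 * (1 - erf(1 / sqrt(2)))  # exact value, about 0.1587

random.seed(0)
estimates = []
for _ in range(5):
    samples = [random.gauss(0.0, 1.0) for _ in range(10_000)]
    estimates.append(sum(s > 1.0 for s in samples) / len(samples))

print(f"exact       : {exact:.4f}")
print(f"Monte Carlo : {[round(e, 4) for e in estimates]}")  # varies run to run
```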

    Error from approximate probabilistic inference is tolerable in many AI applications. But it is undesirable to have inference errors corrupting results in socially impactful applications of AI, such as automated decision-making, and especially in fairness analysis.

    Jean-Baptiste Tristan, associate professor at Boston College and former research scientist at Oracle Labs, who was not involved in the new research, says, “I’ve worked on fairness analysis in academia and in real-world, large-scale industry settings. SPPL offers improved flexibility and trustworthiness over other PPLs on this challenging and important class of problems due to the expressiveness of the language, its precise and simple semantics, and the speed and soundness of the exact symbolic inference engine.”

    SPPL avoids errors by restricting to a carefully designed class of models that still includes a broad class of AI algorithms, including the decision tree classifiers that are widely used for algorithmic decision-making. SPPL works by compiling probabilistic programs into a specialized data structure called a “sum-product expression.” SPPL further builds on the emerging theme of using probabilistic circuits as a representation that enables efficient probabilistic inference. This approach extends prior work on sum-product networks to models and queries expressed via a probabilistic programming language. However, Saad notes that this approach comes with limitations: “SPPL is substantially faster for analyzing the fairness of a decision tree, for example, but it can’t analyze models like neural networks. Other systems can analyze both neural networks and decision trees, but they tend to be slower and give inexact answers.”

    “SPPL shows that exact probabilistic inference is practical, not just theoretically possible, for a broad class of probabilistic programs,” says Vikash Mansinghka, an MIT principal research scientist and senior author on the paper. “In my lab, we’ve seen symbolic inference driving speed and accuracy improvements in other inference tasks that we previously approached via approximate Monte Carlo and deep learning algorithms. We’ve also been applying SPPL to probabilistic programs learned from real-world databases, to quantify the probability of rare events, generate synthetic proxy data given constraints, and automatically screen data for probable anomalies.”

    The new SPPL probabilistic programming language was presented in June at the ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI), in a paper that Saad co-authored with MIT EECS Professor Martin Rinard and Mansinghka. SPPL is implemented in Python and is available open source.

  • A comprehensive study of technological change

    The societal impacts of technological change can be seen in many domains, from messenger RNA vaccines and automation to drones and climate change. The pace of that technological change can affect its impact, and how quickly a technology improves in performance can be an indicator of its future importance. For decision-makers like investors, entrepreneurs, and policymakers, predicting which technologies are fast improving (and which are overhyped) can mean the difference between success and failure.

    New research from MIT aims to assist in the prediction of technology performance improvement using U.S. patents as a dataset. The study describes 97 percent of the U.S. patent system as a set of 1,757 discrete technology domains, and quantitatively assesses each domain for its improvement potential.

    “The rate of improvement can only be empirically estimated when substantial performance measurements are made over long time periods,” says Anuraag Singh SM ’20, lead author of the paper. “In some large technological fields, including software and clinical medicine, such measures have rarely, if ever, been made.”

    A previous MIT study provided empirical measures for 30 technological domains, but the patent sets identified for those technologies cover less than 15 percent of the patents in the U.S. patent system. The major purpose of this new study is to provide predictions of the performance improvement rates for the thousands of domains not yet covered by empirical measurement. To accomplish this, the researchers developed a method using a new probability-based algorithm, machine learning, natural language processing, and patent network analytics.

    Overlap and centrality

    A technology domain, as the researchers define it, consists of sets of artifacts fulfilling a specific function using a specific branch of scientific knowledge. To find the patents that best represent a domain, the team built on previous research conducted by co-author Chris Magee, a professor of the practice of engineering systems within the Institute for Data, Systems, and Society (IDSS). Magee and his colleagues found that by looking for patent overlap between the U.S. and international patent-classification systems, they could quickly identify patents that best represent a technology. The researchers ultimately created a correspondence of all patents within the U.S. patent system to a set of 1,757 technology domains.

    To estimate performance improvement, Singh employed a method refined by co-authors Magee and Giorgio Triulzi, a researcher with the Sociotechnical Systems Research Center (SSRC) within IDSS and an assistant professor at Universidad de los Andes in Colombia. Their method is based on the average “centrality” of patents in the patent citation network. Centrality refers to multiple criteria for determining the ranking or importance of nodes within a network.
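
    As a rough sketch of the general idea (this is not the authors’ specific centrality measure), a standard centrality score can be computed on a toy citation network with the networkx library:

```python
# Toy patent citation network, for illustration only. Edges point from a
# citing patent to the patent it cites, so a patent that is cited by other
# well-cited patents earns a higher score.

import networkx as nx

citations = [
    ("P4", "P1"), ("P5", "P1"), ("P6", "P1"),   # P1 is widely cited
    ("P5", "P2"), ("P6", "P3"), ("P7", "P4"),
]
G = nx.DiGraph(citations)

scores = nx.pagerank(G, alpha=0.85)  # one standard notion of network centrality
for patent, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(patent, round(score, 3))
```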

    “Our method provides predictions of performance improvement rates for nearly all definable technologies for the first time,” says Singh.

    Those rates vary — from a low of 2 percent per year for the “Mechanical skin treatment — Hair removal and wrinkles” domain to a high of 216 percent per year for the “Dynamic information exchange and support systems integrating multiple channels” domain. The researchers found that most technologies improve slowly; more than 80 percent of technologies improve at less than 25 percent per year. Notably, the number of patents in a technological area was not a strong indicator of a higher improvement rate.

    “Fast-improving domains are concentrated in a few technological areas,” says Magee. “The domains that show improvement rates greater than the predicted rate for integrated chips — 42 percent, from Moore’s law — are predominantly based upon software and algorithms.”
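
    As a back-of-the-envelope check on what such rates mean (assuming steady compound improvement, which is a simplification), a few lines of Python convert an annual improvement rate into a doubling time; 42 percent per year corresponds to doubling roughly every two years:

```python
# Assumes steady compound improvement: performance(t) = performance(0) * (1 + r) ** t.

import math

def doubling_time_years(annual_rate: float) -> float:
    """Years needed to double performance at a constant annual improvement rate."""
    return math.log(2) / math.log(1 + annual_rate)

print(round(doubling_time_years(0.42), 2))  # ~1.98 years, the Moore's-law benchmark above
print(round(doubling_time_years(0.25), 2))  # ~3.11 years
print(round(doubling_time_years(0.02), 2))  # ~35.0 years, the slowest domain mentioned
```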

    TechNext Inc.

    The researchers built an online interactive system where domains corresponding to technology-related keywords can be found along with their improvement rates. Users can input a keyword describing a technology and the system returns a prediction of improvement for the technological domain, an automated measure of the quality of the match between the keyword and the domain, and patent sets so that the reader can judge the semantic quality of the match.

    Moving forward, the researchers have founded a new MIT spinoff called TechNext Inc. to further refine this technology and use it to help leaders make better decisions, from budgets to investment priorities to technology policy. Like many inventors, Magee and his colleagues want to protect their intellectual property rights. To that end, they have applied for a patent for their novel system and its unique methodology.

    “Technologies that improve faster win the market,” says Singh. “Our search system enables technology managers, investors, policymakers, and entrepreneurs to quickly look up predictions of improvement rates for specific technologies.”

    Adds Magee: “Our goal is to bring greater accuracy, precision, and repeatability to the as-yet fuzzy art of technology forecasting.”