More stories

  • Exact symbolic artificial intelligence for faster, better assessment of AI fairness

    The justice system, banks, and private companies use algorithms to make decisions that have profound impacts on people’s lives. Unfortunately, those algorithms are sometimes biased, disproportionately affecting people of color and lower-income individuals when they apply for loans or jobs, or even when courts decide how much bail to set while a person awaits trial.

    MIT researchers have developed a new artificial intelligence programming language that can assess the fairness of algorithms more exactly, and more quickly, than available alternatives.

    Their Sum-Product Probabilistic Language (SPPL) is a probabilistic programming system. Probabilistic programming is an emerging field at the intersection of programming languages and artificial intelligence that aims to make AI systems much easier to develop, with early successes in computer vision, common-sense data cleaning, and automated data modeling. Probabilistic programming languages make it much easier for programmers to define probabilistic models and carry out probabilistic inference — that is, work backward to infer probable explanations for observed data.
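
    To make the idea concrete, here is a minimal sketch in plain Python (not SPPL's actual API) of what a probabilistic program does: it defines a small generative model, here over an applicant's age group and loan approval, and then "works backward" from an observed approval to the age groups that probably explain it. All probabilities are invented for illustration.

    ```python
    # Toy generative model over two discrete variables, written as an explicit
    # joint distribution. Illustrative only; this is not SPPL's API.

    # Made-up prior over age group and made-up approval probability per group.
    prior_age = {"under_30": 0.40, "30_to_40": 0.35, "over_40": 0.25}
    p_approve_given_age = {"under_30": 0.30, "30_to_40": 0.55, "over_40": 0.50}

    def joint(age, approve):
        """Forward direction: joint probability of (age group, approval)."""
        p_yes = prior_age[age] * p_approve_given_age[age]
        return p_yes if approve else prior_age[age] - p_yes

    # Inference, i.e. "working backward": P(age group | approval observed),
    # computed exactly by enumerating the small discrete model.
    evidence = sum(joint(a, True) for a in prior_age)
    posterior = {a: joint(a, True) / evidence for a in prior_age}
    print(posterior)
    ```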

    “There are previous systems that can solve various fairness questions. Our system is not the first; but because our system is specialized and optimized for a certain class of models, it can deliver solutions thousands of times faster,” says Feras Saad, a PhD student in electrical engineering and computer science (EECS) and first author on a recent paper describing the work. Saad adds that the speedups are not insignificant: The system can be up to 3,000 times faster than previous approaches.

    SPPL gives fast, exact solutions to probabilistic inference questions such as “How likely is the model to recommend a loan to someone over age 40?” or “Generate 1,000 synthetic loan applicants, all under age 30, whose loans will be approved.” These inference results are based on SPPL programs that encode probabilistic models of what kinds of applicants are likely, a priori, and also how to classify them. Fairness questions that SPPL can answer include “Is there a difference between the probability of recommending a loan to an immigrant and nonimmigrant applicant with the same socioeconomic status?” or “What’s the probability of a hire, given that the candidate is qualified for the job and from an underrepresented group?”
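
    The same kind of query can probe fairness. The sketch below, again in plain Python rather than SPPL, audits a toy decision-tree classifier by computing, exactly by enumeration, the gap in approval probability between immigrant and nonimmigrant applicants who share the same (toy) income bracket. The model, branches, and probabilities are all invented placeholders.

    ```python
    from itertools import product

    # Made-up applicant model: income bracket and immigration status,
    # treated as independent here purely for illustration.
    p_income = {"low": 0.5, "mid": 0.3, "high": 0.2}
    p_immigrant = {True: 0.2, False: 0.8}

    def classifier(income, immigrant):
        """A toy decision tree standing in for the model under audit."""
        if income == "high":
            return True
        if income == "mid":
            return not immigrant   # the unfair branch the audit should expose
        return False

    def p_approve(immigrant, income=None):
        """Exact P(approve | immigration status, optional income) by enumeration."""
        num = den = 0.0
        for inc, imm in product(p_income, p_immigrant):
            if imm != immigrant or (income is not None and inc != income):
                continue
            w = p_income[inc] * p_immigrant[imm]
            den += w
            num += w * classifier(inc, imm)
        return num / den

    # Same (toy) socioeconomic status, different immigration status:
    gap = p_approve(False, "mid") - p_approve(True, "mid")
    print(f"approval-probability gap: {gap:.2f}")
    ```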

    SPPL differs from most probabilistic programming languages in that it only allows users to write probabilistic programs for which it can automatically deliver exact inference results. SPPL also makes it possible for users to check how fast inference will be, and therefore to avoid writing slow programs. In contrast, other probabilistic programming languages such as Gen and Pyro allow users to write down probabilistic programs where the only known ways to do inference are approximate — that is, the results include errors whose nature and magnitude can be hard to characterize.
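
    The difference shows up even on a one-line query. In the hypothetical comparison below (a generic illustration, not tied to any particular PPL), a tail probability of a simple model is computed exactly and then estimated by Monte Carlo sampling; the sampled answer carries a residual error that shrinks only slowly as more samples are drawn, which is the kind of error an exact engine avoids by construction.

    ```python
    import random
    from statistics import NormalDist

    random.seed(0)
    model = NormalDist(mu=35, sigma=10)   # a toy one-variable model

    # Exact answer to P(value > 40), straight from the distribution.
    exact = 1 - model.cdf(40)

    # Approximate answer by Monte Carlo sampling from the same model.
    n = 10_000
    approx = sum(random.gauss(35, 10) > 40 for _ in range(n)) / n

    print(f"exact  = {exact:.4f}")
    print(f"approx = {approx:.4f}  (Monte Carlo, n = {n:,})")
    ```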

    Error from approximate probabilistic inference is tolerable in many AI applications. But it is undesirable to have inference errors corrupting results in socially impactful applications of AI, such as automated decision-making, and especially in fairness analysis.

    Jean-Baptiste Tristan, associate professor at Boston College and former research scientist at Oracle Labs, who was not involved in the new research, says, “I’ve worked on fairness analysis in academia and in real-world, large-scale industry settings. SPPL offers improved flexibility and trustworthiness over other PPLs on this challenging and important class of problems due to the expressiveness of the language, its precise and simple semantics, and the speed and soundness of the exact symbolic inference engine.”

    SPPL avoids errors by restricting to a carefully designed class of models that still includes a broad class of AI algorithms, including the decision tree classifiers that are widely used for algorithmic decision-making. SPPL works by compiling probabilistic programs into a specialized data structure called a “sum-product expression.” SPPL further builds on the emerging theme of using probabilistic circuits as a representation that enables efficient probabilistic inference. This approach extends prior work on sum-product networks to models and queries expressed via a probabilistic programming language. However, Saad notes that this approach comes with limitations: “SPPL is substantially faster for analyzing the fairness of a decision tree, for example, but it can’t analyze models like neural networks. Other systems can analyze both neural networks and decision trees, but they tend to be slower and give inexact answers.”
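
    As a rough intuition for why this representation helps, consider the toy sum-product expression below: a weighted sum (mixture) over products of independent leaf distributions. A query such as P(age > 40) decomposes along the same structure, so it can be answered with a handful of sums and products and no sampling. This is a simplified illustration of the general idea, not SPPL's internal data structure.

    ```python
    from statistics import NormalDist

    # A toy sum-product expression: a mixture (sum node) over two components,
    # each a product of independent leaf distributions over age and income.
    # All parameters are invented.
    components = [
        # (weight, age leaf, income leaf)
        (0.7, NormalDist(30, 5), NormalDist(40_000, 8_000)),
        (0.3, NormalDist(50, 8), NormalDist(70_000, 15_000)),
    ]

    def prob_age_over(threshold):
        """Exact P(age > threshold), computed by pushing the query to the leaves.

        The income leaf integrates to 1, so each product collapses to its age
        term; the mixture weights then recombine the per-component answers.
        """
        return sum(w * (1 - age.cdf(threshold)) for w, age, _income in components)

    print(f"P(age > 40) = {prob_age_over(40):.4f}")
    ```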

    “SPPL shows that exact probabilistic inference is practical, not just theoretically possible, for a broad class of probabilistic programs,” says Vikash Mansinghka, an MIT principal research scientist and senior author on the paper. “In my lab, we’ve seen symbolic inference driving speed and accuracy improvements in other inference tasks that we previously approached via approximate Monte Carlo and deep learning algorithms. We’ve also been applying SPPL to probabilistic programs learned from real-world databases, to quantify the probability of rare events, generate synthetic proxy data given constraints, and automatically screen data for probable anomalies.”

    The new SPPL probabilistic programming language was presented in June at the ACM SIGPLAN International Conference on Programming Language Design and Implementation (PLDI), in a paper that Saad co-authored with MIT EECS Professor Martin Rinard and Mansinghka. SPPL is implemented in Python and is available open source.

  • A comprehensive study of technological change

    The societal impacts of technological change can be seen in many domains, from messenger RNA vaccines and automation to drones and climate change. The pace of that technological change can affect its impact, and how quickly a technology improves in performance can be an indicator of its future importance. For decision-makers like investors, entrepreneurs, and policymakers, predicting which technologies are fast improving (and which are overhyped) can mean the difference between success and failure.

    New research from MIT aims to assist in the prediction of technology performance improvement using U.S. patents as a dataset. The study describes 97 percent of the U.S. patent system as a set of 1,757 discrete technology domains, and quantitatively assesses each domain for its improvement potential.

    “The rate of improvement can only be empirically estimated when substantial performance measurements are made over long time periods,” says Anuraag Singh SM ’20, lead author of the paper. “In some large technological fields, including software and clinical medicine, such measures have rarely, if ever, been made.”

    A previous MIT study provided empirical measures for 30 technological domains, but the patent sets identified for those technologies cover less than 15 percent of the patents in the U.S. patent system. The major purpose of this new study is to provide predictions of performance improvement rates for the thousands of domains that lack such empirical measurements. To accomplish this, the researchers developed a method using a new probability-based algorithm, machine learning, natural language processing, and patent network analytics.

    Overlap and centrality

    A technology domain, as the researchers define it, consists of sets of artifacts fulfilling a specific function using a specific branch of scientific knowledge. To find the patents that best represent a domain, the team built on previous research conducted by co-author Chris Magee, a professor of the practice of engineering systems within the Institute for Data, Systems, and Society (IDSS). Magee and his colleagues found that by looking for patent overlap between the U.S. and international patent-classification systems, they could quickly identify patents that best represent a technology. The researchers ultimately mapped every patent in the U.S. patent system to one of 1,757 technology domains.
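
    A minimal sketch of the overlap idea, using invented miniature data: if each patent carries both a U.S. classification code and an international classification code, the patents taken to represent a domain are those that fall in the intersection of the corresponding classes in the two systems. The codes and patent numbers below are placeholders, not real classification data.

    ```python
    # Invented toy data: patent number -> (U.S. class, international class).
    patents = {
        "US0001": ("705/7.11", "G06Q"),
        "US0002": ("705/7.11", "H04L"),
        "US0003": ("706/12",   "G06Q"),
        "US0004": ("705/7.11", "G06Q"),
    }

    def domain_patents(us_class, intl_class):
        """Patents classified under BOTH systems' classes: the overlap used to
        pick representative patents for a technology domain."""
        return sorted(
            pid for pid, (us, intl) in patents.items()
            if us == us_class and intl == intl_class
        )

    print(domain_patents("705/7.11", "G06Q"))   # -> ['US0001', 'US0004']
    ```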

    To estimate performance improvement, Singh employed a method refined by co-authors Magee and Giorgio Triulzi, a researcher with the Sociotechnical Systems Research Center (SSRC) within IDSS and an assistant professor at Universidad de los Andes in Colombia. Their method is based on the average “centrality” of patents in the patent citation network. Centrality refers to multiple criteria for determining the ranking or importance of nodes within a network.
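
    To illustrate the centrality idea (not the authors' exact measure, which is their own refinement described in the paper), the sketch below builds a tiny patent citation network with the networkx library and averages a standard centrality score over the patents assigned to one hypothetical domain. The edges and domain membership are invented.

    ```python
    import networkx as nx

    # Invented miniature citation network: an edge A -> B means patent A cites B.
    G = nx.DiGraph([
        ("P1", "P2"), ("P1", "P3"), ("P4", "P2"),
        ("P5", "P2"), ("P5", "P3"), ("P6", "P4"),
    ])

    # Invented domain membership.
    domain = {"P2", "P3"}

    # A standard off-the-shelf centrality score; the paper uses its own
    # refinement of citation-network centrality, not PageRank specifically.
    centrality = nx.pagerank(G)

    avg = sum(centrality[p] for p in domain) / len(domain)
    print(f"average centrality of the domain's patents: {avg:.3f}")
    ```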

    “Our method provides predictions of performance improvement rates for nearly all definable technologies for the first time,” says Singh.

    Those rates vary — from a low of 2 percent per year for the “Mechanical skin treatment — Hair removal and wrinkles” domain to a high of 216 percent per year for the “Dynamic information exchange and support systems integrating multiple channels” domain. The researchers found that most technologies improve slowly; more than 80 percent of technologies improve at less than 25 percent per year. Notably, the number of patents in a technological area was not a strong indicator of a higher improvement rate.

    “Fast-improving domains are concentrated in a few technological areas,” says Magee. “The domains that show improvement rates greater than the predicted rate for integrated chips — 42 percent, from Moore’s law — are predominantly based upon software and algorithms.”
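
    For context on that 42 percent benchmark, here is a back-of-the-envelope check (not the paper's estimation method): if performance doubles roughly every two years under Moore's law, the equivalent annual improvement rate is 2^(1/2) - 1, or about 41 percent, while the study's fastest domain, at 216 percent per year, implies a doubling time well under a year.

    ```python
    import math

    # Assumed Moore's-law doubling time of about two years.
    doubling_time_years = 2.0
    annual_rate = 2 ** (1 / doubling_time_years) - 1
    print(f"{annual_rate:.1%} per year")                          # -> 41.4% per year

    # The study's fastest domain improves at 216% per year; the implied
    # doubling time is well under a year.
    fast_rate = 2.16
    print(f"{1 / math.log2(1 + fast_rate):.2f} years to double")  # -> ~0.60
    ```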

    TechNext Inc.

    The researchers built an online interactive system where domains corresponding to technology-related keywords can be found along with their improvement rates. Users can input a keyword describing a technology and the system returns a prediction of improvement for the technological domain, an automated measure of the quality of the match between the keyword and the domain, and patent sets so that the reader can judge the semantic quality of the match.

    Moving forward, the researchers have founded a new MIT spinoff called TechNext Inc. to further refine this technology and use it to help leaders make better decisions, from budgets to investment priorities to technology policy. Like all inventors, Magee and his colleagues want to protect their intellectual property rights. To that end, they have applied for a patent for their novel system and its unique methodology.

    “Technologies that improve faster win the market,” says Singh. “Our search system enables technology managers, investors, policymakers, and entrepreneurs to quickly look up predictions of improvement rates for specific technologies.”

    Adds Magee: “Our goal is to bring greater accuracy, precision, and repeatability to the as-yet fuzzy art of technology forecasting.”