More stories

  • A new way to integrate data with physical objects

    To get a sense of what StructCode is all about, says Mustafa Doğa Doğan, think of Superman. Not the “faster than a speeding bullet” and “more powerful than a locomotive” version, but a Superman, or Superwoman, who sees the world differently from ordinary mortals — someone who can look around a room and glean all kinds of information about ordinary objects that is not apparent to people with less penetrating faculties.

    That, in a nutshell, is “the high-level idea behind StructCode,” explains Doğan, a PhD student in electrical engineering and computer science at MIT and an affiliate of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “The goal is to change the way we interact with objects” — to make those interactions more meaningful and more meaning-laden — “by embedding information into objects in ways that can be readily accessed.”

    StructCode grew out of an effort called InfraredTags, which Doğan and other colleagues introduced in 2022. That work, like the current project, was carried out in the laboratory of MIT Associate Professor Stefanie Mueller — Doğan’s advisor, who has taken part in both projects. In last year’s approach, “invisible” tags — ones that can be seen only with cameras capable of detecting infrared light — were used to reveal information about physical objects. The drawback was that many cameras cannot perceive infrared light. Moreover, the method for fabricating these objects and affixing the tags to their surfaces relied on 3D printers, which tend to be very slow and can often produce only small objects.

    StructCode, at least in its original version, relies instead on laser-cut objects, which can be manufactured within minutes rather than the hours a 3D printer might take. Information can be extracted from these objects, moreover, with the RGB cameras commonly found in smartphones; the ability to operate in the infrared range of the spectrum is not required.

    In their initial demonstrations of the idea, the MIT-led team constructed their objects out of wood, making pieces such as furniture, picture frames, flowerpots, and toys that are well suited to laser-cut fabrication. A key question had to be resolved first: How can information be stored in a way that is unobtrusive and durable, compared to externally attached barcodes and QR codes, and that will not undermine an object’s structural integrity?

    The solution that the team has come up with, for now, is to rely on joints, which are ubiquitous in wooden objects made out of more than one component. Perhaps the most familiar is the finger joint, which has a kind of zigzag pattern whereby two wooden pieces adjoin at right angles such that every protruding “finger” along the joint of the first piece fits into a corresponding “gap” in the joint of the second piece and, similarly, every gap in the joint of the first piece is filled with a finger from the second.

    “Joints have these repeating features, which are like repeating bits,” Doğan says. To create a code, the researchers slightly vary the length of the gaps or fingers: a standard length is assigned a 1, a slightly shorter length a 0, and a slightly longer length a 2. The encoding scheme is based on the sequence of these digits observed along a joint; every string of four digits allows 81 (3⁴) possible variations.
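
    As a minimal sketch of how such a scheme could work in software (the thresholds, lengths, and function names below are hypothetical illustrations, not the team’s actual implementation), each measured gap or finger length is classified against a nominal length to yield one base-3 digit, and the digit string is then read as a number:

    ```python
    # Illustrative StructCode-style decoding: measured feature lengths are
    # classified relative to an assumed nominal length to yield base-3 digits.
    NOMINAL_MM = 10.0   # hypothetical standard finger/gap length
    DELTA_MM = 0.5      # hypothetical offset distinguishing 0 and 2 from 1

    def lengths_to_digits(lengths_mm):
        """Map measured lengths to digits: shorter -> 0, standard -> 1, longer -> 2."""
        digits = []
        for length in lengths_mm:
            if length < NOMINAL_MM - DELTA_MM / 2:
                digits.append(0)
            elif length > NOMINAL_MM + DELTA_MM / 2:
                digits.append(2)
            else:
                digits.append(1)
        return digits

    def digits_to_value(digits):
        """Interpret the digit sequence as a base-3 number."""
        value = 0
        for d in digits:
            value = value * 3 + d
        return value

    # A joint with four variable features encodes one of 3**4 == 81 values.
    measured = [10.0, 9.5, 10.5, 10.0]      # hypothetical camera measurements (mm)
    digits = lengths_to_digits(measured)    # -> [1, 0, 2, 1]
    print(digits, digits_to_value(digits))  # [1, 0, 2, 1] 34
    ```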

    The team also demonstrated ways of encoding messages in “living hinges” — a kind of joint that is made by taking a flat, rigid piece of material and making it bendable by cutting a series of parallel, vertical lines. As with the finger joints, the distance between these lines can be varied: 1 being the standard length, 0 being a slightly shorter length, and 2 being slightly longer. And in this way, a code can be assembled from an object that contains a living hinge.

    The idea is described in a paper, “StructCode: Leveraging Fabrication Artifacts to Store Data in Laser-Cut Objects,” that was presented this month at the 2023 ACM Symposium on Computational Fabrication in New York City. Doğan, the paper’s first author, is joined by Mueller and four coauthors — recent MIT alumna Grace Tang ’23, MNG ’23; MIT undergraduate Richard Qi; University of California at Berkeley graduate student Vivian Hsinyueh Chan; and Cornell University Assistant Professor Thijs Roumen.

    “In the realm of materials and design, there is often an inclination to associate novelty and innovation with entirely new materials or manufacturing techniques,” notes Elvin Karana, a professor of materials innovation and design at the Delft University of Technology. One of the things that impresses Karana most about StructCode is that it provides a novel means of storing data by “applying a commonly used technique like laser cutting and a material as ubiquitous as wood.”

    The idea for StructCode, adds University of Colorado computer scientist Ellen Yi-Luen Do, is “simple, elegant, and totally makes sense. It’s like having the Rosetta Stone to help decipher Egyptian hieroglyphs.”

    Patrick Baudisch, a computer scientist at the Hasso Plattner Institute in Germany, views StructCode as “a great step forward for personal fabrication. It takes a key piece of functionality that’s only offered today for mass-produced goods and brings it to custom objects.”

    Here, in brief, is how it works: First, a laser cutter — guided by a model created via StructCode — fabricates an object into which encoded information has been embedded. After downloading a StructCode app, a user can decode the hidden message by pointing a cellphone camera at the object; aided by StructCode software, the camera detects the subtle variations in length in the object’s outward-facing joints or living hinges.

    The process is even easier if the user is equipped with augmented reality glasses, Doğan says. “In that case, you don’t need to point a camera. The information comes up automatically.” And that can give people more of the “superpowers” that the designers of StructCode hope to confer.

    “The object doesn’t need to contain a lot of information,” Doğan adds. “Just enough — in the form of, say, URLs — to direct people to places they can find out what they need to know.”

    Users might be sent to a website where they can obtain information about the object — how to care for it, and perhaps eventually how to disassemble it and recycle (or safely dispose of) its contents. A flowerpot that was made with living hinges might inform a user, based on records that are maintained online, as to when the plant inside the pot was last watered and when it needs to be watered again. Children examining a toy crocodile could, through StructCode, learn scientific details about various parts of the animal’s anatomy. A picture frame made with finger joints modified by StructCode could help people find out about the painting inside the frame and about the person (or persons) who created the artwork — perhaps linking to a video of an artist talking about this work directly.

    “This technique could pave the way for new applications, such as interactive museum exhibits,” says Raf Ramakers, a computer scientist at Hasselt University in Belgium. “It holds the potential for broadening the scope of how we perceive and interact with everyday objects” — which is precisely the goal that motivates the work of Doğan and his colleagues.

    But StructCode is not the end of the line, as far as Doğan and his collaborators are concerned. The same general approach could be adapted to other manufacturing techniques besides laser cutting, and information storage doesn’t have to be confined to the joints of wooden objects. Data could be represented, for instance, in the texture of leather, within the pattern of woven or knitted pieces, or concealed by other means within an image. Doğan is excited by the breadth of available options and by the fact that their “explorations into this new realm of possibilities, designed to make objects and our world more interactive, are just beginning.”

  • System tracks movement of food through global humanitarian supply chain

    Although more than enough food is produced to feed everyone in the world, as many as 828 million people face hunger today. Poverty, social inequity, climate change, natural disasters, and political conflicts all contribute to inhibiting access to food. For decades, the U.S. Agency for International Development (USAID) Bureau for Humanitarian Assistance (BHA) has been a leader in global food assistance, supplying millions of metric tons of food to recipients worldwide. Alleviating hunger — and the conflict and instability hunger causes — is critical to U.S. national security.

    But BHA is only one player within a large, complex supply chain in which food gets handed off between more than 100 partner organizations before reaching its final destination. Traditionally, the movement of food through the supply chain has been a black-box operation, with stakeholders largely out of the loop about what happens to the food once it leaves their custody. This lack of direct visibility into operations is due to siloed data repositories, insufficient data sharing among stakeholders, and different data formats that operators must manually sort through and standardize. As a result, accurate, real-time information — such as where food shipments are at any given time, which shipments are affected by delays or food recalls, and when shipments have arrived at their final destination — is lacking. A centralized system capable of tracing food along its entire journey, from manufacture through delivery, would enable a more effective humanitarian response to food-aid needs.

    In 2020, a team from MIT Lincoln Laboratory began engaging with BHA to create an intelligent dashboard for their supply-chain operations. This dashboard brings together the expansive food-aid datasets from BHA’s existing systems into a single platform, with tools for visualizing and analyzing the data. When the team started developing the dashboard, they quickly realized the need for considerably more data than BHA had access to.

    “That’s where traceability comes in, with each handoff partner contributing key pieces of information as food moves through the supply chain,” explains Megan Richardson, a researcher in the laboratory’s Humanitarian Assistance and Disaster Relief Systems Group.

    Richardson and the rest of the team have been working with BHA and their partners to scope, build, and implement such an end-to-end traceability system. This system consists of serialized, unique identifiers (IDs) — akin to fingerprints — that are assigned to individual food items at the time they are produced. These individual IDs remain linked to items as they are aggregated along the supply chain, first domestically and then internationally. For example, individually tagged cans of vegetable oil get packaged into cartons; cartons are placed onto pallets and transported via railway and truck to warehouses; pallets are loaded onto shipping containers at U.S. ports; and pallets are unloaded and cartons are unpackaged overseas.
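
    A minimal sketch of the aggregation logic this implies (the IDs and data structures below are hypothetical, not BHA’s actual data model): each packing step records which child IDs a parent unit contains, so any higher-level scan can later be expanded back down to individual items.

    ```python
    # Hierarchical aggregation of serialized IDs: items into cartons,
    # cartons onto pallets. Scanning a parent ID implicitly covers
    # everything packed inside it.
    from collections import defaultdict

    contains = defaultdict(list)  # parent ID -> child IDs

    def pack(parent_id, child_ids):
        contains[parent_id].extend(child_ids)

    def expand(unit_id):
        """Recursively resolve a unit ID to the individual item IDs inside it."""
        children = contains.get(unit_id)
        if not children:
            return [unit_id]  # a leaf: an individual item
        items = []
        for child in children:
            items.extend(expand(child))
        return items

    # Tagged cans of vegetable oil packed into cartons, cartons onto a pallet.
    pack("CARTON-001", ["CAN-0001", "CAN-0002", "CAN-0003"])
    pack("CARTON-002", ["CAN-0004", "CAN-0005"])
    pack("PALLET-A", ["CARTON-001", "CARTON-002"])

    print(expand("PALLET-A"))  # all five can IDs on the pallet
    ```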

    With a trace

    Today, visibility at the single-item level doesn’t exist. Most suppliers mark pallets with a lot number (a lot is a batch of items produced in the same run), but this is for internal purposes (i.e., to track issues stemming back to their production supply, like over-enriched ingredients or machinery malfunction), not data sharing. So, organizations know which supplier lot a pallet and carton are associated with, but they can’t track the unique history of an individual carton or item within that pallet. As the lots move further downstream toward their final destination, they are often mixed with lots from other productions, and possibly other commodity types altogether, because of space constraints. On the international side, such mixing and the lack of granularity make it difficult to quickly pull commodities out of the supply chain if food safety concerns arise. Current response times can span several months.

    “Commodities are grouped differently at different stages of the supply chain, so it is logical to track them in those groupings where needed,” Richardson says. “Our item-level granularity serves as a form of Rosetta Stone to enable stakeholders to efficiently communicate throughout these stages. We’re trying to enable a way to track not only the movement of commodities, including through their lot information, but also any problems arising independent of lot, like exposure to high humidity levels in a warehouse. Right now, we have no way to associate commodities with histories that may have resulted in an issue.”

    “You can now track your checked luggage across the world and the fish on your dinner plate,” adds Brice MacLaren, also a researcher in the laboratory’s Humanitarian Assistance and Disaster Relief Systems Group. “So, this technology isn’t new, but it’s new to BHA as they evolve their methodology for commodity tracing. The traceability system needs to be versatile, working across a wide variety of operators who take custody of the commodity along the supply chain and fitting into their existing best practices.”

    As food products make their way through the supply chain, operators at each receiving point would be able to scan these IDs via a Lincoln Laboratory-developed mobile application (app) to indicate a product’s current location and transaction status — for example, that it is en route on a particular shipping container or stored in a certain warehouse. This information would get uploaded to a secure traceability server. By scanning a product, operators would also see its history up until that point.   
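
    A sketch of what recording such a scan might look like (the field names, statuses, and server interface are assumptions for illustration, not the laboratory’s actual design):

    ```python
    # Each custody transfer appends a scan event to a central log; scanning
    # an ID also returns its history up to that point.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class ScanEvent:
        unit_id: str
        location: str
        status: str          # e.g., "loaded", "in_transit", "received"
        timestamp: datetime

    class TraceabilityServer:
        def __init__(self):
            self.events = []

        def record_scan(self, unit_id, location, status):
            self.events.append(ScanEvent(unit_id, location, status,
                                         datetime.now(timezone.utc)))
            return self.history(unit_id)  # a scan also shows prior history

        def history(self, unit_id):
            return [e for e in self.events if e.unit_id == unit_id]

    server = TraceabilityServer()
    server.record_scan("CARTON-001", "Houston warehouse", "loaded")
    server.record_scan("CARTON-001", "Port of Djibouti", "received")
    for e in server.history("CARTON-001"):
        print(e.timestamp.isoformat(), e.location, e.status)
    ```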

    Hitting the mark

    At the laboratory, the team tested the feasibility of their traceability technology, exploring different ways to mark and scan items. In their testing, they considered barcodes and radio-frequency identification (RFID) tags, as well as handheld and fixed scanners. Their analysis revealed that 2D barcodes (specifically data matrices) and smartphone-based scanners were the most feasible options in terms of how the technology works and how it fits into existing operations and infrastructure.

    “We needed to come up with a solution that would be practical and sustainable in the field,” MacLaren says. “While scanners can automatically read any RFID tags in close proximity as someone is walking by, they can’t discriminate exactly where the tags are coming from. RFID is expensive, and it’s hard to read commodities in bulk. On the other hand, a phone can scan a barcode on a particular box and tell you that code goes with that box. The challenge then becomes figuring out how to present the codes for people to easily scan without significantly interrupting their usual processes for handling and moving commodities.” 
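
    For a concrete sense of the round trip, here is a small sketch that encodes and decodes a Data Matrix label using the open-source pylibdmtx bindings (an assumed toolchain; the article does not specify the team’s software, and the payload format is hypothetical):

    ```python
    # Generate a Data Matrix label for a serialized carton ID, then read it
    # back. Requires the native libdmtx library plus Pillow.
    from PIL import Image
    from pylibdmtx.pylibdmtx import encode, decode

    payload = b"CARTON-001|LOT-2023-07"
    encoded = encode(payload)
    label = Image.frombytes("RGB", (encoded.width, encoded.height), encoded.pixels)
    label.save("carton_label.png")

    # A smartphone-style decode: recover the ID from the image.
    for result in decode(Image.open("carton_label.png")):
        print(result.data.decode("ascii"))  # -> CARTON-001|LOT-2023-07
    ```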

    As the team learned from partner representatives in Kenya and Djibouti, offloading at the ports is a chaotic, fast operation. At manual warehouses, porters fling bags over their shoulders or stack cartons atop their heads any which way they can and run them to a drop point; at bagging terminals, commodities come down a conveyor belt and land this way or that way. With this variability comes several questions: How many barcodes do you need on an item? Where should they be placed? What size should they be? What will they cost? The laboratory team is considering these questions, keeping in mind that the answers will vary depending on the type of commodity; vegetable oil cartons will have different specifications than, say, 50-kilogram bags of wheat or peas.

    Leaving a mark

    Leveraging results from their testing and insights from international partners, the team has been running a traceability pilot evaluating how their proposed system meshes with real-world domestic and international operations. The current pilot features a domestic component in Houston, Texas, and an international component in Ethiopia, and focuses on tracking individual cartons of vegetable oil and identifying damaged cans. The Ethiopian team with Catholic Relief Services recently received a container filled with pallets of uniquely barcoded cartons of vegetable oil cans (in the next pilot, the cans will be barcoded, too). They are now scanning items and collecting data on product damage by using smartphones with the laboratory-developed mobile traceability app on which they were trained. 

    “The partners in Ethiopia are comparing a couple of lid types to determine whether some are more resilient than others,” Richardson says. “With the app — which is designed to scan commodities, collect transaction data, and keep history — the partners can take pictures of damaged cans and see if a trend with the lid type emerges.”

    Next, the team will run a series of pilots with the World Food Program (WFP), the world’s largest humanitarian organization. The first pilot will focus on data connectivity and interoperability, and the team will engage with suppliers to directly print barcodes on individual commodities instead of applying barcode labels to packaging, as they did in the initial feasibility testing. The WFP will provide input on which of their operations are best suited for testing the traceability system, considering factors like the network bandwidth of WFP staff and local partners, the commodity types being distributed, and the country context for scanning. The BHA will likely also prioritize locations for system testing.

    “Our goal is to provide an infrastructure to enable as close to real-time data exchange as possible between all parties, given intermittent power and connectivity in these environments,” MacLaren says.

    In subsequent pilots, the team will try to integrate their approach with existing systems that partners rely on for tracking procurements, inventory, and movement of commodities under their custody so that this information is automatically pushed to the traceability server. The team also hopes to add a capability for real-time alerting of statuses, like the departure and arrival of commodities at a port or the exposure of unclaimed commodities to the elements. Real-time alerts would enable stakeholders to more efficiently respond to food-safety events. Currently, partners are forced to take a conservative approach, pulling more commodities out of the supply chain than are actually suspect, to reduce risk of harm. Both BHA and WFP are interested in testing out a food-safety event during one of the pilots to see how well the traceability system enables rapid communication and response.

    To implement this technology at scale will require some standardization for marking different commodity types as well as give and take among the partners on best practices for handling commodities. It will also require an understanding of country regulations and partner interactions with subcontractors, government entities, and other stakeholders.

    “Within several years, I think it’s possible for BHA to use our system to mark and trace all their food procured in the United States and sent internationally,” MacLaren says.

    Once collected, the trove of traceability data could be harnessed for other purposes, among them analyzing historical trends, predicting future demand, and assessing the carbon footprint of commodity transport. In the future, a similar traceability system could scale for nonfood items, including medical supplies distributed to disaster victims, resources like generators and water trucks localized in emergency-response scenarios, and vaccines administered during pandemics. Several groups at the laboratory are also interested in such a system to track items such as tools deployed in space or equipment people carry through different operational environments.

    “When we first started this program, colleagues were asking why the laboratory was involved in simple tasks like making a dashboard, marking items with barcodes, and using hand scanners,” MacLaren says. “Our impact here isn’t about the technology; it’s about providing a strategy for coordinated food-aid response and successfully implementing that strategy. Most importantly, it’s about people getting fed.”

  • Day of AI curriculum meets the moment

    MIT Responsible AI for Social Empowerment and Education (RAISE) recently celebrated the second annual Day of AI with two flagship local events. The Edward M. Kennedy Institute for the U.S. Senate in Boston hosted a human rights and data policy-focused event that was streamed worldwide. Dearborn STEM Academy in Roxbury, Massachusetts, hosted a student workshop in collaboration with Amazon Future Engineer. With over 8,000 registrations across all 50 U.S. states and 108 countries in 2023, participation in Day of AI has more than doubled since its inaugural year.

    Day of AI is a free curriculum of lessons and hands-on activities, designed by researchers at MIT RAISE, that teaches kids of all ages and backgrounds the basics of artificial intelligence and its responsible use. This year, resources were available for educators to run at any time and in any increments they chose. The curriculum included five new modules to address timely topics like ChatGPT in School, Teachable Machines, AI and Social Media, Data Science and Me, and more. A collaboration with the International Society for Technology in Education also introduced modules for early elementary students. Educators across the world shared photos, videos, and stories of their students’ engagement, expressing excitement and even relief over the accessible lessons.

    Professor Cynthia Breazeal, director of RAISE, dean for digital learning at MIT, and head of the MIT Media Lab’s Personal Robots research group, said, “It’s been a year of extraordinary advancements in AI, and with that comes necessary conversations and concerns about who and what this technology is for. With our Day of AI events, we want to celebrate the teachers and students who are putting in the work to make sure that AI is for everyone.”

    Reflecting community values and protecting digital citizens


    On May 18, 2023, MIT RAISE hosted a global Day of AI celebration featuring a flagship local event focused on human rights and data policy at the Edward M. Kennedy Institute for the U.S. Senate. Students from Warren Prescott Middle School and New Mission High School heard from speakers from the City of Boston, Liberty Mutual, and MIT about the many benefits and challenges of artificial intelligence education. Video: MIT Open Learning

    MIT President Sally Kornbluth welcomed students from Warren Prescott Middle School and New Mission High School to the Day of AI program at the Edward M. Kennedy Institute. Kornbluth reflected on the exciting potential of AI, along with the ethical considerations society needs to be responsible for.

    “AI has the potential to do all kinds of fantastic things, including driving a car, helping us with the climate crisis, improving health care, and designing apps that we can’t even imagine yet. But what we have to make sure it doesn’t do is cause harm to individuals, to communities, to us — society as a whole,” she said.

    This theme resonated with each of the event speakers, whose jobs spanned the sectors of education, government, and business. Yo Deshpande, technologist for the public realm, and Michael Lawrence Evans, program director of the Boston Mayor’s Office of New Urban Mechanics, shared how Boston thinks about using AI to improve city life in ways that are “equitable, accessible, and delightful.” Deshpande said, “We have the opportunity to explore not only how AI works, but how using AI can line up with our values, the way we want to be in the world, and the way we want to be in our community.”

    Adam L’Italien, chief innovation officer at Liberty Mutual Insurance (one of Day of AI’s founding sponsors), compared our present moment with AI technologies to the early days of personal computers and internet connection. “Exposure to emerging technologies can accelerate progress in the world and in your own lives,” L’Italien said, while recognizing that the AI development process needs to be inclusive and mitigate biases.

    Human policies for artificial intelligence

    So how does society address these human rights concerns about AI? Marc Aidinoff ’21, former White House Office of Science and Technology Policy chief of staff, led a discussion on how government policy can influence the parameters of how technology is developed and used, like the Blueprint for an AI Bill of Rights. Aidinoff said, “The work of building the world you want to see is far harder than building the technical AI system … How do you work with other people and create a collective vision for what we want to do?” Warren Prescott Middle School students described how AI could be used to solve problems that humans couldn’t. But they also shared their concerns that AI could affect data privacy, learning deficits, social media addiction, job displacement, and propaganda.

    In a mock U.S. Senate trial activity designed by Daniella DiPaola, PhD student at the MIT Media Lab, the middle schoolers investigated what rights might be undermined by AI in schools, hospitals, law enforcement, and corporations. Meanwhile, New Mission High School students workshopped the ideas behind bill S.2314, the Social Media Addiction Reduction Technology (SMART) Act, in an activity designed by Raechel Walker, graduate research assistant in the Personal Robots Group, and Matt Taylor, research assistant at the Media Lab. They discussed what level of control could or should be introduced at the parental, educational, and governmental levels to reduce the risks of internet addiction.

    “Alexa, how do I program AI?”


    The 2023 Day of AI celebration featured a flagship local event at the Dearborn STEM Academy in Roxbury in collaboration with Amazon Future Engineer. Students participated in a hands-on activity using MIT App Inventor as part of Day of AI’s Alexa lesson. Video: MIT Open Learning

    At Dearborn STEM Academy, Amazon Future Engineer helped students work through the Intro to Voice AI curriculum module in real-time. Students used MIT App Inventor to code basic commands for Alexa. In an interview with WCVB, Principal Darlene Marcano said, “It’s important that we expose our students to as many different experiences as possible. The students that are participating are on track to be future computer scientists and engineers.”

    Breazeal told Dearborn students, “We want you to have an informed voice about how you want AI to be used in society. We want you to feel empowered that you can shape the world. You can make things with AI to help make a better world and a better community.”

    Rohit Prasad ’08, senior vice president and head scientist for Alexa at Amazon, and Victor Reinoso ’97, global director of philanthropic education initiatives at Amazon, also joined the event. “Amazon and MIT share a commitment to helping students discover a world of possibilities through STEM and AI education,” said Reinoso. “There’s a lot of current excitement around the technological revolution with generative AI and large language models, so we’re excited to help students explore careers of the future and navigate the pathways available to them.” To highlight their continued investment in the local community and the school program, Amazon donated a $25,000 Innovation and Early College Pathways Program Grant to the Boston Public School system.

    Day of AI down under

    Not only was the Day of AI program widely adopted across the globe, but Australian educators were also inspired to adapt it into their own regionally specific curriculum. An estimated 161,000 AI professionals will be needed in Australia by 2030, according to the National Artificial Intelligence Center in the Commonwealth Scientific and Industrial Research Organization (CSIRO), an Australian government agency and Day of AI Australia project partner. CSIRO worked with the University of New South Wales to develop supplementary educational resources on AI ethics and machine learning. Day of AI Australia reached 85,000 students at 400-plus secondary schools this year, sparking curiosity in the next generation of AI experts.

    The interest in AI is accelerating as fast as the technology is being developed. Day of AI offers a unique opportunity for K-12 students to shape our world’s digital future and their own.

    “I hope that some of you will decide to be part of this bigger effort to help us figure out the best possible answers to questions that are raised by AI,” Kornbluth told students at the Edward M. Kennedy Institute. “We’re counting on you, the next generation, to learn how AI works and help make sure it’s for everyone.”

  • 3 Questions: What a single car can say about traffic

    Vehicle traffic has long defied precise description. Once measured roughly through visual inspection and traffic cameras, traffic is now being quantified far more precisely by smartphone crowdsourcing tools. This popular method, however, presents a problem: Accurate measurements require a lot of data and users.

    Meshkat Botshekan, an MIT PhD student in civil and environmental engineering and research assistant at the MIT Concrete Sustainability Hub, has sought to expand on crowdsourcing methods by looking into the physics of traffic. During his time as a doctoral candidate, he has helped develop Carbin, a smartphone-based roadway crowdsourcing tool created by MIT CSHub and the University of Massachusetts Dartmouth, and used its data to offer more insight into the physics of traffic — from the formation of traffic jams to the inference of traffic phase and driving behavior. Here, he explains how recent findings can allow smartphones to infer traffic properties from the measurements of a single vehicle.  

    Q: Numerous navigation apps already measure traffic. Why do we need alternatives?

    A: Traffic characteristics have always been tough to measure. In the past, visual inspection and cameras were used to produce traffic metrics. So, there’s no denying that today’s navigation apps offer a superior alternative. Yet even these modern tools have gaps.

    Chief among them is their dependence on spatially distributed user counts: Essentially, these apps tally up their users on road segments to estimate the density of traffic. While this approach may seem adequate, it is vulnerable to manipulation, as demonstrated in some viral videos, and it requires immense quantities of data for reliable estimates. Processing these data is so time- and resource-intensive that, despite their availability, they can’t be used to quantify traffic effectively across a whole road network. As a result, this immense quantity of traffic data isn’t actually optimal for traffic management.

    Q: How could new technologies improve how we measure traffic?

    A: New alternatives have the potential to offer two improvements over existing methods: First, they can extrapolate far more about traffic with far fewer data. Second, they can cost a fraction of the price while offering a far simpler method of data collection. Just like Waze and Google Maps, they rely on crowdsourcing data from users. Yet, they are grounded in the incorporation of high-level statistical physics into data analysis.

    For instance, the Carbin app, which we are developing in collaboration with UMass Dartmouth, applies principles of statistical physics to existing traffic models to entirely forgo the need for user counts. Instead, it can infer traffic density and driver behavior using the input of a smartphone mounted in a single vehicle.

    The method at the heart of the app, which was published last fall in Physical Review E, treats vehicles like particles in a many-body system. Just as, by the ergodic theorem of statistical physics, the behavior of a closed many-body system can be understood by observing a single particle, traffic can be characterized through the fluctuations in the speed and position of a single vehicle traveling along a road. As a result, we can infer the behavior and density of traffic on a segment of a road.
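
    As a toy illustration of the idea (not the method of the Physical Review E paper), one can invert a simple fundamental diagram at the mean speed observed by a single vehicle. Here the classic Greenshields relation v = v_f(1 - k/k_j) stands in for the real model, and the free-flow speed and jam density are assumed known for the road:

    ```python
    # Toy single-vehicle density estimate via the Greenshields relation.
    # V_F (free-flow speed) and K_J (jam density) are assumed road constants.
    import statistics

    V_F = 30.0   # assumed free-flow speed, m/s
    K_J = 0.15   # assumed jam density, vehicles per meter

    def estimate_density(speeds_mps):
        """Invert v = V_F * (1 - k / K_J) at the vehicle's mean speed."""
        v_mean = statistics.mean(speeds_mps)
        return max(K_J * (1.0 - v_mean / V_F), 0.0)

    # One vehicle's sampled speeds along a congested segment (hypothetical).
    trace = [12.0, 9.5, 7.0, 10.5, 8.0, 11.0]
    k_hat = estimate_density(trace)
    print(f"mean speed {statistics.mean(trace):.1f} m/s "
          f"-> ~{k_hat * 1000:.0f} vehicles/km")
    ```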

    Because far less data is required, this method is faster and makes data management simpler. Most importantly, it also has the potential to make traffic data less expensive and more accessible to those who need it.

    Q: Who are some of the parties that would benefit from new technologies?

    A: More accessible and sophisticated traffic data would benefit more than just drivers seeking smoother, faster routes. It would also enable state and city departments of transportation (DOTs) to make local and collective interventions that advance the critical transportation objectives of equity, safety, and sustainability.

    As a safety solution, new data collection technologies could pinpoint dangerous driving conditions on a much finer scale to inform improved traffic calming measures. And since socially vulnerable communities experience traffic violence disproportionately, these interventions would have the added benefit of addressing pressing equity concerns. 

    There would also be an environmental benefit. DOTs could mitigate vehicle emissions by identifying minute deviations in traffic flow. This would present them with more opportunities to mitigate the idling and congestion that generate excess fuel consumption.  

    As we’ve seen, these three challenges have become increasingly acute, especially in urban areas. Yet, the data needed to address them exists already — and is being gathered by smartphones and telematics devices all over the world. So, to ensure a safer, more sustainable road network, it will be crucial to incorporate these data collection methods into our decision-making.

  • Helping companies optimize their websites and mobile apps

    Creating a good customer experience increasingly means creating a good digital experience. But metrics like pageviews and clicks offer limited insight into how much customers actually like a digital product.

    That’s the problem the digital optimization company Amplitude is solving. Amplitude gives companies a clearer picture of how users interact with their digital products, helping them understand exactly which features to promote or improve.

    “It’s all about using product data to drive your business,” says Amplitude CEO Spenser Skates ’10, who co-founded the company with Curtis Liu ’10 and Stanford University graduate Jeffrey Wang. “Mobile apps and websites are really complex. The average app or website will have thousands of things you can do with it. The question is how you know which of those things are driving a great user experience and which parts are really frustrating for users.”

    Amplitude’s database can gather millions of details about how users behave inside an app or website and allow customers to explore that information without needing data science degrees.

    “It provides an interface for very easy, accessible ways of looking at your data, understanding your data, and asking questions of that data,” Skates says.

    Amplitude, which recently announced it will be going public, is already helping 23 of the 100 largest companies in the U.S. Customers include media companies like NBC, tech companies like Twitter, and retail companies like Walmart.

    “Our platform helps businesses understand how people are using their apps and websites so they can create better versions of their products,” Skates says. “It’s all about creating a really compelling product.”

    Learning entrepreneurship

    The founders say their years at MIT were among the best of their lives. Skates and Liu were undergraduates from 2006 to 2010. Skates majored in biological engineering while Liu majored in mathematics and electrical engineering and computer science. The two first met as opponents in MIT’s Battlecode competition, in which students use artificial intelligence algorithms to control teams of robots that compete in a strategy game against other teams. The following year they teamed up.

    “There are a lot of parallels between what you’re trying to do in Battlecode and what you end up having to do in the early stages of a startup,” Liu says. “You have limited resources, limited time, and you’re trying to accomplish a goal. What we found is trying a lot of different things, putting our ideas out there and testing them with real data, really helped us focus on the things that actually mattered. That method of iteration and continual improvement set the foundation for how we approach building products and startups.”

    Liu and Skates next participated in the MIT $100K Entrepreneurship Competition with an idea for a cloud-based music streaming service. After graduation, Skates began working in finance and Liu got a job at Google, but they continued pursuing startup ideas on the side, including a website that let alumni see where their classmates ended up and a marketplace for finding photographers.

    A year after graduation, the founders decided to quit their jobs and work on a startup full time. Skates moved into Liu’s apartment in San Francisco, setting up a mattress on the floor, and they began working on a project that became Sonalight, a voice recognition app. As part of the project, the founders built an internal system to understand where users got stuck in the app and what features were used the most.

    Despite getting over 100,000 downloads, the founders decided Sonalight was a little too early for its time and started thinking their analytics feature could be useful to other companies. They spoke with about 30 different product teams to learn more about what companies wanted from their digital analytics. Amplitude was officially founded in 2012.

    Amplitude gathers fine details about digital product usage, parsing out individual features and actions to give customers a better view of how their products are being used. Using the data in Amplitude’s intuitive, no-code interface, customers can make strategic decisions like whether to launch a feature or change a distribution channel.
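
    The kind of question such an interface answers can be sketched with a few lines of event analysis (a hypothetical event log and funnel for illustration, not Amplitude’s API):

    ```python
    # Of the users who opened the app, how many reached each later step?
    import pandas as pd

    events = pd.DataFrame({
        "user_id": [1, 1, 1, 2, 2, 3, 3, 4],
        "event": ["open_app", "view_item", "purchase",
                  "open_app", "view_item",
                  "open_app", "purchase",
                  "open_app"],
    })

    funnel = ["open_app", "view_item", "purchase"]
    reached = None
    for step in funnel:
        users = set(events.loc[events["event"] == step, "user_id"])
        reached = users if reached is None else reached & users
        print(f"{step}: {len(reached)} users")
    # open_app: 4 users; view_item: 2 users; purchase: 1 user
    ```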

    The platform is designed to ease the bottlenecks that arise when executives, product teams, salespeople, and marketers want to answer questions about customer experience or behavior but need the data science team to crunch the numbers for them.

    “It’s a very collaborative interface to encourage customers to work together to understand how users are engaging with their apps,” Skates says.

    Amplitude’s database also uses machine learning to segment users, predict user outcomes, and uncover novel correlations. Earlier this year, the company unveiled a service called Recommend that helps companies create personalized user experiences across their entire platform in minutes. The service goes beyond demographics to personalize customer experiences based on what users have done or seen before within the product.

    “We’re very conscious on the privacy front,” Skates says. “A lot of analytics companies will resell your data to third parties or use it for advertising purposes. We don’t do any of that. We’re only here to provide product insights to our customers. We’re not using data to track you across the web. Everyone expects Netflix to use the data on what you’ve watched before to recommend what to watch next. That’s effectively what we’re helping other companies do.”

    Optimizing digital experiences

    The meditation app Calm is on a mission to help users build habits that improve their mental wellness. Using Amplitude, the company learned that users most often use the app to get better sleep and reduce stress. The insights helped Calm’s team double down on content geared toward those goals, launching “sleep stories” to help users unwind at the end of each day and adding content around anxiety relief and relaxation. Sleep stories are now Calm’s most popular type of content, and Calm has grown rapidly to millions of people around the world.

    Calm’s story shows the power of letting user behavior drive product decisions. Amplitude has also helped the online fundraising site GoFundMe increase donations by showing users more compelling campaigns and the exercise bike company Peloton realize the importance of social features like leaderboards.

    Moving forward, the founders believe Amplitude’s platform will continue helping companies adapt to an increasingly digital world in which users expect more compelling, personalized experiences.

    “If you think about the online experience for companies today compared to 10 years ago, now [digital] is the main point of contact, whether you’re a media company streaming content, a retail company, or a finance company,” Skates says. “That’s only going to continue. That’s where we’re trying to help.”