More stories

  • New model predicts how shoe properties affect a runner’s performance

    A good shoe can make a huge difference for runners, from career marathoners to couch-to-5K first-timers. But every runner is unique, and a shoe that works for one might trip up another. Outside of trying on a rack of different designs, there’s no quick and easy way to know which shoe best suits a person’s particular running style.

    MIT engineers are hoping to change that with a new model that predicts how certain shoe properties will affect a runner’s performance.

    The simple model incorporates a person’s height, weight, and other general dimensions, along with shoe properties such as stiffness and springiness along the midsole. With this input, the model then simulates a person’s running gait, or how they would run, in a particular shoe.

    Using the model, the researchers can simulate how a runner’s gait changes with different shoe types. They can then pick out the shoe that produces the best performance, which they define as how well it minimizes the runner’s expended energy.

    While the model can accurately simulate changes in a runner’s gait when comparing two very different shoe types, it is less discerning when comparing relatively similar designs, including most commercially available running shoes. For this reason, the researchers envision the current model would be best used as a tool for shoe designers looking to push the boundaries of sneaker design.

    “Shoe designers are starting to 3D print shoes, meaning they can now make them with a much wider range of properties than with just a regular slab of foam,” says Sarah Fay, a postdoc in MIT’s Sports Lab and the Institute for Data, Systems, and Society (IDSS). “Our model could help them design really novel shoes that are also high-performing.”

    The team is planning to improve the model, in hopes that consumers can one day use a similar version to pick shoes that fit their personal running style.

    “We’ve allowed for enough flexibility in the model that it can be used to design custom shoes and understand different individual behaviors,” Fay says. “Way down the road, we imagine that if you send us a video of yourself running, we could 3D print the shoe that’s right for you. That would be the moonshot.”

    The new model is reported in a study appearing this month in the Journal of Biomechanical Engineering. The study is authored by Fay and Anette “Peko” Hosoi, professor of mechanical engineering at MIT.

    Running, revamped

    The team’s new model grew out of talks with collaborators in the sneaker industry, where designers have started to 3D print shoes at commercial scale. These designs incorporate 3D-printed midsoles that resemble intricate scaffolds, the geometry of which can be tailored to give a certain bounce or stiffness in specific locations across the sole.

    “With 3D printing, designers can tune everything about the material response locally,” Hosoi says. “And they came to us and essentially said, ‘We can do all these things. What should we do?’”

    “Part of the design problem is to predict what a runner will do when you put an entirely new shoe on them,” Fay adds. “You have to couple the dynamics of the runner with the properties of the shoe.”

    Fay and Hosoi looked first to represent a runner’s dynamics using a simple model. They drew inspiration from Thomas McMahon, a leader in the study of biomechanics at Harvard University, who in the 1970s used a very simple “spring and damper” representation to capture a runner’s essential gait mechanics. Using this gait model, he predicted how fast a person could run on various track types, from traditional concrete surfaces to more rubbery materials. The model showed that runners should run faster on softer, bouncier tracks that supported a runner’s natural gait.

    Though this may be unsurprising today, the insight was a revelation at the time, and it prompted Harvard to revamp its indoor track, a move that quickly produced a string of track records as runners found they could run much faster on the softer, springier surface.

    “McMahon’s work showed that, even if we don’t model every single limb and muscle and component of the human body, we’re still able to create meaningful insights in terms of how we design for athletic performance,” Fay says.

    Gait cost

    Following McMahon’s lead, Fay and Hosoi developed a similar, simplified model of a runner’s dynamics. The model represents a runner as a center of mass, with a hip that can rotate and a leg that can stretch. The leg is connected to a box-like shoe, with springiness and shock absorption that can be tuned, both vertically and horizontally.

    They reasoned that they should be able to input into the model a person’s basic dimensions, such as their height, weight, and leg length, along with a shoe’s material properties, such as the stiffness of the front and back midsole, and use the model to simulate what a person’s gait is likely to be when running in that shoe.
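
    To make this concrete, here is a minimal sketch, in Python, of a one-dimensional spring-and-damper stance phase with a tunable shoe. The parameter names, values, and simplified dynamics are illustrative assumptions, not the authors’ actual formulation.

    ```python
    def simulate_stance(m=70.0, k_leg=20e3, k_shoe=120e3, c_shoe=400.0,
                        v_land=-2.0, dt=1e-4):
        """One stance phase of a spring-mass runner whose leg spring acts in
        series with a tunable shoe spring-damper. All values are illustrative."""
        g = 9.81
        k_eff = k_leg * k_shoe / (k_leg + k_shoe)   # series-spring stiffness
        y, v, t = 0.0, v_land, 0.0                  # compression (m), velocity, time
        peak_force, dissipated = 0.0, 0.0
        while y <= 0.0 and t < 1.0:                 # time cap guards overdamped cases
            f_ground = max(0.0, -k_eff * y - c_shoe * v)  # ground can only push
            if f_ground > 0.0:
                dissipated += c_shoe * v**2 * dt    # energy lost in the midsole
            peak_force = max(peak_force, f_ground)
            v += (f_ground / m - g) * dt
            y += v * dt
            t += dt
        return peak_force, dissipated

    # Compare a stiff, lightly damped shoe with a soft, heavily damped one.
    for label, k_s, c_s in [("stiff", 200e3, 200.0), ("soft", 60e3, 500.0)]:
        f, e = simulate_stance(k_shoe=k_s, c_shoe=c_s)
        print(f"{label} shoe: peak force {f:7.0f} N, energy lost {e:5.1f} J")
    ```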

    But they also realized that a person’s gait can depend on a less definable property, which they call the “biological cost function” — a quality that a runner might not consciously be aware of but nevertheless may try to minimize whenever they run. The team reasoned that if they can identify a biological cost function that is general to most runners, then they might predict not only a person’s gait for a given shoe but also which shoe produces the gait corresponding to the best running performance.

    With this in mind, the team looked to a previous treadmill study, which recorded detailed measurements of runners, such as the force of their impacts, the angle and motion of their joints, the spring in their steps, and the work of their muscles as they ran, each in the same type of running shoe.

    Fay and Hosoi hypothesized that each runner’s actual gait arises not only from their personal dimensions and shoe properties, but also from a subconscious goal to minimize one or more as-yet-unknown biological measures. To reveal these measures, the team used their model to simulate each runner’s gait multiple times. Each time, they programmed the model to assume the runner minimized a different biological cost, such as the degree to which they swing their leg or the impact that they make with the treadmill. They then compared each modeled gait with the runner’s actual gait to see which modeled gait, and hence which assumed cost, matched best.
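
    In outline, that identification procedure might look like the sketch below. The candidate cost names, the toy gait simulator, and the distance metric are placeholders for illustration, not the paper’s actual implementation.

    ```python
    import numpy as np

    # Hypothetical candidate biological costs the model can be asked to minimize.
    CANDIDATE_COSTS = ["leg_swing", "ground_impact", "leg_work", "joint_torque"]

    def simulate_gait(runner, shoe, minimized_cost):
        """Toy stand-in for the gait simulation: deterministically maps
        (runner, shoe, assumed cost) to a gait feature vector, e.g. stride
        length, contact time, and joint angles."""
        seed = abs(hash((runner, shoe, minimized_cost))) % 2**32
        return np.random.default_rng(seed).normal(size=4)

    def best_matching_cost(runner, shoe, observed_gait):
        """Which assumed cost best reproduces the runner's measured gait?"""
        errors = {cost: np.linalg.norm(simulate_gait(runner, shoe, cost) - observed_gait)
                  for cost in CANDIDATE_COSTS}
        return min(errors, key=errors.get)

    # A runner whose true (hidden) objective is minimizing ground impact:
    observed = simulate_gait("runner_1", "shoe_A", "ground_impact") + 0.05
    print(best_matching_cost("runner_1", "shoe_A", observed))  # "ground_impact"
    ```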

    In the end, the team found that most runners tend to minimize two costs: the impact their feet make with the treadmill and the amount of energy their legs expend.

    “If we tell our model, ‘Optimize your gait on these two things,’ it gives us really realistic-looking gaits that best match the data we have,” Fay explains. “This gives us confidence that the model can predict how people will actually run, even if we change their shoe.”

    As a final step, the researchers simulated a wide range of shoe styles and used the model to predict a runner’s gait and how efficient each gait would be for a given type of shoe.

    “In some ways, this gives you a quantitative way to design a shoe for a 10K versus a marathon shoe,” Hosoi says. “Designers have an intuitive sense for that. But now we have a mathematical understanding that we hope designers can use as a tool to kickstart new ideas.”

    This research is supported, in part, by adidas.

  • A new way to integrate data with physical objects

    To get a sense of what StructCode is all about, says Mustafa Doğa Doğan, think of Superman. Not the “faster than a speeding bullet” and “more powerful than a locomotive” version, but a Superman, or Superwoman, who sees the world differently from ordinary mortals — someone who can look around a room and glean all kinds of information about ordinary objects that is not apparent to people with less penetrating faculties.

    That, in a nutshell, is “the high-level idea behind StructCode,” explains Doğan, a PhD student in electrical engineering and computer science at MIT and an affiliate of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “The goal is to change the way we interact with objects” — to make those interactions more meaningful and more meaning-laden — “by embedding information into objects in ways that can be readily accessed.”

    StructCode grew out of an effort called InfraredTags, which Doğan and other colleagues introduced in 2022. That work, as well as the current project, was carried out in the laboratory of MIT Associate Professor Stefanie Mueller, Doğan’s advisor, who has taken part in both projects. In last year’s approach, “invisible” tags, which can be seen only by cameras capable of detecting infrared light, were used to reveal information about physical objects. The drawback there was that many cameras cannot perceive infrared light. Moreover, the method for fabricating these objects and affixing the tags to their surfaces relied on 3D printers, which tend to be very slow and can often make only small objects.

    StructCode, at least in its original version, relies on laser-cut objects, which can be fabricated within minutes rather than the hours a 3D printer might take. Information can be extracted from these objects, moreover, with the RGB cameras commonly found in smartphones; the ability to operate in the infrared range of the spectrum is not required.

    In their initial demonstrations of the idea, the MIT-led team decided to construct their objects out of wood, making pieces such as furniture, picture frames, flowerpots, and toys that are well suited to laser-cut fabrication. A key question had to be resolved: How can information be stored in a way that is unobtrusive and durable, compared with externally attached bar codes and QR codes, and that will not undermine an object’s structural integrity?

    The solution that the team has come up with, for now, is to rely on joints, which are ubiquitous in wooden objects made of more than one component. Perhaps the most familiar is the finger joint, whose zigzag pattern lets two wooden pieces meet at right angles: every protruding “finger” along the joint of the first piece fits into a corresponding “gap” in the joint of the second, and every gap in the first is filled by a finger from the second.

    “Joints have these repeating features, which are like repeating bits,” Doğan says. To create a code, the researchers slightly vary the length of the gaps or fingers. A standard length is assigned a 1, a slightly shorter length a 0, and a slightly longer length a 2. The encoding scheme is based on the sequence of these digits observed along a joint; every string of four digits can take 81 (3⁴) possible values.
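
    As a concrete illustration of that ternary scheme, the sketch below turns a small integer message into a sequence of finger lengths and recovers it from noisy measurements. The nominal length, tolerance, and digit ordering are made-up values, and the robustness machinery a real system needs is omitted.

    ```python
    # Each finger/gap length encodes one base-3 digit (values are illustrative).
    STANDARD = 10.0   # mm, encodes digit 1
    DELTA = 1.0       # mm; shorter encodes 0, longer encodes 2

    def encode(message: int, n_digits: int) -> list[float]:
        """Turn an integer into a list of finger lengths (base-3, little-endian)."""
        lengths = []
        for _ in range(n_digits):
            digit = message % 3
            lengths.append(STANDARD + (digit - 1) * DELTA)
            message //= 3
        return lengths

    def decode(lengths: list[float]) -> int:
        """Recover the integer from (possibly noisy) measured lengths."""
        value = 0
        for i, length in enumerate(lengths):
            digit = min(2, max(0, round((length - STANDARD) / DELTA) + 1))
            value += digit * 3**i
        return value

    assert decode(encode(57, 4)) == 57   # four digits cover 0..80 (3**4 = 81)
    ```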

    The team also demonstrated ways of encoding messages in “living hinges” — a kind of joint that is made by taking a flat, rigid piece of material and making it bendable by cutting a series of parallel, vertical lines. As with the finger joints, the distance between these lines can be varied: 1 being the standard length, 0 being a slightly shorter length, and 2 being slightly longer. And in this way, a code can be assembled from an object that contains a living hinge.

    The idea is described in a paper, “StructCode: Leveraging Fabrication Artifacts to Store Data in Laser-Cut Objects,” that was presented this month at the 2023 ACM Symposium on Computational Fabrication in New York City. Doğan, the paper’s first author, is joined by Mueller and four coauthors — recent MIT alumna Grace Tang ’23, MNG ’23; MIT undergraduate Richard Qi; University of California at Berkeley graduate student Vivian Hsinyueh Chan; and Cornell University Assistant Professor Thijs Roumen.

    “In the realm of materials and design, there is often an inclination to associate novelty and innovation with entirely new materials or manufacturing techniques,” notes Elvin Karana, a professor of materials innovation and design at the Delft University of Technology. One of the things that impresses Karana most about StructCode is that it provides a novel means of storing data by “applying a commonly used technique like laser cutting and a material as ubiquitous as wood.”

    The idea for StructCode, adds University of Colorado computer scientist Ellen Yi-Luen Do, is “simple, elegant, and totally makes sense. It’s like having the Rosetta Stone to help decipher Egyptian hieroglyphs.”

    Patrick Baudisch, a computer scientist at the Hasso Plattner Institute in Germany, views StructCode as “a great step forward for personal fabrication. It takes a key piece of functionality that’s only offered today for mass-produced goods and brings it to custom objects.”

    Here, in brief, is how it works: First, a laser cutter, guided by a model created via StructCode, fabricates an object into which encoded information has been embedded. After downloading a StructCode app, a user can decode the hidden message by pointing a cellphone camera at the object; aided by StructCode software, it detects the subtle variations in length in the object’s outward-facing joints or living hinges.

    The process is even easier if the user is equipped with augmented reality glasses, Doğan says. “In that case, you don’t need to point a camera. The information comes up automatically.” And that can give people more of the “superpowers” that the designers of StructCode hope to confer.

    “The object doesn’t need to contain a lot of information,” Doğan adds. “Just enough — in the form of, say, URLs — to direct people to places they can find out what they need to know.”

    Users might be sent to a website where they can obtain information about the object — how to care for it, and perhaps eventually how to disassemble it and recycle (or safely dispose of) its contents. A flowerpot that was made with living hinges might inform a user, based on records that are maintained online, as to when the plant inside the pot was last watered and when it needs to be watered again. Children examining a toy crocodile could, through StructCode, learn scientific details about various parts of the animal’s anatomy. A picture frame made with finger joints modified by StructCode could help people find out about the painting inside the frame and about the person (or persons) who created the artwork, perhaps linking to a video of the artist talking about the work directly.

    “This technique could pave the way for new applications, such as interactive museum exhibits,” says Raf Ramakers, a computer scientist at Hasselt University in Belgium. “It holds the potential for broadening the scope of how we perceive and interact with everyday objects” — which is precisely the goal that motivates the work of Doğan and his colleagues.

    But StructCode is not the end of the line, as far as Doğan and his collaborators are concerned. The same general approach could be adapted to other manufacturing techniques besides laser cutting, and information storage doesn’t have to be confined to the joints of wooden objects. Data could be represented, for instance, in the texture of leather, within the pattern of woven or knitted pieces, or concealed by other means within an image. Doğan is excited by the breadth of available options and by the fact that their “explorations into this new realm of possibilities, designed to make objects and our world more interactive, are just beginning.”

  • Design’s new frontier

    In the 1960s, the advent of computer-aided design (CAD) sparked a revolution in design. For his PhD thesis in 1963, MIT Professor Ivan Sutherland developed Sketchpad, a game-changing software program that enabled users to draw, move, and resize shapes on a computer. Over the next few decades, CAD software reshaped how everything from consumer products to buildings and airplanes was designed.

    “CAD was part of the first wave in computing in design. The ability of researchers and practitioners to represent and model designs using computers was a major breakthrough and still is one of the biggest outcomes of design research, in my opinion,” says Maria Yang, Gail E. Kendall Professor and director of MIT’s Ideation Lab.

    Innovations in 3D printing during the 1980s and 1990s expanded CAD’s capabilities beyond traditional injection molding and casting methods, giving designers even more flexibility. Designers could sketch, ideate, and develop prototypes or models faster and more efficiently. Meanwhile, with the push of a button, software like that developed by Professor Emeritus David Gossard of MIT’s CAD Lab could solve equations simultaneously to produce a new geometry on the fly.

    In recent years, mechanical engineers have expanded the computing tools they use to ideate, design, and prototype. More sophisticated algorithms and the explosion of machine learning and artificial intelligence technologies have sparked a second revolution in design engineering.

    Researchers and faculty at MIT’s Department of Mechanical Engineering are utilizing these technologies to re-imagine how the products, systems, and infrastructures we use are designed. These researchers are at the forefront of the new frontier in design.

    Computational design

    Faez Ahmed wants to reinvent the wheel, or at least the bicycle wheel. He and his team at MIT’s Design Computation & Digital Engineering Lab (DeCoDE) use an artificial intelligence-driven design method that can generate entirely novel and improved designs for a range of products — including the traditional bicycle. They create advanced computational methods to blend human-driven design with simulation-based design.

    “The focus of our DeCoDE lab is computational design. We are looking at how we can create machine learning and AI algorithms to help us discover new designs that are optimized based on specific performance parameters,” says Ahmed, an assistant professor of mechanical engineering at MIT.

    For their work using AI-driven design for bicycles, Ahmed and his collaborator Professor Daniel Frey wanted to make it easier to design customizable bicycles, and by extension, encourage more people to use bicycles over transportation methods that emit greenhouse gases.

    To start, the group gathered a dataset of 4,500 bicycle designs. Using this dataset, they tested the limits of what machine learning could do. First, they developed algorithms to group similar-looking bicycles together and explore the design space. They then created machine learning models that could successfully predict which components are key to identifying a bicycle style, such as a road bike versus a mountain bike.
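
    A sketch of what those first steps might look like appears below; the feature layout, the use of k-means for grouping, and a random-forest classifier for style prediction are assumptions for illustration, not necessarily the lab’s actual methods.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.ensemble import RandomForestClassifier

    # Toy stand-in for the 4,500-design dataset: rows are bikes, columns are
    # parametric features (e.g., wheel size, frame angles; names are invented).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(4500, 6))
    styles = rng.integers(0, 3, size=4500)      # e.g., road / mountain / hybrid

    # Step 1: group similar-looking designs to explore the design space.
    clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(X)
    print(np.bincount(clusters))                # how designs spread across groups

    # Step 2: learn which components are key to identifying a style.
    clf = RandomForestClassifier(random_state=0).fit(X, styles)
    print(clf.feature_importances_)             # which features mark a style
    ```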

    Once the algorithms were good enough at identifying bicycle designs and parts, the team proposed novel machine learning tools that could use this data to create a unique and creative design for a bicycle based on certain performance parameters and rider dimensions.

    Ahmed used a generative adversarial network, or GAN, as the basis of this model. GAN models utilize neural networks that can create new designs based on vast amounts of data. However, using GAN models alone would result in homogeneous designs that lack novelty and can’t be assessed in terms of performance. To address these issues, Ahmed developed a new method he calls “PaDGAN,” for performance-augmented diverse GAN.
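
    A heavily simplified sketch of the idea behind such a loss follows. The RBF similarity kernel, the quality weighting, and all constants are illustrative; the published PaDGAN formulation differs in its details.

    ```python
    import torch

    def padgan_generator_loss(fake_scores, designs, quality,
                              sigma=1.0, gamma=1.0, weight=0.5):
        """Adversarial term plus a determinantal-point-process (DPP) style term
        that rewards batches of designs that are both diverse and high-quality."""
        adv = -fake_scores.mean()                 # non-saturating GAN term
        # Similarity kernel between designs in the batch (RBF, an assumption).
        sim = torch.exp(-torch.cdist(designs, designs) ** 2 / (2 * sigma**2))
        # Modulate similarity by per-design quality, quality-weighted-DPP style.
        q = quality.clamp(min=1e-6) ** gamma
        L = sim * q[:, None] * q[None, :]
        # Maximizing log det(L + I) favors diverse, high-quality batches.
        dpp = -torch.logdet(L + torch.eye(len(designs)))
        return adv + weight * dpp

    # Toy usage: 8 designs with 5 parameters each; critic scores and
    # performance estimates are faked here.
    designs = torch.randn(8, 5, requires_grad=True)
    loss = padgan_generator_loss(torch.randn(8), designs, torch.rand(8))
    loss.backward()                               # gradients flow to the designs
    ```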

    “When we apply this type of model, what we see is that we can get large improvements in the diversity, quality, as well as novelty of the designs,” Ahmed explains.

    Using this approach, Ahmed’s team developed an open-source computational design tool for bicycles freely available on their lab website. They hope to further develop a set of generalizable tools that can be used across industries and products.

    Longer term, Ahmed has his sights set on loftier goals. He hopes the computational design tools he develops could lead to “design democratization,” putting more power in the hands of the end user.

    “With these algorithms, you can have more individualization where the algorithm assists a customer in understanding their needs and helps them create a product that satisfies their exact requirements,” he adds.

    Using algorithms to democratize the design process is a goal shared by Stefanie Mueller, an associate professor in electrical engineering and computer science and mechanical engineering.

    Personal fabrication

    Platforms like Instagram give users the freedom to instantly edit their photographs or videos using filters. In one click, users can alter the palette, tone, and brightness of their content by applying filters that range from bold colors to sepia-toned or black-and-white. Mueller, X-Window Consortium Career Development Professor, wants to bring this concept of the Instagram filter to the physical world.

    “We want to explore how digital capabilities can be applied to tangible objects. Our goal is to bring reprogrammable appearance to the physical world,” explains Mueller, director of the HCI Engineering Group based out of MIT’s Computer Science and Artificial Intelligence Laboratory.

    Mueller’s team utilizes a combination of smart materials, optics, and computation to advance personal fabrication technologies that would allow end users to alter the design and appearance of the products they own. They tested this concept in a project they dubbed “PhotoChromeleon.”

    First, a mix of photochromic cyan, magenta, and yellow dyes is airbrushed onto an object, in this instance a 3D sculpture of a chameleon. Using software they developed, the team sketches the exact color pattern they want to achieve on the object itself. An ultraviolet light shines on the object to activate the dyes.

    To actually create the physical pattern on the object, Mueller has developed an optimization algorithm to use alongside a normal office projector outfitted with red, green, and blue LED lights. These lights shine on specific pixels on the object for a given period of time to physically change the makeup of the photochromic pigments.

    “This fancy algorithm tells us exactly how long we have to shine the red, green, and blue light on every single pixel of an object to get the exact pattern we’ve programmed in our software,” says Mueller.
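
    A minimal sketch of that per-pixel computation, assuming a toy exponential-bleaching model and a made-up rate matrix; the actual PhotoChromeleon algorithm is more involved.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Illustrative bleaching-rate matrix: entry [dye, led] says how strongly
    # each LED channel desaturates each photochromic dye (invented values).
    K = np.array([[0.9, 0.1, 0.1],    # cyan dye vs. red/green/blue light
                  [0.1, 0.8, 0.2],    # magenta dye
                  [0.1, 0.2, 0.7]])   # yellow dye

    def exposure_times(target_cmy):
        """LED exposure times that bleach fully saturated dyes down to the
        target CMY values, under a toy exponential-decay model."""
        # c_target = exp(-K @ t)  =>  K @ t = -log(c_target), with t >= 0.
        b = -np.log(np.clip(target_cmy, 1e-3, 1.0))
        t, _ = nnls(K, b)
        return t   # red, green, blue exposure durations (arbitrary units)

    print(exposure_times(np.array([0.2, 0.6, 0.9])))
    ```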

    Giving this freedom to the end user enables limitless possibilities. Mueller’s team has applied this technology to iPhone cases, shoes, and even cars. In the case of shoes, Mueller envisions a shoebox embedded with UV and LED light projectors. Users could put their shoes in the box overnight and the next day have a pair of shoes in a completely new pattern.

    Mueller wants to expand her personal fabrication methods to the clothes we wear. Rather than utilize the light projection technique developed in the PhotoChromeleon project, her team is exploring the possibility of weaving LEDs directly into clothing fibers, allowing people to change their shirt’s appearance as they wear it. These personal fabrication technologies could completely alter consumer habits.

    “It’s very interesting for me to think about how these computational techniques will change product design on a high level,” adds Mueller. “In the future, a consumer could buy a blank iPhone case and update the design on a weekly or daily basis.”

    Computational fluid dynamics and participatory design

    Another team of mechanical engineers, including Sili Deng, the Brit (1961) & Alex (1949) d’Arbeloff Career Development Professor, is developing a different kind of design tool that could have a large impact on individuals in low- and middle-income countries across the world.

    As Deng walked down the hallway of Building 1 on MIT’s campus, a monitor playing a video caught her eye. The video featured work done by mechanical engineers and MIT D-Lab on developing cleaner burning briquettes for cookstoves in Uganda. Deng immediately knew she wanted to get involved.

    “As a combustion scientist, I’ve always wanted to work on such a tangible real-world problem, but the field of combustion tends to focus more heavily on the academic side of things,” explains Deng.

    After reaching out to colleagues in MIT D-Lab, Deng joined a collaborative effort to develop a new cookstove design tool for the 3 billion people across the world who burn solid fuels to cook and heat their homes. These stoves often emit soot and carbon monoxide, contributing not only to millions of deaths each year but also to the world’s greenhouse gas emissions problem.

    The team is taking a three-pronged approach to developing this solution, using a combination of participatory design, physical modeling, and experimental validation to create a tool that will lead to the production of high-performing, low-cost energy products.

    Deng and her team in the Deng Energy and Nanotechnology Group use physics-based modeling for the combustion and emission process in cookstoves.

    “My team is focused on computational fluid dynamics. We use computational and numerical studies to understand the flow field where the fuel is burned and releases heat,” says Deng.

    These flow mechanics are crucial to understanding how to minimize heat loss and make cookstoves more efficient, as well as learning how dangerous pollutants are formed and released in the process.

    Using computational methods, Deng’s team performs three-dimensional simulations of the complex chemistry and transport coupling at play in the combustion and emission processes. They then use these simulations to build a combustion model for how fuel is burned and a pollution model that predicts carbon monoxide emissions.
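
    For flavor, here is a toy one-dimensional plug-flow sketch of the kind of coupling involved: fuel burning along a channel, with part of it passing through carbon monoxide that oxidizes only if it has enough residence time. The first-order kinetics and every rate constant are illustrative placeholders, nothing like the detailed chemistry the group actually models.

    ```python
    import numpy as np

    def plug_flow_co(u=0.5, L=0.3, k_burn=8.0, k_co_ox=5.0, y_co=0.3, n=300):
        """Toy 1-D plug-flow model: u is gas velocity (m/s), L channel length
        (m), k_burn and k_co_ox first-order rate constants (1/s), y_co the
        fraction of burned fuel that first appears as CO."""
        dx = L / n
        fuel, co = 1.0, 0.0                  # normalized mass fractions
        for _ in range(n):
            dt = dx / u                      # residence time in this slice
            burned = fuel * (1 - np.exp(-k_burn * dt))
            fuel -= burned
            co += y_co * burned              # some fuel burns only partway to CO
            co *= np.exp(-k_co_ox * dt)      # CO oxidizes to CO2 downstream
        return fuel, co                      # unburned fuel and CO at the outlet

    # Faster flow means less residence time: more CO escapes unoxidized.
    for u in (0.3, 0.6, 1.2):
        f, co = plug_flow_co(u=u)
        print(f"u={u} m/s: unburned fuel {f:.3f}, CO out {co:.4f}")
    ```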

    Deng’s models are used by a group led by Daniel Sweeney in MIT D-Lab for experimental validation in stove prototypes. Finally, Professor Maria Yang uses participatory design methods to integrate user feedback, ensuring the design tool can actually be used by people across the world.

    The end goal for this collaborative team is to not only provide local manufacturers with a prototype they could produce themselves, but to also provide them with a tool that can tweak the design based on local needs and available materials.

    Deng sees wide-ranging applications for the computational fluid dynamics her team is developing.

    “We see an opportunity to use physics-based modeling, augmented with a machine learning approach, to come up with chemical models for practical fuels that help us better understand combustion. Therefore, we can design new methods to minimize carbon emissions,” she adds.

    While Deng is utilizing simulations and machine learning at the molecular level to improve designs, others are taking a more macro approach.

    Designing intelligent systems

    When it comes to intelligent design, Navid Azizan thinks big. He hopes to help create future intelligent systems that are capable of making decisions autonomously by using the enormous amounts of data emerging from the physical world. From smart robots and autonomous vehicles to smart power grids and smart cities, Azizan focuses on the analysis, design, and control of intelligent systems.

    Achieving such massive feats takes a truly interdisciplinary approach that draws upon various fields such as machine learning, dynamical systems, control, optimization, statistics, and network science, among others.

    “Developing intelligent systems is a multifaceted problem, and it really requires a confluence of disciplines,” says Azizan, assistant professor of mechanical engineering with a dual appointment in MIT’s Institute for Data, Systems, and Society (IDSS). “To create such systems, we need to go beyond standard approaches to machine learning, such as those commonly used in computer vision, and devise algorithms that can enable safe, efficient, real-time decision-making for physical systems.”

    For robot control to work in the complex, dynamic environments of the real world, real-time adaptation is key. If, for example, an autonomous vehicle is driving on ice, or a drone is operating in high winds, it needs to adapt to its new environment quickly.

    To address this challenge, Azizan and his collaborators at MIT and Stanford University have developed a new algorithm that combines adaptive control, a powerful methodology from control theory, with meta learning, a new machine learning paradigm.

    “This ‘control-oriented’ learning approach outperforms the existing ‘regression-oriented’ methods, which are mostly focused on just fitting the data, by a wide margin,” says Azizan.
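
    The flavor of that combination can be sketched in a few lines: a fixed feedback term plus a learned feature map phi whose weights adapt online. The scalar dynamics, gains, and feature map below are illustrative assumptions; in the actual method, meta-learning supplies the features offline.

    ```python
    import numpy as np

    def adaptive_step(x, x_ref, a_hat, phi, k_fb=2.0, gamma=5.0, dt=0.01):
        """One step of an adaptive controller: feedback plus a learned-feature
        term whose weights a_hat adapt online to cancel unknown dynamics."""
        e = x - x_ref
        u = -k_fb * e - a_hat @ phi(x)           # feedback + learned compensation
        a_hat = a_hat + gamma * e * phi(x) * dt  # gradient-type adaptation law
        return u, a_hat

    # A toy feature map stands in for what meta-learning would provide.
    phi = lambda x: np.array([x, np.sin(x), 1.0])
    a_hat = np.zeros(3)

    # Track x_ref = 0 under unknown dynamics d(x) the weights learn to cancel.
    x, dt = 1.0, 0.01
    for _ in range(500):
        u, a_hat = adaptive_step(x, 0.0, a_hat, phi)
        d = 0.8 * np.sin(x) + 0.3                # the "icy conditions" term
        x = x + (u + d) * dt
    print(f"final tracking error: {abs(x):.4f}")
    ```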

    Another critical aspect of deploying machine learning algorithms in physical systems that Azizan and his team hope to address is safety. Deep neural networks are a crucial part of autonomous systems. They are used for interpreting complex visual inputs and making data-driven predictions of future behavior in real time. However, Azizan urges caution.

    “These deep neural networks are only as good as their training data, and their predictions can often be untrustworthy in scenarios not covered by their training data,” he says. Making decisions based on such untrustworthy predictions could lead to fatal accidents in autonomous vehicles or other safety-critical systems.

    To avoid these potentially catastrophic events, Azizan proposes that it is imperative to equip neural networks with a measure of their uncertainty. When the uncertainty is high, they can then be switched to a “safe policy.”

    In pursuit of this goal, Azizan and his collaborators have developed a new algorithm known as SCOD, for “Sketching Curvature for Out-of-Distribution Detection.” The framework can be embedded within any deep neural network to equip it with a measure of its uncertainty.
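
    The switching logic itself is simple to sketch. Below, disagreement across a small ensemble stands in for the uncertainty signal purely for illustration; it is not SCOD’s curvature-sketching machinery, only the gate it would drive.

    ```python
    import numpy as np

    def act(x, policies, uncertainty, threshold):
        """Uncertainty-gated control: fall back to a safe policy whenever the
        model's uncertainty on the current input is too high."""
        if uncertainty(x) > threshold:
            return policies["safe"](x)      # e.g., slow down, pull over
        return policies["learned"](x)

    # Toy stand-in: variance across an ensemble as the uncertainty signal.
    ensemble = [lambda x, w=w: w * x for w in (0.8, 1.0, 1.3)]
    uncertainty = lambda x: float(np.var([m(x) for m in ensemble]))

    policies = {"learned": lambda x: float(np.mean([m(x) for m in ensemble])),
                "safe": lambda x: 0.0}

    for x in (0.1, 3.0):   # small input: models agree; large: they diverge
        print(f"x={x}: u={uncertainty(x):.3f} -> action={act(x, policies, uncertainty, 0.1)}")
    ```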

    “This algorithm is model-agnostic and can be applied to neural networks used in various kinds of autonomous systems, whether it’s drones, vehicles, or robots,” says Azizan.

    Azizan hopes to continue working on algorithms for even larger-scale systems. He and his team are designing efficient algorithms to better control supply and demand in smart energy grids. According to Azizan, even if we create the most efficient solar panels and batteries, we can never achieve a sustainable grid powered by renewable resources without the right control mechanisms.

    Mechanical engineers like Ahmed, Mueller, Deng, and Azizan serve as the key to realizing the next revolution of computing in design.

    “MechE is in a unique position at the intersection of the computational and physical worlds,” Azizan says. “Mechanical engineers build a bridge between theoretical, algorithmic tools and real, physical world applications.”

    Sophisticated computational tools, coupled with the ground truth mechanical engineers have in the physical world, could unlock limitless possibilities for design engineering, well beyond what could have been imagined in those early days of CAD.