More stories

    China's telecom market grows to $232.4B on cloud push

    China’s telecommunications sector climbed 8% last year to hit 1.47 trillion yuan ($232.41 billion) in revenue, while its internet services industry expanded 21.2% to reach 1.55 trillion yuan ($245.06 billion). Enterprise demand for new digital services, such as cloud computing, big data, and data centres, was the biggest driver fuelling the Chinese telecom market, according to the country’s Ministry of Industry and Information Technology (MIIT). Revenue from these digital services grew 27.8% and accounted for 44.5% of the industry’s overall revenue growth, the ministry said.

    It added that the 8% year-on-year growth was higher than the 4.1% rate clocked in 2020. Revenue from fixed-line, data, and internet services contributed 61.5% of the industry’s total. Pointing to China’s push for new infrastructure, specifically 5G networks, the MIIT said the country had rolled out some 1.43 million 5G base stations by end-2021, accounting for more than 60% of the global figure. It also noted that more than 300 Chinese cities had begun building gigabit optic fibre networks, adding that investments in internet broadband access climbed 40% year-on-year in 2021.

    Businesses in the local internet sector registered 132 billion yuan ($20.87 billion) in profits, clocking 13.3% year-on-year growth, reported state-owned news agency Xinhua, citing statistics from the ministry. These organisations also spent 5% more on research and development (R&D) last year, forking out 75.42 billion yuan ($11.92 billion). MIIT’s figures include Chinese businesses that register at least 5 million yuan ($790,500) in revenue from internet services. Companies drawing at least the same amount in revenue from China’s software and IT industry also saw growth last year, the ministry said, noting that there were more than 40,000 such companies in the sector. In particular, the IT services market expanded 20% year-on-year to register 6 trillion yuan ($948.6 billion) in revenue. Software vendors saw their combined profits climb 7.6% to almost 1.19 trillion yuan ($188.14 billion) in 2021, MIIT said. It revealed that China’s software exports tipped $52.1 billion last year, up 8.8% year-on-year.
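    As a quick sanity check on the figures above, the prior-year baselines can be backed out of the reported growth rates. A minimal sketch (the figures come from the article; the helper name is our own):

```python
def prior_year_revenue(current: float, yoy_growth: float) -> float:
    """Back out the previous year's revenue from this year's figure and YoY growth rate."""
    return current / (1 + yoy_growth)

# Figures reported above, in trillions of yuan.
telecom_2020 = prior_year_revenue(1.47, 0.08)    # telecom: ~1.36 trillion yuan in 2020
internet_2020 = prior_year_revenue(1.55, 0.212)  # internet services: ~1.28 trillion yuan
print(round(telecom_2020, 2), round(internet_2020, 2))
```

    The implied 2020 telecom baseline of roughly 1.36 trillion yuan is consistent with the 8% growth MIIT reports.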

    Telstra signs 16.5-year deal to support Viasat-3 in Asia-Pacific

    (Image: Al Drago/Bloomberg via Getty Images)
    Telstra and Viasat have signed a 16.5-year deal that will see the Australian telco build and manage the ground infrastructure needed for when the Viasat-3 geosynchronous satellite constellation eventually comes online. Under the deal, Telstra will collocate satellite access nodes at hundreds of its sites around Australia, as well as build and manage the links between those sites and multiple data centres that will house core networking equipment. When it announced Viasat-3, the company expected the first satellite to be launched in late 2019 or early 2020, but fate and the coronavirus intervened to push back those plans. “Later this year, we will begin the launch cycle of our Viasat-3 constellation, which is a trio of the highest capacity commercial geo satellites ever built. Each one delivering more than a terabit per second … total network throughput, which is about a thousand times more efficient than when you compare it to our first-generation satellites,” Viasat president of space and commercial networks Dave Ryan said on Wednesday. “This terabit class of satellites is truly unique, and offers the best industry bandwidth economics, especially when you compare it to other geo, medium earth orbit, or low earth orbit satellite systems.” The first satellite to be launched will service the Americas, followed by two more launched at six-month intervals to service EMEA and Asia-Pacific. The trio is expected to support download speeds of “well over” 150Mbps. Even though the Telstra network is limited to Australia, it will still support service outside the nation.

    “The vast majority of the equipment to be able to operate the Asia-Pacific region is what we’re talking about deploying in Australia,” Ryan told ZDNet. “There may, and probably will be, cases where some countries want their own hub, and so there may be relatively small amounts of equipment that might go into other countries as we expand out and meet those particular requirements. Some countries do want to have regional control, for example, of what goes in and out of their countries. All countries do to some degree, and sometimes that requires additional hubs put into their countries. But it’s a relatively small amount of equipment compared to what we are talking about working with Telstra on.” Telstra added it was in discussions on how it may use Viasat services in the future. At the same time, Telstra announced it would add 20,000km of new fibre to its optical network, supporting transmission rates of up to 650Gbps and express connectivity between Sydney and Melbourne, Brisbane, and Perth of up to 55Tbps. The telco said trials were already underway, with the proper build to commence before the end of this fiscal year and the hit to capital expenditure to be around AU$350 million over the 2023 to 2025 fiscal years. All up, both projects are set to cost between AU$1.4 billion and AU$1.6 billion, and are expected to contribute approximately AU$200 million to earnings by FY26 and be paid off in nine years. “They are also consistent with our strategy to create value from InfraCo, including considering monetisation opportunities over time,” Telstra CEO Andy Penn said. “Our strong cash flows and T25 growth ambitions provide us the flexibility to make these strategic infrastructure investments, whilst maintaining flexibility to return excess cash to shareholders. Together, these investments are expected to deliver incremental long-term accretive growth.” In November, Viasat announced it would acquire UK-based Inmarsat in a $7.3 billion transaction that is set to close later this year. The combined entity would have a fleet of 19 satellites in service with another 10 under construction, a global Ka-band footprint, and L-band assets and licences for all-weather narrowband and IoT connectivity. Viasat added it would introduce its beamforming, end-user terminal, and payload technologies to “unlock greater value” in Inmarsat’s L-band space assets.

    Comcast's JavaScript-based resource library expands accessibility features for Xfinity

    Comcast detailed new plans to increase accessibility across its Xfinity X1 and Flex products by employing a JavaScript framework called LightningJS.

    According to the cable provider, it will “integrate accessibility as a core component throughout the tech stack” as it continues the development of products like its Xfinity X1 home entertainment interface and Xfinity Flex streaming box. The transition is being powered by the integration of the aforementioned open-source JavaScript framework, LightningJS, which the company plans to employ as the underlying framework for accessibility components across its entertainment products. Benefits of integrating the technology at the base level already include easily implementable typeface and font-size changes and support for high-contrast color schemes for those with visual impairments; reduced-motion modes for viewers with motion sensitivities; and expanded support for a focus state for users with limited mobility who access Comcast’s services via a screen reader or other accessibility technology. Comcast said it is continuing to build out a shared component library of Lightning UI assets that it can deploy across its Xfinity products and via Sky and NBCUniversal. Additional functions currently being worked on include the ability to announce on-screen text, such as movie and category titles, and a text magnifier that can display selected fields in larger, high-contrast fonts. Comcast also plans to begin open sourcing its developments to help partners integrate accessibility features directly into apps and assets that they plan to deploy across its platforms. Tom Wlodkowski, vice president of accessibility at Comcast Cable, said: “Collaboration has always been central to our technology innovation and development, especially when it comes to inclusive product design. This is yet another example of how our teams are working together and with the larger development community to create better experiences for everyone.”

    FCC moves forward with plans to require broadband 'nutrition labels'

    The US Federal Communications Commission on Thursday proposed new rules that would require internet service providers (ISPs) to prominently display easy-to-understand labels to help consumers comparison shop for broadband services. Under the proposal, ISPs would have to display the labels — modeled after nutrition labels found on food packaging — at the point of sale. The proposed labels show prices, speeds, data allowances, network management practices, and other key broadband service information.
    An example of a blank label for fixed broadband. (Image: FCC)
    “Access to accurate, simple-to-understand information about broadband internet access services helps consumers make informed choices and is central to a well-functioning marketplace that encourages competition, innovation, low prices, and high-quality service,” the FCC wrote in a release Thursday. The FCC first approved this style of label for ISPs to display on a voluntary basis in 2016. Now, ISPs will be required to display this kind of information under the recently passed Infrastructure Investment and Jobs Act, which also included more than $65 billion to build out broadband networks and make broadband more affordable. Under the new law, the FCC has a year to set up the new broadband labeling requirements. The next step is for the FCC to hear from the public. The agency is seeking comments on questions such as: how consumers evaluate broadband service plans; whether the 2016 labels will assist consumers with the purchase process; whether the 2016 labels should be updated in terms of content and format; and whether the commission should provide new guidance about where broadband providers must display such labels.

    How Extreme Networks got the inside track on sports stadium installations

    This morning Extreme Networks announced its quarterly results for Q2 FY22. As has been the trend recently, Extreme put up another solid quarter, posting revenues of $280.9 million and non-GAAP EPS of $0.21, beating the expected numbers of $272.1 million and $0.17, respectively.

    The revenue number grew a healthy 16% year over year. Its Q3 revenue guide is $276M to $286M, the midpoint of which is in line with the Street’s expected $281.1M, while EPS is expected to be $0.16 to $0.21, also in line with the expected $0.18. Extreme has now exceeded its numbers for four consecutive quarters, showing an acceleration in the business despite a tough macro environment slowed by supply-chain shortages and uncertainty about people returning to work. One important financial metric to examine is SaaS ARR, now $88.3M, up 55% YoY and 11% QoQ. The shift to the cloud creates much greater predictability for investors. The strong numbers were driven by strong customer demand, highlighted by more than $90M in incremental backlog, bringing the total to almost $300M. As the chip and supply shortages ease, that $300M will convert into revenue.
    Extreme now in the SD-WAN space
    During the quarter, Extreme completed the integration of Ipanema, which was faster than expected. This brings SD-WAN (software-defined wide-area network) into Extreme’s broad portfolio of networking products. SD-WAN was the missing link in the company’s end-to-end enterprise networking portfolio, which includes campus switching, Wi-Fi, data center, and hybrid work products. SD-WAN also provides a path to SASE (secure access service edge), enabling Extreme to pivot to security. This quarter was also highlighted by some new sports partnerships, an area in which Extreme has been a leader for the better part of a decade. Stadium Wi-Fi is tough because keeping tens of thousands of fans connected requires a major feat of engineering, but this is something Extreme does very well. The company currently has Wi-Fi and Wi-Fi analytics relationships with the NFL, Major League Baseball, and NASCAR, and announced it is adding the National Hockey League (NHL). The agreement is for Extreme to be the official Wi-Fi analytics provider of the pro hockey league.
    NHL added to Extreme’s roster of pro sports leagues

    The NHL will use the insights from ExtremeAnalytics in a similar way to the other sports leagues. The product provides granular insights into who is using what application and when, and shows usage patterns that can be used to optimize app performance; improving the fan experience is critical to all sports leagues. At a basic level, this ensures that the Wi-Fi network is performing as expected. During the game, fans are texting, TikTok-ing, Instagramming, tweeting, and engaging in other social activities, and when the network is not working, it can be incredibly frustrating.

    Also, since the COVID-19 pandemic began two years ago, the fan’s mobile device has taken on an even more important role. Most venues only accept digital tickets; QR codes for vaccine checks are mandatory at many locations; and concession and memorabilia sales are mostly cashless. Even things such as 50/50 raffles no longer accept cash, creating a further reliance on Wi-Fi and the mobile phone. There’s another trend coming that will require better-performing Wi-Fi: in-stadium betting. While we aren’t quite there yet, most of the sports leagues are prepping for it. There is a significant amount of money to be made from daily leagues, fantasy sports, prop bets, and more. There is also a significant amount of risk if the network happens to go down in the middle of a transaction. Wi-Fi will need to evolve from a best-effort network to one that’s always on and always performing. Most sports leagues use the ExtremeAnalytics platform to do that.
    Super Bowl analytics powered by Extreme
    In addition to the NHL, Extreme announced that it would be providing Wi-Fi analytics for the Super Bowl for the ninth consecutive year. Although SoFi Stadium runs predominantly Cisco equipment, as per this ZDNet post, Extreme provides the Wi-Fi analytics league-wide, including for the championship game.
    Extreme Networks’ milestones. (Image: Extreme Networks)
    Extreme’s expertise in sports and entertainment started in 2012 when it won the stadium Wi-Fi contract for the New England Patriots. I recall going to an event at Gillette Stadium, where Jonathan Kraft, president of the Patriots, told a group of media and analysts that Extreme was the only Wi-Fi vendor willing to guarantee performance. Nine Super Bowls later, the company has built a highly successful sports and entertainment practice. While nine Super Bowls isn’t quite as impressive as QB Tom Brady’s 10, it’s a noteworthy accomplishment.
    English Premier League the next frontier for Wi-Fi
    Another announcement in this area is that Extreme Networks has been selected as the Wi-Fi 6 and Wi-Fi analytics provider for Manchester United, one of the marquee teams in the English Premier League. “Man U” is owned by the same ownership group as the Tampa Bay Buccaneers, a long-time Extreme customer, which certainly helped establish a relationship. The EPL has not been as aggressive in digitizing soccer as the North American sports leagues, likely due to the near-monopoly it has on sports in the UK. The world is changing, though, and high-performance Wi-Fi is no longer optional. Old Trafford, home of Man U, adds to the list of iconic stadiums Extreme has now modernized with its Wi-Fi products. The list also includes Berlin Olympic Stadium, Daytona Speedway, Wrigley Field, the Bell Centre, Lambeau Field, the L.A. Coliseum, and America’s most beloved ballpark, Fenway Park. Extreme also recently added Stanford Stadium, right in the backyard of its bigger Silicon Valley competitors. The digitization of sports has some interesting potential to shift competitive dynamics. Currently, big-market teams are the ones with big TV contracts. Some leagues, such as the NFL, do a nice job of revenue sharing, but others, like MLB, have an imbalance in opportunity because the Yankees, Red Sox, Dodgers, and others have far more money than small-market teams. Capitalizing on digital trends can create an entirely new revenue stream for all teams, allowing them to close the gap. As an example, the Edmonton Oilers have the most exciting young player in the NHL in Connor McDavid. The team and league should be venturing into digitizing McDavid to give him the same type of exposure he could have in New York or LA. Teams can use their digital prowess to attract high-profile free agents that may have once shunned a market such as Edmonton’s. The digitization of sports can democratize opportunity for all teams in all markets. The transformation the sports leagues are now experiencing is something all IT and business leaders should be watching. I saw a recent study that found 58% of customer interactions are now digital, and 55% of all products and services have been digitized. While sports and entertainment are ahead of the curve, this trend is coming to all businesses, largely out of necessity. Good-quality Wi-Fi is critical to modernizing customer and employee experience. Companies that ignore this area will soon see customers bolt for competitors that don’t.

    AT&T brings symmetrical multi-gig connectivity to home market

    The thirst for more and more internet speed continues to grow. In a world where the pandemic accelerated digital transformation, one could argue the internet connection into one’s home is critically important to the way we work, learn, and play. To date, however, consumers have been limited to gigabit speeds, which might have seemed fast a couple of years ago but today puts a cap on the things we can do.
    AT&T breaks the gig barrier for home internet

    On Monday, AT&T broke the gig barrier when it announced its fiber customers can now get multi-gigabit internet speeds, as the carrier doubles down on fiber in its broadband infrastructure. AT&T will offer symmetric 2.5Gbps and 5Gbps speed options beginning this week. AT&T is also rolling out simpler pricing for its fiber portfolio, without additional equipment fees, annual contracts, or data caps. AT&T Fiber and Business Fiber customers with a 2Gbps plan will pay $110 per month and $225 per month, respectively. This is ideally suited for small businesses or those who have many connected devices in the home. The 5Gbps option will cost AT&T Fiber customers $180 per month and Business Fiber customers $395 per month.
    Symmetric bandwidth can be a game-changer for video users and content creators
    The notable part of the announcement is the symmetric bandwidth, as that’s a rarity with broadband. Comcast Xfinity offers 2Gbps download speeds, but the upload is limited to 35Mbps, which is a limitation of cable. Verizon offers near-symmetric gigabit fiber but not multi-gigabit speeds. Symmetric bandwidth is important for video calls, gaming, and content creators, who upload massive files to the cloud. In these cases, customers of AT&T would see a marked performance improvement. Also, I’m a big fan of transparent pricing where the cost is fixed in perpetuity. Often, broadband providers offer a low introductory price and then jack the price up after a year. By now, most savvy buyers know that if one calls and complains, they can get the price reduced. Putting customers through this gauntlet annually is one reason why NPS scores at companies like Comcast are so low. A skit by SNL parodies a call with Spectrum, which seems like a typical call to your local cable provider. AT&T’s service is no better, but holding the price fixed is at least one less reason for a customer to contact the call center. Also, the price is inclusive of fees, equipment, and other factors that can drive a seemingly low price up. With telecom services, it’s rare that you get what you pay for, but in this case, that’s true.
    Fiber is a proven technology
    The fiber network from AT&T is reliable, secure, and tested. It’s used by the US government, the military, first responders, and leading companies with complex connectivity needs. More than 2.75 million US businesses currently rely on AT&T’s high-speed fiber connections.
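    To see why cable’s 35Mbps uplink matters in practice, consider the idealized transfer time for a large upload. This is a rough sketch that ignores protocol overhead and congestion; the 10GB file size is our own example:

```python
def upload_seconds(file_gigabytes: float, uplink_mbps: float) -> float:
    """Idealized upload time: file size converted to megabits, divided by link rate.
    Ignores TCP/protocol overhead, so real-world times will be somewhat longer."""
    megabits = file_gigabytes * 8 * 1000  # decimal GB -> megabits
    return megabits / uplink_mbps

# A 10GB raw video file: cable's 35Mbps uplink vs. a 5Gbps symmetric fiber uplink.
print(upload_seconds(10, 35) / 60)  # ~38 minutes at 35Mbps
print(upload_seconds(10, 5000))     # 16 seconds at 5Gbps
```

    The two-orders-of-magnitude gap is the difference symmetric multi-gig makes for content creators pushing large files upstream.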

    However, businesses aren’t the only ones with a need for speed. Research cited in AT&T’s press material shows the average consumer has 13 connected devices in their home, a figure that could rise to 32 devices or more in the near future. This includes traditional devices such as tablets and laptops, as well as smart TVs, streaming devices, gaming consoles, appliances, connected doorbells, and more. Such devices consume tons of data and demand more bandwidth. On top of that, more people are working from home due to the COVID-19 pandemic. Multi-gig speeds are primed for these demands and can provide the bandwidth homes and businesses require to run a multitude of connected devices. Fiber was designed specifically for high-speed internet, enabling high-capacity tasks like uploading large files during video calls, as well as gaming and entertainment. AT&T’s multi-gig fiber launch is part of the carrier’s strategy to provide customers with a seamless wireless experience from a single carrier by combining its 5G network and its fiber network. AT&T has also amped up its Wi-Fi technology. Last year, the carrier launched a gateway that’s Wi-Fi 6 and tri-band enabled to support multiple connected devices. AT&T envisions a future of fiber that’s hyperlocal, hyper-reliable, and hyper-fast. The service will be available in more than 70 metro areas across the country, including Los Angeles, Atlanta, and Dallas. That might sound like broad coverage, but it currently reaches only 5.2 million customer locations, a fraction of the country. AT&T will expand this to about 30 million by 2025, still a minority of the country. If you’re lucky enough to be in the AT&T footprint, the service is worth a look.

    Juniper rolls out the Trio 6 chipset for a wide range of network use cases

    Juniper Networks on Tuesday announced a set of new silicon chipsets. The Trio 6 chipset is optimized for a wide range of use cases on the edge, with the flexibility to adapt to future networking use cases, while the Express 5 chipset is designed for high throughput, delivering non-blocking throughput of 28.8Tbps in a single package. These new application-specific integrated circuits (ASICs), Juniper says, are optimized for the needs of specific points in the network. “As networks have evolved over the past two decades by supporting more diverse and demanding digital services, operators have increasingly sought out specialized silicon to tackle specific roles,” Juniper’s Brendan Gibbs wrote in a blog post. “Networks run better with ASICs optimized for different tasks.” The sixth generation of Juniper’s Trio silicon for MX Series routers maximizes logical scale and programmability for complex and dynamic edge service nodes. At the network edge, platforms need to support a growing number of diverse business and consumer use cases and features. The Trio 6, which is machine learning-enabled, also helps deliver security, with native support for IPsec and integrated MACsec at native line rate. In terms of power usage, the Trio 6 uses 7nm fabrication technology to deliver a 70% improvement in efficiency compared to previous-generation chipsets. The Trio 6 is available now. Along with the Trio 6, Juniper is rolling out new additions to the MX Series routing portfolio, all based on the new chipset, including the Juniper MX10K family, which offers the first 400G-capable LC9600 line card. The new Express 5 ASIC, meanwhile, is designed for PTX10K series platforms; Juniper says it delivers the industry’s highest non-blocking throughput. Also built with 7nm technology, it delivers 45% better power efficiency than previous chipsets.

    Express 5 silicon taped out in 2021 and will be available in shipping product at a future date.

    Meta's 'data2vec' is a step toward One Neural Network to Rule Them All

    The race is on to create one neural network that can process multiple kinds of data — a more general artificial intelligence that doesn’t discriminate among types of data but instead can crunch them all within the same basic structure.

    The genre of multi-modality, as these neural networks are called, is seeing a flurry of activity in which different kinds of data, such as images, text, and speech audio, are passed through the same algorithm to produce a score on different tests, such as image recognition, natural language understanding, or speech detection. And these ambidextrous networks are racking up scores on benchmark tests of AI. The latest achievement is what’s called “data2vec,” developed by researchers at the AI division of Meta (parent of Facebook, Instagram, and WhatsApp). The point, as Meta researchers Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli explain in a blog post, is to approach something more like the general learning ability that the human mind seems to encompass. “While people appear to learn in a similar way regardless of how they get information — whether they use sight or sound, for example — there are currently big differences in the way self-supervised learning algorithms learn from images, speech, text, and other modalities,” the blog post states. The main point is that “AI should be able to learn to do many different tasks, including those that are entirely unfamiliar.” Meta’s CEO, Mark Zuckerberg, offered a quote about the work and its ties to a future Metaverse: “People experience the world through a combination of sight, sound, and words, and systems like this could one day understand the world the way we do. This will all eventually get built into AR glasses with an AI assistant so, for example, it could help you cook dinner, noticing if you miss an ingredient, prompting you to turn down the heat, or more complex tasks.”

    The name data2vec is a play on the name of a program for language “embedding” developed at Google in 2013 called “word2vec.” That program predicted how words cluster together, so word2vec is representative of a neural network designed for a specific type of data, in that case text. In the case of data2vec, however, Baevski and colleagues are taking a standard version of what’s called a Transformer, developed by Ashish Vaswani and colleagues at Google in 2017, and extending it to be used for multiple data types. The Transformer neural network was originally developed for language tasks, but it has been widely adapted in the years since for many kinds of data. Baevski et al. show that the Transformer can be used to process multiple kinds of data without being altered, and that the trained neural network that results can perform multiple different tasks. In the formal paper, “data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language,” Baevski et al. train the Transformer for image data, speech audio waveforms, and text language representations. The very general Transformer becomes what is called a pre-training that can then be applied to specific neural networks in order to perform specific tasks. For example, the authors use data2vec as pre-training for what’s called “ViT,” the “vision Transformer,” a neural network specifically designed for vision tasks that was introduced last year by Alexey Dosovitskiy and colleagues at Google.
    Meta shows top scores for the venerable ImageNet image-recognition competition. (Image: Meta 2022)
    When used with ViT to solve the standard ImageNet test of image recognition, their results come in at the top of the pack, with accuracy of 84.1%. That’s better than the score of 83.2% received by a team at Microsoft, led by Hangbo Bao, that pre-trained ViT last year. And the same data2vec Transformer outputs results that are state-of-the-art for speech recognition and that are competitive, if not the best, for natural language learning: “Experimental results show data2vec to be effective in all three modalities, setting a new state of the art for ViT-B and ViT-L on ImageNet-1K, improving over the best prior work in speech processing on speech recognition and performing on par to RoBERTa on the GLUE natural language understanding benchmark.” The crux is that this is happening without any modification of the neural network to be about images, and the same for speech and text. Instead, every input type goes into the same network and completes the same very general task: the one that Transformer networks always use, known as “masked prediction.” The way that data2vec performs masked prediction, however, is an approach known as “self-supervised” learning. In a self-supervised setting, a neural network is trained by having to pass through multiple stages. First, the network constructs a representation of the joint probability of the data input, be it images, speech, or text. Then, a second version of the network has some of those input data items “masked out,” left unrevealed. It has to reconstruct the joint probability that the first version of the network had constructed, which forces it to create increasingly better representations of the data by essentially filling in the blanks.
    An overview of the data2vec approach. (Image: Meta 2022)
    The two networks, the one with the full pattern of the joint probability and the one with the incomplete version it is trying to complete, are called, sensibly enough, “Teacher” and “Student.” The Student network tries to develop its sense of the data, if you will, by reconstructing what the Teacher has already achieved. You can see the code for the models on GitHub. How does the neural network perform Teacher and Student for three very different types of data? The key is that the “target” of joint probability in all three data cases is not a specific output data type, as is the case in versions of the Transformer for a specific data type, such as Google’s BERT or OpenAI’s GPT-3.
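    The Teacher/Student loop described above can be sketched in a few lines. This is a toy illustration, not Meta’s implementation: a tiny tanh network stands in for the Transformer, the mask is fixed, and all sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, weights):
    """Toy two-layer encoder standing in for the Transformer; returns each layer's output."""
    h1 = np.tanh(x @ weights[0])
    h2 = np.tanh(h1 @ weights[1])
    return [h1, h2]

dim = 8
student = [rng.normal(size=(dim, dim)) for _ in range(2)]
teacher = [w.copy() for w in student]  # Teacher starts as a copy of the Student
tau = 0.999                            # exponential-moving-average decay

x = rng.normal(size=(16, dim))         # 16 "tokens" of some modality
mask = np.arange(16) % 2 == 0          # mask out every other token

# Teacher sees the full input; its internal representation is the target.
target = encode(x, teacher)[-1]

# Student sees the masked input and must predict the Teacher's representation.
prediction = encode(np.where(mask[:, None], 0.0, x), student)[-1]
loss = np.mean((prediction[mask] - target[mask]) ** 2)  # regress only on masked positions

# In training, the Student's weights are updated by gradient descent on `loss`,
# and the Teacher then tracks the Student by exponential moving average:
teacher = [tau * t + (1 - tau) * s for t, s in zip(teacher, student)]
print(loss)
```

    The important property is that nothing in this loop depends on whether `x` came from pixels, a waveform, or token embeddings: the target is a network-internal representation, not a modality-specific output.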

    Rather, data2vec grabs a bunch of neural network layers inside the network, somewhere in the middle, that represent the data before it is produced as a final output. As the researchers write, “One of the main differences of our method […] other than performing masked prediction, is the use of targets which are based on averaging multiple layers from the teacher network.” Specifically, “we regress multiple neural network layer representations instead of just the top layer,” so that “data2vec predicts the latent representations of the input data.” They add, “We generally use the output of the FFN [feed-forward network] prior to the last residual connection in each block as target,” where a “block” is the Transformer equivalent of a neural network layer. The point is that every data type that goes in becomes the same challenge for the Student network: reconstructing something inside the neural network that the Teacher had composed. This averaging is different from other recent approaches to building One Network To Crunch All Data. For example, last summer, Google’s DeepMind unit offered up what it calls “Perceiver,” its own multi-modal version of the Transformer. The training of the Perceiver neural network is the more standard process of producing an output that is the answer to a labeled, supervised task such as ImageNet. In the self-supervised approach, data2vec isn’t using those labels; it’s just trying to reconstruct the network’s internal representation of the data. Even more ambitious efforts lie in the wings. Jeff Dean, head of Google’s AI efforts, in October teased “Pathways,” calling it a “next generation AI architecture” for multi-modal data processing. Mind you, data2vec’s very general approach to a single neural net for multiple modalities still relies on a lot of information about the different data types. Image, speech, and text are all prepared by pre-processing of the data. In that way, the multi-modal aspect of the network still relies on clues about the data, what the team refers to as “small modality-specific input encoders.” We are not yet at a world where a neural net is trained with no sense whatsoever of the input data types. We are also not at a point where the neural network can construct one representation that combines all the different data types, so that the neural net is learning things in combination. That fact is made clear from an exchange between ZDNet and the researchers. ZDNet reached out to Baevski and team and asked, “Are the latent representations that serve as targets a combined encoding of all three modalities at any given time step, or are they usually just one of the modalities?” Baevski and team responded that it is the latter case, and their reply is interesting enough to quote at length: “The latent variables are not a combined encoding for the three modalities. We train separate models for each modality but the process through which the models learn is identical. This is the main innovation of our project since before there were large differences in how models are trained in different modalities. Neuroscientists also believe that humans learn in similar ways about sounds and the visual world. Our project shows that self-supervised learning can also work the same way for different modalities.” Given data2vec’s modality-specific limitations, a neural network that might truly be One Network To Rule Them All remains the technology of the future.
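    The layer-averaging target the researchers describe, regressing an average of several Teacher layers rather than just the top one, can be sketched as follows. This is a toy illustration: the real method averages the FFN outputs inside Transformer blocks, and the number of layers averaged is a hyperparameter, so the value 8 here is only for demonstration.

```python
import numpy as np

def layer_average_target(layer_outputs, top_k):
    """Average the top-k Teacher layer representations into one regression target,
    standing in for data2vec's averaging of per-block FFN outputs."""
    return np.stack(layer_outputs[-top_k:]).mean(axis=0)

# Fake hidden states from a 12-block Teacher: 4 tokens, 16 dimensions each.
rng = np.random.default_rng(1)
layers = [rng.normal(size=(4, 16)) for _ in range(12)]

target = layer_average_target(layers, top_k=8)
print(target.shape)  # same shape as any single layer's output
```

    Because the target is an average over many internal layers, the Student is pushed to match the Teacher’s whole latent view of the input rather than any single output head, which is what lets the identical training recipe apply to images, speech, and text.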