More stories

  • Comcast's JavaScript-based resource library expands accessibility features for Xfinity

    Comcast detailed new plans to increase accessibility across its Xfinity X1 and Flex products by employing a JavaScript framework called LightningJS.

    According to the cable provider, it will “integrate accessibility as a core component throughout the tech stack” as it continues to develop products like its Xfinity X1 home entertainment interface and Xfinity Flex streaming box. The transition is being powered by the integration of the aforementioned open-source JavaScript framework, LightningJS, which the company plans to use as the underlying framework for accessibility components across its entertainment products.

    Benefits of integrating the technology at the base level already include easily implemented typeface and font-size changes and support for high-contrast color schemes for viewers with visual impairments; reduced-motion modes for viewers with motion sensitivities; and expanded support for a focus state for users with limited mobility who access Comcast’s services via a screen reader or other assistive technology.

    Comcast said it is continuing to build out a shared component library of Lightning UI assets that it can deploy across its Xfinity products and via Sky and NBCUniversal. Additional functions currently in development include the ability to announce on-screen text, such as movie and category titles, and a text magnifier that can display selected fields in larger, high-contrast fonts. Comcast also plans to begin open sourcing its work to help partners integrate accessibility features directly into the apps and assets they deploy across its platforms.

    Tom Wlodkowski, Vice President of Accessibility at Comcast Cable, said, “Collaboration has always been central to our technology innovation and development, especially when it comes to inclusive product design. This is yet another example of how our teams are working together and with the larger development community to create better experiences for everyone.”


  • FCC moves forward with plans to require broadband 'nutrition labels'

    The US Federal Communications Commission on Thursday proposed new rules that would require internet service providers (ISPs) to prominently display easy-to-understand labels to help consumers comparison shop for broadband services. Under the proposal, ISPs would have to display the labels — modeled after nutrition labels found on food packaging — at the point of sale. The proposed labels show prices, speeds, data allowances, network management practices, and other key broadband service information.

    An example of a blank label for fixed broadband. (Image: FCC)
    “Access to accurate, simple-to-understand information about broadband internet access services helps consumers make informed choices and is central to a well-functioning marketplace that encourages competition, innovation, low prices, and high-quality service,” the FCC wrote in a release Thursday.

    The FCC first approved this style of label for ISPs to display on a voluntary basis in 2016. Now, ISPs will be required to display this kind of information under the recently passed Infrastructure Investment and Jobs Act. The bill also included more than $65 billion to build out broadband networks and make broadband more affordable. Under the new law, the FCC has a year to set up the new broadband labeling requirements.

    The next step is for the FCC to hear from the public. The agency is seeking comments on questions such as: how consumers evaluate broadband service plans; whether the 2016 labels assist consumers with the purchase process; whether the 2016 labels should be updated in terms of content and format; and whether the commission should provide new guidance about where broadband providers must display such labels.


  • How Extreme Networks got the inside track on sports stadium installations

    This morning Extreme Networks announced its quarterly results for Q2 FY22. As has been the trend recently, Extreme put up another solid quarter, posting revenues of $280.9 million and non-GAAP EPS of $0.21, beating the expected numbers of $272.1 million and $0.17, respectively.

    The revenue number grew a healthy 16% year over year. Its Q3 revenue guide is $276M to $286M, the midpoint of which is in line with the Street’s expected $281.1M, while EPS is expected to be $0.16 to $0.21, also in line with the expected $0.18. Extreme has now exceeded its numbers for four consecutive quarters, showing an acceleration in the business despite a tough macro environment slowed by supply-chain shortages and uncertainty about people returning to work. One of the important financial metrics to examine is SaaS ARR, now $88.3M, up 55% YoY and 11% QoQ. The shift to the cloud creates much greater predictability for investors. The strong numbers were driven by strong customer demand, highlighted by more than $90M in incremental backlog, bringing the total to almost $300M. As the chip and supply shortages ease, that $300M will convert into revenue.

    Extreme now in the SD-WAN space

    During the quarter, Extreme completed the integration of Ipanema, which was faster than expected. This brings SD-WAN (software-defined wide-area network) into Extreme’s broad portfolio of networking products. SD-WAN was the missing link in the company’s end-to-end enterprise networking portfolio, which includes campus switching, Wi-Fi, data center, and hybrid work products. SD-WAN also provides a path to SASE (secure access service edge), enabling Extreme to pivot into security.

    Also: How Juniper is using AI in SD-WAN to differentiate itself

    This quarter was also highlighted by some new sports partnerships, an area in which Extreme has been a leader for the better part of a decade. Stadium Wi-Fi is tough because keeping tens of thousands of fans connected requires a major feat of engineering, but this is something Extreme does very well. The company currently has Wi-Fi / Wi-Fi analytics relationships with the NFL, Major League Baseball, and NASCAR, and it announced it is adding the National Hockey League (NHL). The agreement makes Extreme the official Wi-Fi analytics provider of the pro hockey league.

    NHL added to Extreme’s roster of pro sports leagues

    The NHL will use the insights from ExtremeAnalytics in much the same way as the other sports leagues. The product provides granular insight into who is using which application and when, and it surfaces usage patterns that can be used to optimize app performance; improving the fan experience is critical to all sports leagues. At a basic level, this ensures that the Wi-Fi network is performing as expected. During the game, fans are texting, TikTok-ing, Instagramming, Tweeting, and engaging in other social activities, and when the network is not working, it can be incredibly frustrating.

    Also, since the COVID-19 pandemic began two years ago, the fan’s mobile device has taken on an even more important role. Most venues only accept digital tickets; QR codes for vaccine checks are mandatory at many locations; and concession and memorabilia sales are mostly cashless. Even things such as 50/50 raffles no longer accept cash, creating a further reliance on Wi-Fi and the mobile phone. There’s another trend coming that will require better-performing Wi-Fi: in-stadium betting. While we aren’t quite there yet, most of the sports leagues are prepping for it. There is a significant amount of money to be made from daily leagues, fantasy sports, prop bets, and more. There is also a significant amount of risk if the network happens to go down in the middle of a transaction. Wi-Fi will need to evolve from a best-effort network to one that’s always on and always performing. Most sports leagues use the ExtremeAnalytics platform to do exactly that.

    Super Bowl analytics powered by Extreme

    In addition to the NHL, Extreme announced that it will provide Wi-Fi analytics for the Super Bowl for the ninth consecutive year. Although SoFi Stadium is outfitted predominantly with Cisco equipment, as noted in this ZDNet post, Extreme provides Wi-Fi analytics league-wide, including for the championship game.

    Extreme Networks’ milestones. (Image: Extreme Networks)
    Extreme’s expertise in sports and entertainment started in 2012, when it won the stadium Wi-Fi contract for the New England Patriots. I recall going to an event at Gillette Stadium, where Jonathan Kraft, president of the Patriots, told a group of media and analysts that Extreme was the only Wi-Fi vendor willing to guarantee performance. Nine Super Bowls later, the company has built a highly successful sports and entertainment practice. While nine Super Bowls isn’t quite as impressive as QB Tom Brady’s 10, it’s a noteworthy accomplishment.

    English Premier League the next frontier for Wi-Fi

    Another announcement in this area is that Extreme Networks has been selected as the Wi-Fi 6 and Wi-Fi analytics provider for Manchester United, one of the marquee teams in the English Premier League. “Man U” is owned by the same ownership group as the Tampa Bay Buccaneers, a long-time Extreme customer, which certainly helped establish the relationship. The EPL has not been as aggressive about digitizing soccer as the North American sports leagues, likely due to the near-monopoly it has on sports in the UK. The world is changing, though, and high-performance Wi-Fi is no longer optional. Old Trafford, home of Man U, joins the list of iconic stadiums Extreme has modernized with its Wi-Fi products. That list also includes Berlin Olympic Stadium, Daytona Speedway, Wrigley Field, the Bell Centre, Lambeau Field, the L.A. Coliseum, and America’s most beloved ballpark, Fenway Park. Extreme also recently added Stanford Stadium, right in the backyard of its bigger Silicon Valley competitors.

    The digitization of sports has some interesting potential to shift competitive dynamics. Currently, big-market teams are the ones with big TV contracts. Some leagues, such as the NFL, do a nice job of revenue sharing, but others, like MLB, have an imbalance in opportunity because the Yankees, Red Sox, Dodgers, and others have far more money than small-market teams. Capitalizing on digital trends can create an entirely new revenue stream for all teams, allowing them to close the gap. As an example, the Edmonton Oilers have the most exciting young player in the NHL in Connor McDavid. The team and league should be venturing into digitizing McDavid to give him the same type of exposure he could have in New York or LA. Teams can use their digital prowess to attract high-profile free agents who may have once shunned a market such as Edmonton’s. The digitization of sports can democratize opportunity for all teams in all markets.

    The transformation the sports leagues are now experiencing is something all IT and business leaders should be watching. I saw a recent study that found that 58% of customer interactions are now digital and 55% of all products and services have been digitized. While sports and entertainment are ahead of the curve, this trend is coming to all businesses, largely out of necessity. Good-quality Wi-Fi is critical to modernizing the customer and employee experience. Companies that ignore this area will soon see customers bolt for competitors that don’t ignore it.


  • AT&T brings symmetrical multi-gig connectivity to home market

    The thirst for more and more internet speed continues to grow. In a world where the pandemic accelerated digital transformation, one could argue the internet connection into one’s home is critically important to the way we work, learn, and play. To date, however, consumers have been limited to gigabit speeds, which might have seemed fast a couple of years ago but today puts a cap on the things we can do.

    AT&T breaks the gig barrier for home internet

    On Monday, AT&T broke the gig barrier when it announced that its fiber customers can now get multi-gigabit internet speeds, as the carrier doubles down on fiber in its broadband infrastructure. AT&T will offer symmetric 2Gbps and 5Gbps speed options beginning this week. AT&T is also rolling out simpler pricing for its fiber portfolio, with no additional equipment fees, annual contracts, or data caps. AT&T Fiber and Business Fiber customers with a 2Gbps plan will pay $110 per month and $225 per month, respectively; this tier is ideally suited for small businesses or homes with many connected devices. The 5Gbps option will cost AT&T Fiber customers $180 per month and Business Fiber customers $395 per month.

    Symmetric bandwidth can be a game-changer for video users or content creators

    The notable part of the announcement is the symmetric bandwidth, which is a rarity with broadband. Comcast Xfinity offers 2Gbps download speeds, but the upload is limited to 35Mbps, a limitation of cable. Verizon offers near-symmetric gigabit fiber but not multi-gigabit speeds. Symmetric bandwidth is important for video calls, gaming, and content creators who upload massive files to the cloud; in these cases, AT&T customers would see a marked performance improvement (the rough calculation at the end of this story puts some numbers on the gap). Also, I’m a big fan of transparent pricing where the cost is fixed in perpetuity. Often, broadband providers offer a low introductory price and then jack the price up after a year. By now, most savvy buyers know that if they call and complain, they can get the price reduced, but putting customers through this gauntlet annually is one reason NPS scores at companies like Comcast are so low. This SNL skit parodies a call with Spectrum, and it seems like a typical call to your local cable provider; AT&T’s customer service is no better, but holding the price fixed is at least one less reason for a customer to contact the call center. Also, the price is inclusive of fees, equipment, and other factors that can drive a seemingly low price up. With telecom services, it’s rare that you get what you pay for, but in this case, that’s true.

    Fiber is a proven technology

    The fiber network from AT&T is reliable, secure, and tested. It’s used by the U.S. government, the military, first responders, and leading companies with complex connectivity needs. More than 2.75 million U.S. businesses currently rely on AT&T’s high-speed fiber connections.

    However, businesses aren’t the only ones with a need for speed. Research cited in AT&T’s press material shows the average consumer has 13 connected devices in their home, a figure that could rise to 32 devices or more in the near future. This includes traditional devices such as tablets and laptops, as well as smart TVs, streaming devices, gaming consoles, appliances, connected doorbells, and more. Such devices consume tons of data and demand more bandwidth. On top of that, more people are working from home due to the COVID-19 pandemic. Multi-gig speeds are primed for these demands and can provide the bandwidth homes and businesses require to run a multitude of connected devices. Fiber was designed specifically for high-speed internet, enabling high-capacity tasks like uploading large files during video calls, as well as gaming and entertainment.

    AT&T’s multi-gig fiber launch is part of the carrier’s strategy to provide customers with a seamless wireless experience from a single carrier by combining its 5G network and its fiber network. AT&T has also amped up its Wi-Fi technology: last year, the carrier launched a gateway that is Wi-Fi 6 and tri-band enabled to support multiple connected devices. AT&T envisions a future of fiber that’s hyperlocal, hyper-reliable, and hyper-fast. The service will be available in more than 70 metro areas across the country, including Los Angeles, Atlanta, and Dallas. That may sound like broad coverage, but it currently reaches only 5.2 million customer locations, a fraction of the country. AT&T plans to expand that to about 30 million locations by 2025, which is still a minority of the country. If you’re lucky enough to be in the AT&T footprint, the service is worth a look.
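    To put the symmetric-upload advantage in concrete terms, here is a rough, back-of-the-envelope Python sketch. The 50GB file size is a hypothetical example, and the calculation ignores protocol overhead and real-world throughput losses; it simply divides file size by line rate.

        # Back-of-the-envelope comparison of upload times (illustrative only).
        FILE_GB = 50                              # e.g., raw 4K footage headed to the cloud
        FILE_BITS = FILE_GB * 8 * 10**9

        for name, mbps in [("cable upload (35 Mbps)", 35),
                           ("symmetric 2 Gbps fiber", 2_000),
                           ("symmetric 5 Gbps fiber", 5_000)]:
            seconds = FILE_BITS / (mbps * 10**6)
            print(f"{name:26s} ~ {seconds / 60:7.1f} minutes")

        # cable upload (35 Mbps)     ~   190.5 minutes
        # symmetric 2 Gbps fiber     ~     3.3 minutes
        # symmetric 5 Gbps fiber     ~     1.3 minutes

    Even allowing for overhead, the gap between an upload measured in hours and one measured in minutes is what makes symmetric service so attractive to content creators.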


  • Juniper rolls out the Trio 6 chipset for a wide range of network use cases

    Juniper Networks on Tuesday announced a pair of new silicon chipsets. The Trio 6 chipset is optimized for a wide range of use cases at the edge, with the flexibility to adapt to future networking use cases, while the Express 5 chipset is designed for high throughput, delivering non-blocking throughput of 28.8Tbps in a single package. These new application-specific integrated circuits (ASICs), Juniper says, are optimized for the needs of specific points in the network.

    “As networks have evolved over the past two decades by supporting more diverse and demanding digital services, operators have increasingly sought out specialized silicon to tackle specific roles,” Juniper’s Brendan Gibbs wrote in a blog post. “Networks run better with ASICs optimized for different tasks.”

    The sixth generation of Juniper’s Trio silicon for MX Series routers maximizes logical scale and programmability for complex and dynamic edge service nodes. At the network edge, platforms need to be able to support a growing number of diverse business and consumer use cases and features. The Trio 6, which is machine learning-enabled, also helps deliver security, with native support for IPsec and integrated MACsec at line rate. In terms of power usage, the Trio 6 uses 7-nm fabrication technology to deliver a 70% improvement in efficiency compared to previous-generation chipsets. The Trio 6 is available now. Along with the chipset, Juniper is rolling out new additions to the MX Series routing portfolio, all based on the new silicon, including the Juniper MX10K family, which offers the first 400G-capable LC9600 line card.

    The new Express 5 ASIC, meanwhile, is designed for PTX10K series platforms. Juniper says it delivers the industry’s highest non-blocking throughput. Also built with 7-nm technology, it delivers 45% better power efficiency than previous chipsets.

    Express 5 silicon taped out in 2021 and will be available in shipping product at a future date.


  • Meta's 'data2vec' is a step toward One Neural Network to Rule Them All

    The race is on to create one neural network that can process multiple kinds of data — a more general artificial intelligence that doesn’t discriminate among types of data but instead can crunch them all within the same basic structure.


    The genre of multi-modality, as these neural networks are called, is seeing a flurry of activity in which different data, such as images, text, and speech audio, are passed through the same algorithm to produce a score on different tests, such as image recognition, natural language understanding, or speech detection. And these ambidextrous networks are racking up scores on benchmark tests of AI. The latest achievement is what’s called “data2vec,” developed by researchers at the AI division of Meta (parent of Facebook, Instagram, and WhatsApp).

    The point, as Meta researchers Alexei Baevski, Wei-Ning Hsu, Qiantong Xu, Arun Babu, Jiatao Gu, and Michael Auli explain in a blog post, is to approach something more like the general learning ability that the human mind seems to encompass. “While people appear to learn in a similar way regardless of how they get information — whether they use sight or sound, for example — there are currently big differences in the way self-supervised learning algorithms learn from images, speech, text, and other modalities,” the blog post states. The main point is that “AI should be able to learn to do many different tasks, including those that are entirely unfamiliar.”

    Meta CEO Mark Zuckerberg offered a quote about the work and its ties to a future Metaverse: “People experience the world through a combination of sight, sound, and words, and systems like this could one day understand the world the way we do. This will all eventually get built into AR glasses with an AI assistant so, for example, it could help you cook dinner, noticing if you miss an ingredient, prompting you to turn down the heat, or more complex tasks.”

    The name data2vec is a play on the name of a program for language “embedding” developed at Google in 2013 called “word2vec.” That program predicted how words cluster together, so word2vec is representative of a neural network designed for one specific type of data, in that case text.

    Also: Open the pod bay doors, please, HAL: Meta’s AI simulates lip-reading

    In the case of data2vec, however, Baevski and colleagues are taking a standard version of what’s called a Transformer, developed by Ashish Vaswani and colleagues at Google in 2017, and extending it to be used for multiple data types. The Transformer neural network was originally developed for language tasks, but it has been widely adapted in the years since for many kinds of data. Baevski et al. show that the Transformer can be used to process multiple kinds of data without being altered, and the resulting trained neural network can perform on multiple different tasks. In the formal paper, “data2vec: A General Framework for Self-supervised Learning in Speech, Vision and Language,” Baevski et al. train the Transformer for image data, speech audio waveforms, and text language representations. The very general Transformer then serves as a pre-trained model that can be applied to specific neural networks in order to perform specific tasks. For example, the authors use data2vec as pre-training to equip what’s called “ViT,” the “vision Transformer,” a neural network specifically designed for vision tasks that was introduced last year by Alexey Dosovitskiy and colleagues at Google.

    Meta shows top scores for the venerable ImageNet image-recognition competition. (Image: Meta 2022)
    When used with ViT on the standard ImageNet test of image recognition, their results come in at the top of the pack, with an accuracy of 84.1%. That’s better than the score of 83.2% received by a Microsoft team led by Hangbo Bao that pre-trained ViT last year. And the same data2vec Transformer produces results that are state-of-the-art for speech recognition and that are competitive, if not the best, for natural language learning: “Experimental results show data2vec to be effective in all three modalities, setting a new state of the art for ViT-B and ViT-L on ImageNet-1K, improving over the best prior work in speech processing on speech recognition and performing on par to RoBERTa on the GLUE natural language understanding benchmark.”

    The crux is that this is happening without any modification of the neural network to be about images, and the same for speech and text. Instead, every input type goes into the same network and completes the same very general task, the task Transformer networks always use, known as “masked prediction.”

    Also: Google’s Supermodel: DeepMind Perceiver is a step on the road to an AI machine that could process anything

    The way data2vec performs masked prediction, however, uses an approach known as “self-supervised” learning. In a self-supervised setting, a neural network is trained by having to pass through multiple stages. First, the network constructs a representation of the joint probability of the data input, be it images or speech or text. Then, a second version of the network has some of those input data items “masked out,” left unrevealed. It has to reconstruct the joint probability that the first version of the network had constructed, which forces it to create increasingly better representations of the data by essentially filling in the blanks.

    An overview of the data2vec approach. (Image: Meta 2022)
    The two networks, the one with the full pattern of the joint probability and the one with the incomplete version it is trying to complete, are called, sensibly enough, “Teacher” and “Student.” The Student network tries to develop its sense of the data, if you will, by reconstructing what the Teacher has already achieved. You can see the code for the models on GitHub. How does one network play both Teacher and Student for three very different types of data? The key is that the target, the joint probability, is in all three cases not a specific output data type, as it is in versions of the Transformer built for a single data type, such as Google’s BERT or OpenAI’s GPT-3.


    Rather, data2vec grabs a set of neural network layers from inside the network, somewhere in the middle, that represent the data before it is produced as a final output. As the researchers write, “One of the main differences of our method […] other than performing masked prediction, is the use of targets which are based on averaging multiple layers from the teacher network.” Specifically, “we regress multiple neural network layer representations instead of just the top layer,” so that “data2vec predicts the latent representations of the input data.” They add, “We generally use the output of the FFN [feed-forward network] prior to the last residual connection in each block as target,” where a “block” is the Transformer equivalent of a neural network layer. The point is that every data type that goes in becomes the same challenge for the Student network: reconstructing something inside the neural network that the Teacher had composed.

    This averaging is different from other recent approaches to building One Network To Crunch All Data. For example, last summer, Google’s DeepMind unit offered up what it calls “Perceiver,” its own multi-modal version of the Transformer. The Perceiver neural network is trained with the more standard process of producing an output that is the answer to a labeled, supervised task such as ImageNet. In the self-supervised approach, data2vec isn’t using those labels; it’s just trying to reconstruct the network’s internal representation of the data. Even more ambitious efforts lie in the wings: Jeff Dean, head of Google’s AI efforts, in October teased “Pathways,” calling it a “next generation AI architecture” for multi-modal data processing.

    Mind you, data2vec’s very general approach to a single neural net for multiple modalities still retains a lot of information about the different data types. Image, speech, and text are all prepared by pre-processing of the data. In that way, the multi-modal aspect of the network still relies on clues about the data, what the team refers to as “small modality-specific input encoders.”

    Also: Google unveils ‘Pathways’, a next-gen AI that can be trained to multitask

    We are not yet in a world where a neural net is trained with no sense whatsoever of the input data types. We are also not at a point where the neural network can construct one representation that combines all the different data types, so that the neural net is learning things in combination. That fact is made clear by an exchange between ZDNet and the researchers. ZDNet reached out to Baevski and team and asked, “Are the latent representations that serve as targets a combined encoding of all three modalities at any given time step, or are they usually just one of the modalities?” Baevski and team responded that it is the latter, and their reply is interesting enough to quote at length:

    “The latent variables are not a combined encoding for the three modalities. We train separate models for each modality but the process through which the models learn is identical. This is the main innovation of our project since before there were large differences in how models are trained in different modalities. Neuroscientists also believe that humans learn in similar ways about sounds and the visual world. Our project shows that self-supervised learning can also work the same way for different modalities.”
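    To make that recipe concrete, here is a minimal, illustrative sketch of data2vec-style training in PyTorch. It is a simplification under stated assumptions, not Meta’s released implementation (which is on GitHub, as noted above): the model sizes, masking rate, EMA decay, and the toy Encoder module are hypothetical placeholders. The structure, though, mirrors the description above: a Teacher that sees the full input, a Student that sees a masked copy, and regression targets formed by averaging the Teacher’s top few block outputs.

        # Illustrative data2vec-style teacher/student loop (hypothetical sizes).
        import copy
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        DIM, LAYERS, TOP_K, MASK_P, EMA_DECAY = 256, 8, 4, 0.15, 0.999

        class Encoder(nn.Module):
            """A plain Transformer encoder that also returns every block's output."""
            def __init__(self):
                super().__init__()
                self.blocks = nn.ModuleList(
                    [nn.TransformerEncoderLayer(DIM, nhead=8, batch_first=True)
                     for _ in range(LAYERS)])

            def forward(self, x):
                states = []
                for block in self.blocks:
                    x = block(x)
                    states.append(x)
                return x, states

        student = Encoder()
        teacher = copy.deepcopy(student)              # Teacher starts as a copy of the Student
        for p in teacher.parameters():
            p.requires_grad_(False)                   # Teacher is never updated by backprop
        mask_token = nn.Parameter(torch.zeros(DIM))   # learned embedding for masked positions
        opt = torch.optim.Adam(list(student.parameters()) + [mask_token], lr=1e-4)

        def train_step(tokens):
            """tokens: (batch, seq, DIM) from some modality-specific input encoder."""
            # 1. Teacher sees the unmasked input; the target is the average of its
            #    top-K block outputs (normalized), i.e. a latent representation.
            with torch.no_grad():
                _, states = teacher(tokens)
                target = torch.stack(
                    [F.layer_norm(s, (DIM,)) for s in states[-TOP_K:]]).mean(dim=0)
            # 2. Student sees a masked copy and regresses the Teacher's latents
            #    at the masked positions (masked prediction in latent space).
            mask = torch.rand(tokens.shape[:2]) < MASK_P
            masked = torch.where(mask.unsqueeze(-1), mask_token, tokens)
            pred, _ = student(masked)
            loss = F.smooth_l1_loss(pred[mask], target[mask])
            opt.zero_grad(); loss.backward(); opt.step()
            # 3. Teacher weights slowly track the Student via an exponential moving average.
            with torch.no_grad():
                for pt, ps in zip(teacher.parameters(), student.parameters()):
                    pt.mul_(EMA_DECAY).add_(ps, alpha=1 - EMA_DECAY)
            return loss.item()

        print(train_step(torch.randn(2, 32, DIM)))    # one step on random "tokens"

    The same loop works whether the incoming tokens come from an image patch embedder, a speech feature extractor, or a text embedding layer, which is the paper’s point: the learning objective never changes, only the modality-specific pre-processing does.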
    Given data2vec’s modality-specific limitations, a neural network that might truly be One Network to Rule Them All remains the technology of the future.

  • Private 5G is coming soon to a business near you

    Increasing connectivity and communication demands are paving the way for private 5G, a cloud-era wireless technology designed for the enterprise and highly adaptable to change. Many organizations are already implementing, or thinking about implementing, private 5G because the network and the data can be better controlled by the enterprise. It can also be restricted to a certain location, providing coverage both indoors and outdoors in places such as manufacturing plants and ports.

    On top of that, private 5G allows organizations to control and customize their security settings, policies, and other aspects of wireless communications.

    A new study recently published by Economist Impact in partnership with NTT surveyed organizations around the world and found that more than half of them plan to deploy a private 5G network within the next six to 24 months. The survey included 216 C-level and senior IT decision-makers from organizations with revenue of $250 million to more than $1 billion. The respondents came from various industries in Germany, Japan, the UK, and the U.S., including automotive and manufacturing, energy, health care, pharma, retail, and logistics.

    According to the study, organizations are broadly adopting next-gen connectivity and communications technologies, including private 5G. 94% of the respondents are implementing upgrades that include Wi-Fi 6, 4G, or 5G. Nearly a quarter (24%) are piloting private 5G networks, while 6% have at least one operational private 5G network. Among those with an operational private 5G network, the largest group is from the U.S. (9.3%), followed by Germany (7%), although Germany leads (33%) when it comes to piloting private 5G networks.

    Energy and transportation lead the way for installing private 5G

    Private 5G interest is especially high in industrial settings, to support smart manufacturing use cases such as robots and self-driving machines. Energy (39%) and transport (33%) are the two industries most likely to be piloting 5G networks, and transport companies (41%) are the most likely to have already built a private 5G network. Within automotive and manufacturing, 25% of companies reported having a private 5G pilot and 5% have an operational network. In health care and pharma, 18% of companies are piloting a private 5G network and 5% have an operational network. These industries make sense, as network reliability is critical to their business operations. Even the smallest hiccup in the wireless network can cost millions of dollars, which is why the verticals listed above have historically stayed away from Wi-Fi, which can be flaky at times. I’m sure everyone reading this has experienced a Wi-Fi network that appears to be working fine, suddenly stops working, and then just as quickly starts again. This is fine in a carpeted office but not on a manufacturing floor.

    Security is top driver

    Not surprisingly, security is a key driver for private 5G adoption. 69% of the respondents said network security was not being addressed by their current connectivity and communications platforms, making it a top concern for organizations across countries and industries. For 75% of health care and pharma organizations, security is the biggest pain point, given the sensitive nature of the data. Other key pain points cited by the respondents were control of data (48%), coverage and speed (43%), and the response time of their current service provider (40%).

    Security is the reason why most organizations are exploring solutions beyond Wi-Fi. 87% of the respondents believe Wi-Fi networks don’t provide a sufficient level of security for the enterprise. In fact, most (86%) of the respondents believe private 5G is a substitute for Wi-Fi. That’s because private 5G networks offer compliance-driven organizations several advantages for customizing security and data protection. The other benefits of implementing private 5G cited by the respondents are improved data privacy (83%), faster connection speeds with lower latency (81%), and increased network reliability for connectivity and communications (80%).

    Although private 5G adoption seems to be speeding up, it’s still in the early stages for most organizations. For organizations that have yet to pilot or implement such networks, private 5G is in the short- to medium-term plans. Globally, only 3% of companies plan to deploy private 5G within six months, while 15% plan to implement within 12 months and 19% within 18 months. Building out private 5G infrastructure also comes with technical challenges, which organizations shared in the study. For 44% of the respondents, a major barrier is integrating 5G with legacy systems and networks. Complexity around the infrastructure needed to deploy 5G (37%) and employees lacking the technical skills to manage 5G networks (30%) are the other barriers to private 5G adoption.

    Managed services as a viable option for deployment

    For this reason, many organizations prefer to outsource their private 5G deployments. 38% of organizations choose to outsource to a managed service provider with service-level agreements, while one-third would rather take a hybrid or shared private network approach, where they lease the network from a mobile operator. When it comes to engaging with private 5G suppliers, organizations are most likely to request system integration services (63%), post-deployment network management (62%), and network design and planning (54%).

    The study’s findings show that adopting private 5G networks is strongly supported by senior leadership across the globe. Looking ahead, 94% of the respondents agree that 5G will become an important part of their operations, and more than 90% envision private 5G becoming a standard in their industry within the next five years, a view shared across all sectors. It will also be a catalyst for enabling digital transformation in the enterprise.

    It’s important to understand the positioning of 5G versus Wi-Fi. Some industry watchers have predicted that 5G would eat away at Wi-Fi, but that’s certainly not the case. I believe the two to be highly complementary, with Wi-Fi continuing to be the wireless standard of choice for general use cases and 5G used where guaranteed, reliable connectivity is needed. A proof point comes from a Deloitte study that found 98% of businesses will use both technologies within three years.

  • Comcast reveals prototype 10G modem for home broadband use

    Comcast revealed that it has successfully tested a new prototype DOCSIS 4.0 modem that is designed to bring 10G technology into customers’ homes for the first time.

    According to the broadband provider, the new unit achieved symmetrical download and upload speeds in excess of 4 gigabits per second (Gbps) thanks to its “Full Duplex DOCSIS 4.0 system-on-chip (SoC).” While these figures were collected in a laboratory environment, Comcast claims the new modem will be capable of even faster data transmission rates in the future, as the company continues to chase the 10Gbps potential transfer rates promised by 10G networks.

    The cable company’s product reveal is just the latest stop on the long road it has been on to make 10G technology viable for consumer broadband. Previous milestones have included testing 10G connections over a virtualized cable modem termination system (vCMTS) using the same DOCSIS 4.0 technology found in the new modem, and an earlier test of a 10G SoC, which used Network Function Virtualization (NFV) technology and Comcast’s live residential network to reach a more modest 1.25Gbps. The use of its existing nationwide network is a major goal for Comcast, which touted the fact that DOCSIS 4.0 can allow 10G transmissions over its existing cable infrastructure, with only the modem at endpoints in user homes likely needing to be replaced in most markets. Comcast clearly sees 10G technology as the future of its home broadband offerings, noting that even 4Gbps can be exceeded “as developers refine technology at every level of the 10G architecture.”

    For comparison, the company’s residential broadband plans currently top out in most areas with its Gigabit tier, which offers 1Gbps to 1.2Gbps download speeds, with some select regions gaining access to its Gigabit Pro service, which rises to 2Gbps. However, these speedy plans currently support much slower upload rates of just 35Mbps. Comcast was previously called out by Ars Technica for hiding this fact, as the publication noted how difficult it is to find an actual upload rate across the company’s various sign-up pages. While download rates tend to matter far more to the average consumer than upload rates, Comcast’s relatively slow uploads are an advantage that fiber broadband companies have held over it: many fiber-based plans from companies like Verizon and Google already offer symmetrical rates that reach or come close to 1Gbps both up and down. In addition to the faster download speeds, the symmetrical transfer rates promised by this new modem may be just as important for winning over customers that Comcast has never previously been able to capture with its existing, slower uploads.

    The company did not provide any timeframe for this technology to reach the general public.
