More stories

  •

    Status Report: Has 5G made telecommunications sustainable again?

    Nokia AirScale 5G base station. (Image: Nokia)

    The fact that we’re still talking about 5G Wireless in articles whose headlines end with a question mark is the clearest indicator of all that we haven’t yet emerged from the initial skepticism phase — the phase we should be in when a concept is first introduced. Several years into this, we should be seeing measurable results, rather than what my friend and colleague Steven J. Vaughan-Nichols calls “a bad joke.”
    Our objective with Status Report is to take the ten most important ingredients of any technology’s capability to influence its users’ businesses and livelihoods, examine them in isolation, and score them in more or less the same way. In so doing, we may be able to extrapolate pertinent, relevant facts about why a technology is influential, why it’s disruptive, or perhaps why it’s at a standstill. We begin, however, where most analytical reports end: with the conclusions.
    Executive summary

    Point 1: 5G Wireless is not yet a sustainable business
    The whole point of 5G is to give stakeholders in wireless communications a path to sustainability. That can still happen, but by the original projections, sustainability should already have been achieved by now.
    A variety of obstacles are to blame for this, one being the pandemic. No, 5G is not the cause of the virus. No, populist sentiment against base station and tower construction is not a serious roadblock here.
    From the beginning, 5G’s rollout plan called for a transition period during which 4G LTE and 5G would co-exist. An August 2019 study by the GSMA industry association [PDF] — published before the pandemic — warned that the cost of these systems might not be sufficiently covered by revenue from services with higher bandwidth and greater coverage, including in urban areas.
    No doubt the pandemic supercharged that increase in demand for higher service levels. Indeed, GSMA said, it was already looking as though 5G bandwidth-optimizing technologies like M-MIMO would not be enough “to meet the coverage, performance, and capacity requirements in urban 5G-era networks… on its own.”
    Long-term sustainability was supposed to happen when telcos moved their RAN control systems off base stations and into the cloud. The transition to Cloud RAN was happening. However, one of the leaders in paving that transition path is a company — Huawei — with which the European Union, the United States, and now the United Kingdom don’t want to conduct business. Now telcos with Huawei equipment in their base stations are conducting very different transitions: replacing functional (if suspicious) Huawei equipment with less suspicious (if not as functional) Nokia or Ericsson equipment. This postpones their Cloud RAN transitions while they detour onto different courses.
    Point 2: In the absence of performance results, victory may be achieved through symbolism

    While the rollout may be slower than first anticipated, carriers still depend upon the symbolism of 5G supremacy to maintain market relevance. T-Mobile is the best case in point. Beginning with its acquisition of Sprint, and leveraging its additional power to win spectrum auctions, T-Mobile’s strategy of extending 5G symbolism to more geographical areas first, even if performance does not improve in the early going, is paying off. It’s forcing competitors AT&T and Verizon to raise their bids, especially in recent US auctions for spectrum in the highly prized C-band, to combat an almost solid hot-pink coverage map.
    The downside of playing out the 5G battle as territorial rather than technological, as a McKinsey report suggested [PDF], is that carriers may find themselves spending more capital to gain network presence in suburban and rural areas. In an earlier era, when territory wasn’t the prize, carriers would agree that one network covering a rural zone was plenty, and perhaps draw straws to determine which one that would be.
    Point 3: 5G’s seed capital won’t be coming from faster phones

    The state of the personal communications market as we enter 2021 bears undeniable similarity to that of the PC market (personal computer, if you’ve forgotten) in the 1980s. When the era of graphical computing began in earnest, the major players of that time (e.g., Microsoft, Apple, IBM, Commodore) tried to leverage the clout they had built up among consumers to help them make the transition away from 8-bit command lines and into graphical environments. Some of those key players tried to leverage more than just their market positions; they sought to apply technological advantages as well — in one very notable instance, even if it meant contriving that advantage artificially.
    Consumers are always smarter than marketing professionals presume they are. Two years ago, one carrier in particular (which shall remain nameless, in deference to folks who complain I tend to jump on AT&T’s case) pulled the proverbial wool in a direction that was supposed to cover consumers’ eyes. The “5G+” campaign divebombed, and as a result, there’s no way any carrier can cosmetically alter the appearance of existing smartphones to give their users the feeling of standing on the threshold of a new and forthcoming sea change.
    The capital for the first stage of the 5G rollout was supposed to come from enthusiastic consumers willing to pre-invest in a technology that might, maybe soon, exhibit some perceptible performance improvement. With that avenue having been blown up, carriers now look to enterprises for 5G’s seed capital — specifically, from a data center distribution technology called Multi-access Edge Computing (MEC) that was grafted onto the 5G portfolio, though it isn’t really bound to 5G-specific components or infrastructure.
    Sphere of influence
    Here’s where we identify the factors that lead us to draw the conclusions above. (See “How we evaluate and rate technologies, and why.”)
    In short, each category of influence is given its own compass direction, so that every “positive” points a different way. A positive score pulls our metaphorical pendulum forward on the chart; a negative score pushes it backward. At the end, we find where the pendulum bob ends up, and measure its distance from dead-center. That final score tells us, amid all the forces imposed upon it and the different objectives it must achieve, how influential and relevant that technology is for the given time, and perhaps in what way.
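    For the curious, here is a toy sketch of that pendulum arithmetic in Python. It is strictly our own illustration: it assumes evenly spaced compass directions and a simple normalization, so don’t expect it to reproduce the exact geometry behind the final score reported below.

        import math

        # Toy model of the "sphere of influence" pendulum described above.
        # Each category pulls the pendulum bob along its own compass direction;
        # the final score is the bob's distance from dead-center.
        def influence_score(net_scores):
            """net_scores: one net score per category, in chart order."""
            n = len(net_scores)
            x = y = 0.0
            for k, score in enumerate(net_scores):
                angle = 2 * math.pi * k / n  # assumption: evenly spaced directions
                x += score * math.cos(angle)
                y += score * math.sin(angle)
            return math.hypot(x, y) / n      # assumption: normalize by category count

        # Net scores from the ten categories rated below, in chart order.
        print(round(influence_score([-2, 3, -4, 5, -2, 0, 2, 2, -1, 2]), 2))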
    Stakeholder empowerment

    While 5G is delivering quality-of-service (QoS) benefits, at best, on schedule [+5], service demand levels have already well exceeded projections from 2015 — which may now seem like ancient history. Thus the period of initial capital investments has been extended, perhaps indefinitely. [-7, net -2]
    Competitive advantage

    Regardless of how well consumers perceive 5G as enriching their lives, or how well enterprises view it as a business enabler, every brand in the telecom market space is judged by its relative stance on 5G. [+7] For now, the trick to holding onto that stance is to keep those lofty visions of our glorious 5G future on life support. [-4, net: +3]
    Business sustainability

    Geopolitical squabbles have forced carriers to take different paths to sustainability. [+3] What’s more, these detours come at a time when the architecture of data centers is changing. Edge computing (ironically, one of the technologies in the broader 5G portfolio) is enabling distributed computing in remote areas — for instance, the very rural areas where carriers need to plant their stakes to regain competitive advantage. As the resurgence of Arm processors dramatically reduces power consumption for smaller data center form factors, suddenly it may be cheaper to distribute communications power to base stations after all. [-7, net -4]
    Evolutionary incentive

    The very fact that low-power micro data centers (µDC) may be more sustainable and affordable in the long term than larger cloud facilities mandates reassessing the placement of 5G’s goal posts. To put it another way, some of the design objectives that were being considered for “6G” in late 2019 may need to be accelerated for 5G now. For the remainder of the 5G transition to be successful, it must continue evolving from what it is now. [+8] Only smaller countries can afford to wait out this re-evaluation before rejoining the transition. [-2, net: +5]
    Market enablement

    Until the new US government’s positions toward China, and toward Huawei, are made clearer, every market in the orbit of 5G Wireless is stuck in a holding pattern. [+3] It isn’t really possible yet to gauge how a 5G ecosystem can re-emerge if one major player is excluded for geopolitical reasons, leaving a managed duopoly (which is possibly contrary to European Union law), or even a de facto monopoly if Ericsson’s fortunes don’t improve, leaving Nokia in the catbird seat. [-5]
    Customer value

    The consumer is not completely turned off by the promise of 5G, although continued declarations from Steven Vaughan-Nichols can’t possibly help that cause. [+4] A global survey of over 16,000 respondents from consulting firm Ansys showed that over 70 percent of those in the US, UK, and France either believe 5G has been overhyped or are uncertain about the legitimacy of carriers’ claims. They haven’t abandoned 5G entirely; they’re just not on board yet. [-4, net: 0]
    Economic contribution

    The rollout of 5G Wireless does remain a positive contributor to nations’ gross domestic product tallies, and an employer of engineers and other laborers. There are urban areas where 5G service is readily available, even though the jury is still out on performance improvements. [+7] In some areas of the world that were early adopters of the technology, re-investments have had to be made to bring 5G performance in line with objectives. [-5] China may have been one example, although it successfully sold that project to its people as a post-pandemic national unification project. [Net: +2]
    Societal integration

    We’re not at the point in our society where we’re ready to say 5G has made a beneficial contribution to our lives and livelihoods. But there is a foundational element that may yet lead us in that direction: specifically, the hard-wiring of dispersed suburban and rural areas with new, high-speed fiber optic lines. [+4] This could be the breakthrough needed to obliterate North America’s broadband barrier. We’re also not yet at the point, as with the first generation of nuclear power, where we’re spending more effort and resources cleaning up after the project’s failures. [-2, net +2]
    Cultural advancement

    It’s fair to say the cultural benefit of 5G Wireless to the people of Earth is not yet measurable. [+1] By contrast, the vast improvement of 3G’s service levels over the two 2G technologies made feasible an entirely new industry for reaching individuals through music, literature, art, and functional endeavors. That 5G isn’t the boon to our world that 3G was is becoming a noticeable and recordable fact among the general public. [-2, net: -1]
    Ecosystemic enablement

    Here is where 5G has brought forth an unexpected benefit: The Open Radio Access Network (O-RAN) project centers on software that enables access to the wireless network by subscribers’ devices, using methodologies that are open source and unpatented. While the industry really wants only one RAN anyway, O-RAN would ensure that no single player in the 5G market mandates how devices attain network access. That’s especially important in an industry where mergers and acquisitions have left a mere handful of players in the base station market space, assuming your hand happens to be that of a sloth. [+8]
    Sadly, geopolitics has tainted this project’s objectives, such that its contributors may be described as the Everyone Except Huawei club. Prior to its artificial extraction from some markets, Huawei accounted for more than half of radio equipment sales in those markets. [-6, net +2]

    Final score: +0.56
    With the sole exception of the Evolutionary incentive category, every component of our 5G Wireless sphere of influence is highly contentious. 5G eked out a positive score, after starting out two points in the hole with its stakeholder empowerment score. The final influence vector tilted more toward customer and societal benefit than self-serving interests. 5G isn’t stuck in a rut due to some indeterminate design defect or malicious corporate intent; anyone who sues AT&T or Nokia at this point doesn’t have a case. Moreover, 5G catalyzed an unprecedented global endeavor, just before a cataclysm of political, social, and even environmental forces threatened to split the world apart.
    The task before 5G’s practitioners now is to regroup and reassess their requirements for providing communications infrastructure using a sustainable business model. This is not an impossible task. The fifth generation (actually the seventh) of wireless infrastructure may yet emerge. But we may need to wait a bit longer for the smoke to clear, before we can all shake hands and start over.

  •

    Juniper Q4 revenue and profit beat expectations, forecast as expected

    Networking equipment maker Juniper Networks this afternoon reported Q4 revenue and profit that came in ahead of Wall Street’s expectations, and forecast this quarter’s results in line with expectations.
    Juniper stock declined by almost 3% in late trading to $26.40. 
    Juniper CEO Rami Rahim said that the company had “experienced better than expected Q4 demand and ended 2020 on a high note by delivering a second consecutive quarter of year-over-year revenue growth.”
    Added Rahim, “Despite the various challenges presented by the pandemic, we achieved many of the objectives we laid out earlier in the year, which included growing our enterprise business for a fourth consecutive year, growing our cloud business for a second consecutive year and stabilizing our service provider business.”
    “We believe these outcomes are a direct result of the strategic actions we have taken, which should position us for sustainable full-year revenue growth starting this year,” said Rahim.
    For the three months ended in December, Juniper’s revenue rose 1%, year over year, to $1.22 billion, yielding EPS of 55 cents.
    Analysts had been modeling $1.19 billion and 53 cents per share.

    For the current quarter, the company sees revenue in a range of $1.005 billion to $1.06 billion, and EPS of 20 cents to 30 cents. That compares to consensus for $1.03 billion and 25 cents. 


  •

    Comcast tops Q4 estimates with steady growth in business, broadband services

    Cable giant Comcast topped fourth quarter estimates and showed strong growth across several lines of business. 
    In the fourth quarter, Comcast reported net income of $3.4 billion, or 73 cents a share, on revenue of $27.7 billion, up 2.4% from a year ago. Excluding items, Comcast reported earnings of 56 cents a share in the fourth quarter.
    Wall Street was expecting Comcast to report fourth quarter earnings of 48 cents a share on revenue of $26.7 billion.
    Comcast consists of several properties, but Comcast Business is among its fastest-growing segments. Comcast Business delivered revenue of $2.1 billion, up 4.8% from a year ago. Comcast’s high-speed Internet revenue was up 12.7% to $5.4 billion while video and voice sales fell. The company’s wireless business grew 35.8% to $505 million, and advertising revenue climbed 33.8%. 

    Overall, the strength in Comcast’s business comes from the cable unit. For 2020, Comcast added 1.6 million cable customer relationships and 2 million high-speed internet net additions — the highest annual gains ever in these categories.


  •

    When you need more ports on your laptop or tablet, do this

    Working from home likely means you’re working on some form of laptop or tablet, and if it was made in the last couple of years, one thing’s for sure: It doesn’t have nearly enough ports to connect all of the accessories we’ve come to rely on to get our jobs done.

    In the last year, we’ve all been a bit deprived of different things, but we don’t have to be port-constrained on our laptops. While I wish we didn’t have to use docks and hubs to address these deficiencies, I am not sure I would want to go back to the good old days, when laptops weighed five pounds and you would break your back carrying them.
    However, because there are so many USB versions, many end users are confused about the underlying technology and what they need to improve connectivity with their laptops and tablets. Let’s see if we can clear some of this up.
    The evolution of Universal Serial Bus (USB)
    Universal Serial Bus 1.0 was introduced by the USB Implementers Forum in January 1996 — exactly 25 years ago. With that introduction, we got the USB-A connector as well. It’s the rectangular receptacle, a one-way keyed connector, that we all know. It’s also the connector we have used for thumb drives and device connectivity on legacy PCs, and on all kinds of peripherals and consumer electronics, for over two decades. It’s in our cars. It’s everywhere.
    Version  | Released      | Signaling rates                              | Notes
    USB 1.0  | Jan. 15, 1996 | Full Speed (12Mbit/s), Low Speed (1.5Mbit/s) | Initial release
    USB 1.1  | August 1998   | Full Speed (12Mbit/s), Low Speed (1.5Mbit/s) |
    USB 2.0  | April 2000    | High Speed (480Mbit/s)                       | Significant speed improvements
    USB 3.0  | November 2008 | SuperSpeed USB (5Gbit/s)                     | Also referred to as USB 3.1 Gen 1 and USB 3.2 Gen 1 × 1
    USB 3.1  | July 2013     | SuperSpeed+ USB (10Gbit/s)                   | Includes new USB 3.1 Gen 2, also named USB 3.2 Gen 2 × 1 in later specifications
    USB 3.2  | August 2017   | SuperSpeed+ USB dual-lane (20Gbit/s)         | Includes new USB 3.2 Gen 1 × 2 and Gen 2 × 2 multi-link modes
    USB4     | August 2019   | 40Gbit/s (2-lane)                            | Includes new USB4 Gen 2 × 2 (64b/66b encoding) and Gen 3 × 2 (128b/132b encoding) modes; introduces USB4 routing for tunnelling of USB 3.x, DisplayPort 1.4a, and PCI Express traffic and host-to-host transfers, based on the Thunderbolt 3 protocol

    The USB-C connector used with USB 3.x, 4.x, and Thunderbolt devices (left), versus the legacy USB-A connector (right), used for USB 1.x and USB 2.x devices.
    (Image: Jason Perlow/ZDNet)
    When that standard was introduced, USB 1.x had a maximum transfer rate of 12Mbps. Over the years, that increased to 480Mbps with USB 2.0, 5Gbps with USB 3.x and, recently, 40Gbps with USB4. The massive increase in bandwidth has allowed things like computer monitors, Ethernet cards, Wi-Fi adapters, and all sorts of other devices to be connected to a PC without having to open it up and use up slots. Remember those?
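    To put those rates in perspective, here’s a quick back-of-the-envelope calculation (our own illustration, using raw signaling rates and ignoring protocol overhead, so real-world transfers are slower) of how long a 10GB file would take at each generation’s top speed:

        # Transfer times for a 10GB file at each USB generation's raw rate.
        # Protocol overhead is ignored, so real transfers take longer.
        FILE_SIZE_GB = 10
        rates_mbit_per_s = {
            "USB 1.x (Full Speed)": 12,
            "USB 2.0 (High Speed)": 480,
            "USB 3.0 (SuperSpeed)": 5_000,
            "USB 3.1 (SuperSpeed+)": 10_000,
            "USB 3.2 (dual-lane)": 20_000,
            "USB4 (2-lane)": 40_000,
        }
        for name, rate in rates_mbit_per_s.items():
            seconds = FILE_SIZE_GB * 8_000 / rate  # 1GB = 8,000 megabits (decimal)
            print(f"{name:22} ~{seconds:8,.0f} s")

    At Full Speed, that file would monopolize the bus for nearly two hours; at USB4’s raw 40Gbit/s, the same transfer is on the order of two seconds.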
    When USB 3.1 was introduced in 2013/2014, we also saw a new connector, the USB-C connector. That’s the small, reversible oval connector that we all now know and love. It is used primarily on Android smartphones, some iPad models, and PC and Mac laptops. But it’s making its way onto all kinds of consumer electronics.
    The port deficiency problem on modern laptops
    Currently, many laptops have only USB-C connectors on them. The biggest offenders here are Apple’s MacBooks, since the company has been very aggressive about ripping out ports over the years. Still, Apple is not the only one. Companies like Dell, Lenovo, HP, and Microsoft have all been making their products thinner and more streamlined. We are getting fewer ports from them due to a desire to make everything light and wirelessly connected.

    You’ll only get two USB-C/Thunderbolt 3/USB 4 ports on a 2020 M1 MacBook Pro.
    Jason Cipriani/ZDNet

    Apple’s current generation of x86 MacBooks has four USB-C connectors, and its latest M1 MacBooks have only two. Each of these connectors can function as a USB 3.0/4.0 port with a transfer rate of 20Gbps, and as a Thunderbolt 3 port.
    Thunderbolt, a standard created by Intel, is even faster: It can transfer data at up to 40Gbps. That means you could conceivably connect things like external graphics processors to a laptop with one of these ports, if the operating system supports it. Or a high-speed 10Gbps network adapter, for example, if you’re one of those people who need to transfer huge data files, like someone working in special effects or a video-editing studio.
    But there’s also DisplayPort
    DisplayPort is a digital display interface developed by a consortium of PC and chip manufacturers, and it was standardized by VESA, the Video Electronics Standards Association, in 2006. So, this was before the USB-C connector. It uses a special 20-pin connector. You’ve probably seen it: It resembles a big rectangle with a notch cut out of it. Virtually all of the desktop monitors you can buy now have DisplayPort connectors on them, in addition to the HDMI or DVI connectors you normally find on older monitors.

    USB-C-to-DisplayPort cable.
    (Image: Jason Perlow/ZDNet)
    To output to a DisplayPort-equipped external monitor, you need either an HDMI-to-DisplayPort cable, a DisplayPort-to-DisplayPort cable, or a USB-C-to-DisplayPort cable. But you are probably thinking what I am thinking: If you want to connect two monitors to a MacBook or a PC laptop, you will have to eat up a bunch of USB-C ports on that laptop. And that’s not counting the USB-C cable used for USB PD to power the laptop. So, before you know it, you’re lucky to have one spare port left. That doesn’t leave room for a mouse, a keyboard, or anything else; you’d better hope you have Bluetooth peripherals to connect instead.
    On a PC laptop with a DisplayPort or USB4 interface, you can do what is referred to as DisplayPort daisy-chaining. That allows you to use a single USB-C port to connect to multiple external monitors, provided your monitors have both DisplayPort input and output ports. There is an upper limit of five monitors per DisplayPort daisy chain — but only at lower resolutions. You can connect one 4K/5K monitor per chain.

    However, on a MacBook, the OS doesn’t support a daisy chain, so you will need a USB-C port dedicated to each monitor — which brings us to docking stations and hubs.
    Laptop and tablet hubs of all kinds
    The good news is that on Mac and PC laptops that support USB-C, USB 4, Thunderbolt 3, and Thunderbolt 4, one cable coming out of the laptop can be split into a lot of ports using a hub or docking station. 
    Docking stations are becoming popular not only for use with laptops but also with tablets like the iPad. They allow you to connect different types of devices, not just USB-C or USB-A ones — many hubs include gigabit Ethernet ports, SD card slots, headphone and audio jacks, and dedicated DisplayPort connectors. They can also power your laptop with as much as 94 watts delivered by USB PD, so they have their own power bricks as well.
    Jason Cipriani’s Picks
    Jason Cipriani, my Jason Squared co-host, is partial to the Belkin Thunderbolt 3 Dock Pro. He uses it with his M1 MacBook Pro and his 2018 iPad Pro. It’s pricey, at $250, but that’s par for the course for Thunderbolt docks. It has 85W upstream charging, allowing you to charge your laptop through the dock. Ports: (2) Thunderbolt 3, (1) USB-A 3.1, (1) USB-C 3.1, (4) USB-A 3.0, (1) DisplayPort, (1) SD card, (1) 3.5mm audio in/out, and (1) Gigabit Ethernet.

    Jason Cipriani likes the Belkin Thunderbolt 3 Dock Pro.
    (Image: Jason Perlow/ZDNet)
    Another hub he uses is the HyperDrive 6-in-1 USB-C hub. Even though it’s designed for the iPad Pro and iPad Air, he’s been able to use it with his MacBook Pro and the Surface Pro X to connect to an external monitor and use the SD card features. It’s $90. 
    Lastly, he uses Apple’s USB-C Digital AV adapter, which adds a USB-C port, a USB-A port, and an HDMI connection. It’s minimal and easy to move between desks or setups, but it lacks all the extra ports that other hubs, like the HyperDrive, have. It’s $70.
    My Picks
    Until recently, I was a heavy user of the CalDigit TS3 Plus, a Thunderbolt 3 dock designed for MacBooks. It was considered the premier dock on the market about two years ago, and it goes for about $300. It has a dedicated DisplayPort and seven USB ports, as well as Ethernet and audio ports. CalDigit has also recently introduced some newer USB-C models with dual dedicated DisplayPort or dual HDMI interfaces.
    Currently, I have been using the Kensington SD5700T, which I recently reviewed. This one has four Thunderbolt 4/USB4 ports. It is designed for the latest M1 MacBooks and PCs and delivers up to 90W of power to the laptop. I have only one Thunderbolt cable coming out of the Mac; it powers the computer and drives two DisplayPort monitors, Ethernet, and a whole mess of USB peripherals. It also costs about $300. Kensington has a bunch of other models, depending on what price point you’re targeting.

    The Hubble for iPad Air/Pro by Fledging.
    I also recently got this Hubble dock for iPad, made by Fledging, which I am using on my 12.9-inch iPad Pro. This thing is a real beauty and was just introduced at CES 2021. It has an HDMI connector for screen mirroring to a monitor or a TV set (in case you want to watch movies in your living room or use Apple Fitness Plus), as well as an SD card slot. It also has a dedicated USB-C port and a USB-A port for connecting external peripherals, that missing audio jack, plus a USB-C charge passthrough. It’s made out of metal that matches your iPad’s color, and it acts as both a case and a stand. So, really, you could turn your iPad into a desktop computer with this thing if you wanted. It costs $99/$110 and works on the 12.9-inch and 11-inch iPad Pro and the latest 10.9-inch iPad Air models.
    There are countless other options available. Ed Bott, my ZDNet colleague, has a thorough roundup of USB-C/Thunderbolt docks you can check out.


  •

    AT&T Q4 2020: Consumers flock to HBO Max, but COVID recovery is far from reality

    AT&T has reported Q4 2020 earnings shored up by an expanding subscriber base, but COVID-19 is still disrupting the company’s operations and leaving an indelible mark on the books. 

    On Wednesday, the Dallas, Texas-based telecoms giant published its fourth-quarter financial results (statement) (.PDF).
    AT&T reported consolidated revenues of $45.7 billion and a net loss attributable to common stock of $13.9 billion, or -$1.95 per share. Adjusted EPS was $0.75 per share, with adjustments including “asset impairments, an actuarial loss on benefit plans, merger-amortization costs and other items.”
    “The company did not adjust for COVID-19 impacts of ($0.08): $0.01 incremental cost reductions and ($0.09) of estimated revenues,” AT&T added.
    AT&T’s third-quarter earnings were $0.39 per share (adjusted EPS $0.76) on revenues of $42.3 billion. 
    However, the firm continues to enjoy subscriber growth. Over Q4 2020, AT&T reported 800,000 postpaid phone net adds, 1.2 million total postpaid net adds, close to six million total domestic wireless net adds, and 273,000 AT&T Fiber net adds. The company said it experienced a net loss of 617,000 premium TV subscribers over the quarter.
    Domestic HBO and HBO Max subscribers have now reached 41 million, with total domestic and international subscribers at close to 61 million. HBO Max activations alone accounted for 17.2 million subscribers as of the end of the quarter.

    Operating loss was reported as $10.7 billion, in comparison to operating income of $5.3 billion in the year-ago quarter. AT&T says “non-cash asset impairments in the quarter and the impact of lower revenues” contributed to this figure.
    When adjusted for non-cash asset impairments, operating income was $7.8 billion in comparison to $9.2 billion in Q4 2019. AT&T’s latest reported operating income margin was 17.1%.
    Free cash flow is now pegged at $7.7 billion. Net debt declined by $1.6 billion.
    For the full 2020 financial year, AT&T revenue totaled $171.8 billion, a drop from $181.2 billion in 2019. 
    “The COVID-19 pandemic impacted revenues across all businesses, particularly WarnerMedia and domestic wireless service revenues, which were pressured from lower international roaming,” the company says. “Declines at WarnerMedia included lower content and advertising revenues, in part due to COVID-19.”
    Revenues in domestic TV services and legacy wireline solutions declined, but sales in domestic wireless equipment and both strategic and managed services “partly offset” these losses, according to AT&T.
    Operating expenses over the year were $165.4 billion (2019: $153.2 billion), and operating income in 2020 was $6.4 billion, down 77.1% year-over-year. With adjustments, however, operating income for FY 2020 is recorded as $34.1 billion, versus $38.6 billion in FY 2019.
    Net loss over 2020 attributable to common stock is $5.4 billion, or $0.75 per share. 
    In 2021, AT&T expects free cash flow of at least $26 billion, as well as a full-year dividend payout ratio in the 50% range. Consolidated revenue growth is expected to be around 1 percent, and gross capital investment is pegged at roughly $21 billion, with $18 billion in capital expenditure.
    “We ended the year with strong momentum in our market focus areas of broadband connectivity and software-based entertainment,” said John Stankey, AT&T CEO. “By investing in our high-quality wireless customer base, we had our best full-year of postpaid phone net adds in a decade and our second lowest postpaid phone churn ever. Our fiber broadband net adds passed the one million mark for the year. And the release of Wonder Woman 1984 helped drive our domestic HBO Max and HBO subscribers to more than 41 million, a full two years faster than our initial forecast.”


  •

    NTT Docomo develops tech to allow 28GHz 5G signals to pass through windows

    Japanese telco giant NTT Docomo has developed a new proof of concept that it touts can “efficiently guide” 28GHz 5G radio signals received from outdoors to specific locations indoors.
    Developed alongside AGC, a Japanese glass manufacturer, the new technology is a film-like metasurface lens that can be attached to window surfaces. Once the film is attached, 28GHz radio signals are able to pass through the window and become available for use indoors.
    Generally, 28GHz signals have low diffraction, which means these types of signals are unable to penetrate windows and need to be within the line of sight of base stations, the company explained.
    But the metasurface lens is able to circumvent this, Docomo said, as it arranges a large number of sub-wavelength unit cells into certain shapes, which allows direct radio signals to pass through to specific points indoors.
    In addition, the metasurface lens material is a transparent film that can cover virtually the entire inside surface of a window and was designed to be “aesthetically acceptable”.
    It also has no effect on LTE and sub-6 band radio waves, so it can be used to improve indoor reception of 28GHz radio signals without affecting the performance of legacy wireless frequencies, Docomo added.
    Meanwhile, Ericsson has launched a 5G network slicing product for radio access networks (RAN) that allows communications service providers to deliver customised 5G services.

    The new 5G RAN slicing product allocates radio resources on a 1-millisecond scheduling interval and supports multi-dimensional service differentiation handling across slices.
    According to Ericsson, the 1-millisecond scheduling strengthens end-to-end slicing capabilities for resource management to make it easier for communications service providers to support customers across various use cases.
    “What makes our solution distinct is that it boosts end-to-end management and orchestration support for fast and efficient service delivery. This gives service providers the differentiation and guaranteed performance needed to monetise 5G investments with diverse use cases. With 5G as innovation platform, we continue to drive value for our customers,” Ericsson Product Area Networks head Per Narvinger said.

  •

    Major internet outages hit Northeast and Mid-Atlantic states

    No, it’s not just you. Beginning this morning, users from Massachusetts to Northern Virginia started having trouble connecting with their favorite internet sites. The service problem this time isn’t with any of the major cloud providers but with the internet itself.

    According to reports on the Outages list, which is the central mailing list for ISP and network operators to report and track major internet connection problems, service failures were first spotted on Long Island. Other reports quickly revealed it wasn’t just a New York regional problem. 
    Users throughout the Northeast and Mid-Atlantic states started reporting trouble reaching their favorite work-from-home sites, such as Gmail, Office 365, Slack, and Zoom, and school-from-home sites, such as Blackboard and Schoology. Downdetector, which tracks website outages, showed serious slowdowns at many sites, including Google, Zoom, Amazon Web Services (AWS), and Outlook, late on Tuesday morning.
    The root cause appears to be a Verizon fiber cut in Brooklyn, NY. Verizon reported on Twitter: “There is a fiber cut and it has been reported, our technicians are aware; therefore working to resolve it as soon as possible.” The Verizon ALTER.NET backbone network also went down, taking with it Verizon Wireless and DSL internet connections.
    In addition, many Verizon Fios customers on the East Coast are reporting major slowdowns on all their internet services. Other users who aren’t Verizon customers are saying they’re seeing slowdowns as well. That’s probably because of traffic backups caused by the Verizon fiber failure: Just as when a multi-lane highway loses a lane, all the traffic slows down.
    At 1 PM Eastern, it appears that internet speeds are beginning to pick up. With luck and work, this morning’s internet failure may be over later this afternoon. 

  •

    Linux distributors frustrated by Google's new Chromium web browser restrictions

    While Google Chrome is easily the most popular PC web browser, its open-source big brother, Chromium, doesn’t have that many users — but it’s always had some fans on desktop Linux. Now, though, that love affair is in trouble.


    Google claims it recently found unnamed third-party Chromium-based browsers integrating Google cloud-based features, such as Chrome Sync and Click to Call, that were intended only for Google Chrome users. In other words: “This meant that a small fraction of users could sign into their Google Account and store their personal Chrome sync data, such as bookmarks, not just with Google Chrome, but also with some third-party Chromium-based browsers.”
    Google was not amused. 
    Google said it will limit access to many Chrome application programming interfaces (APIs) inside Chromium starting March 15, 2021. This means users of the Chromium web browser, or of any other browser based on its open-source codebase, won’t be able to use most Google-specific API-enabled services. That includes the ability to sync Chrome bookmarks, check your spelling, find your contacts, translate text, and on and on.
    Many users aren’t happy now, either. Thom Holwerda, managing editor of OSNews, spoke for many when he wrote that Google’s “not closing a security hole, they’re just requiring that everyone use Chrome. Or to put it bluntly, they do not want you to access their Google API functionality without using proprietary software (Google Chrome).”
    Developers can still get keys to these APIs, once they jump through the necessary hoops of acquiring API keys and an OAuth 2.0 client ID. But Google underlines that “the keys you have now acquired are not for distribution purposes and must not be shared with other users.”
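    For developers who do obtain their own keys, Chromium’s developer documentation describes supplying them either at build time or at run time through environment variables. Here’s a minimal run-time sketch in Python; the key values are placeholders, and the binary name varies by distribution:

        import os
        import subprocess

        # Launch a local Chromium build with personal, non-distributable Google
        # API credentials supplied through the environment variables described
        # in Chromium's API-keys documentation. The values are placeholders.
        env = dict(
            os.environ,
            GOOGLE_API_KEY="YOUR_API_KEY",
            GOOGLE_DEFAULT_CLIENT_ID="YOUR_CLIENT_ID",
            GOOGLE_DEFAULT_CLIENT_SECRET="YOUR_CLIENT_SECRET",
        )
        # On some distributions the binary is "chromium-browser" instead.
        subprocess.run(["chromium"], env=env)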

    In theory, a developer could pull the API keys out of mainline Chrome and maintain their Chromium build’s Google functionality. However, that’s just asking for a lawsuit.
    Besides, as Jochen Eisinger, Google’s director of engineering for Chrome Trust & Safety, remarked on the Google Chromium developer group: “We won’t remove the API from your key, but we’ll limit the quota to the quota for development. … this will make the keys unsuitable for production use.” These “APIs were not designed to be used by third-party software, so short of a complete rewrite, there is unfortunately no [other] option.”
    So, where does that leave Linux distributors who’ve been bundling Chromium? Between a rock and a hard place. 
    Porting Chromium to Linux is not trivial. Alan Pope, Canonical’s community manager for Ubuntu Linux engineering services, explained why Canonical started shipping Chromium in an Ubuntu Snap container rather than as a DEB package:

    Maintaining a single release of Chromium is a significant time investment for the Ubuntu Desktop Team working with the Ubuntu Security team to deliver updates to each stable release. As the teams support numerous stable releases of Ubuntu, the amount of work is compounded. Comparing this workload to other Linux distributions that have a single supported rolling release misses the nuance of supporting multiple Long Term Support (LTS) and non-LTS releases.
    Google releases a new major version of Chromium every six weeks, with typically several minor versions to address security vulnerabilities in between. Every new stable version has to be built for each supported Ubuntu release − 16.04, 18.04, 19.04, and the upcoming 19.10 − and for all supported architectures (amd64, i386, arm, arm64).

    While Snap has made this easier, it’s still not easy. According to sources, Canonical has not yet decided whether it will continue shipping Chromium without end-user support for the Google-specific service APIs.
    Linux Mint recently started bundling its own Chromium browser, and Mint leader Clement “Clem” Lefebvre is sticking with it: “We’re not going to do anything. We’ll continue to package Chromium.”
    Red Hat’s community Linux distro Fedora, however, was seriously considering dumping Chromium. Tom Callaway, Chromium’s Fedora maintainer, explained that it’s because Google is “cutting off access to the Sync and ‘other Google Exclusive’ APIs from all builds except Google Chrome. This will make the Fedora Chromium build significantly less functional (along with every other distro packaged Chromium).”
    However, after consideration, Callaway explained: “I never said I was going to remove Chromium from Fedora. I said I was seriously considering it, but after much thought, I decided that there were enough users who still wanted it, even without the functionality provided by the Google API.” So, starting immediately, Fedora’s version of Chromium no longer supports the soon-to-be-deprecated APIs.
    Callaway really wishes Google would reconsider its position. But he sees little chance of it. “What frustrates me the most,” Callaway tweeted, “is how no one on the Chrome team understands the concept of open source community building. Nothing the Chr[omium] maintainers did ever hurt Chrome, they only ever made it stronger.”
    Other Linux distributions are edging closer to dumping Chromium. Arch Linux maintainers have thought about it, but, for now, they’ll continue to keep Chromium around even after the March 15th deadline. 
    Eric Hameleers, who maintains Chromium for Slackware Linux, is dropping Chromium. “I will not package and distribute a Chromium for Slackware if that package is crippled by the absence of login to Chrome Sync. I will not package a Chromium build with Google’s own ID and secret embedded. Instead, I will do the right thing: advise people not to use Chrome but switch to Firefox.”
    With this move, Google has alienated code maintainers and developers at multiple Linux distributions. When Linux Chromium users discover the latest versions won’t work as they have before, they’ll be unhappy too. 
    True, this affects only a small number of users. But it’s leaving many others with a bad taste in their mouths over how Google failed the open-source community in this instance. That, in the end, will matter more than this move’s immediate impact on programmers and end users.