More stories

  • Celona aims to make private enterprise 5G networks commonplace


    Celona, a startup, is launching technology that will make it easier for enterprises to deploy private LTE/5G wireless networks with one platform.
    The technology, called MicroSlicing, uses software to abstract cellular wireless network design and translate it into a framework similar to what IT departments manage today.
    Celona’s platform also leverages Citizens Broadband Radio Service (CBRS) spectrum in the US, so enterprises can use it for 5G mobile devices as well as Internet of Things infrastructure.
    The company offers a single software-as-a-service license with three- and five-year subscription options. The pricing covers indoor and outdoor access points, a license for spectrum access, software, SIM cards, and technical support.
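    As a rough illustration of what that abstraction might look like in IT-department terms, here is a hypothetical Python sketch: the slice names, fields, and policy model are all invented for illustration and are not Celona's actual API.

      # Hypothetical sketch of the "micro-slicing" idea: carving one physical
      # cellular network into per-application virtual slices, each with its own
      # service-level targets. All names and fields here are invented for
      # illustration; this is not Celona's actual API.
      from dataclasses import dataclass

      @dataclass
      class MicroSlice:
          name: str                   # e.g. "agv-robots", "voice", "cctv"
          max_latency_ms: int         # per-packet latency target for the slice
          min_throughput_mbps: float  # guaranteed floor for the slice
          device_group: str           # which SIM/device group the slice serves

      # An IT admin expresses intent in familiar terms ...
      slices = [
          MicroSlice("agv-robots", 20, 5.0, "warehouse-agvs"),
          MicroSlice("cctv", 200, 25.0, "loading-dock-cameras"),
      ]

      # ... and the platform, not the admin, would translate these targets into
      # radio-level scheduling on the CBRS access points.
      for s in slices:
          print(f"slice {s.name}: <= {s.max_latency_ms} ms, "
                f">= {s.min_throughput_mbps} Mbps for {s.device_group}")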

    According to Celona, HPE’s Aruba unit will resell the company’s product portfolio. Celona is targeting customers that need to segment networks for interference-free connectivity to devices: Celona-powered networks are designed to ride shotgun with enterprise Wi-Fi networks, providing a dedicated, interference-free network for smartphones and sensors.
    Celona’s lineup includes:
    Celona RAN, a set of enterprise indoor and outdoor CBRS LTE access points that provide up to 25,000 square feet of indoor coverage and up to 1 million square feet outdoors. Radio functions are automated via Celona software.
    Celona Edge, a private LTE/5G core that integrates with existing enterprise network configurations and access control policies.
    Celona Orchestrator, an AIOps platform that remotely installs Celona access points and Edge software across sites. The Celona Orchestrator also allows for provisioning of Celona SIM cards.
    The company’s wares are available through Celona’s channel partners.  


  • Cisco reveals this critical bug in Cisco Security Manager after exploits are posted – patch now

    Cisco has disclosed a critical security flaw affecting its Cisco Security Manager software, along with two other high-severity vulnerabilities in the product. 
    Cisco has flagged that the three security vulnerabilities are fixed in version 4.22 of Cisco Security Manager, which was released last week. 


    Cisco Security Manager helps admins manage security policies on Cisco security devices and provision Cisco’s firewall, VPN, Adaptive Security Appliance (ASA) devices, Firepower devices, and many other switches and routers. 
    SEE: IoT: Major threats and security tips for devices (free PDF) (TechRepublic)
    The most serious issue addressed in release 4.22 is a path-traversal vulnerability, tracked as CVE-2020-27130, which could allow a remote attacker without credentials to download files from an affected device. 
    The issue, with a severity rating of 9.1 out of 10, affects Cisco Security Manager releases 4.21 and earlier. 
    “The vulnerability is due to improper validation of directory traversal character sequences within requests to an affected device. An attacker could exploit this vulnerability by sending a crafted request to the affected device,” Cisco explains in the advisory. 
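    Cisco’s advisory stops at that description, but path traversal is a well-understood class of bug. Here is a minimal, hypothetical Python sketch of the vulnerable pattern and the usual fix; the file-serving handler is invented for illustration and is not Cisco Security Manager code.

      # Sketch of a path-traversal bug and the usual fix (Python 3.9+ for
      # Path.is_relative_to). The handler is hypothetical, not Cisco's code.
      from pathlib import Path

      WEB_ROOT = Path("/var/www/files")

      def serve_file_vulnerable(requested: str) -> bytes:
          # BAD: "../" sequences are never validated, so a crafted request
          # like "../../etc/passwd" walks out of WEB_ROOT.
          return (WEB_ROOT / requested).read_bytes()

      def serve_file_fixed(requested: str) -> bytes:
          # GOOD: resolve the final path, then confirm it is still inside
          # WEB_ROOT before touching the filesystem.
          target = (WEB_ROOT / requested).resolve()
          if not target.is_relative_to(WEB_ROOT.resolve()):
              raise PermissionError("request escapes the web root")
          return target.read_bytes()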

    The company appears to have published the advisory after Florian Hauser of security firm Code White, who reported the bugs to Cisco, published proof of concept (PoC) exploits for 12 vulnerabilities affecting Cisco Security Manager. 
    Hauser, who uses the Twitter handle @frycos, said in a tweet that he reported 12 flaws affecting the web interface of Cisco Security Manager 120 days ago, on July 13. 
    He says he decided to release the PoCs because Cisco didn’t mention the vulnerabilities in the 4.22 release notes and had not published advisories.

    “Several pre-auth vulnerabilities were submitted to Cisco on 2020-07-13 and (according to Cisco) patched in version 4.22 on 2020-11-10. Release notes didn’t state anything about the vulnerabilities, security advisories were not published. All payloads are processed in the context of NT AUTHORITY\SYSTEM,” he wrote. 
    Among them are multiple vulnerabilities in the Cisco Security Manager’s Java deserialization function, which could allow remote attackers without credentials to execute commands of their choice on the affected device. 
    Unfortunately, Cisco hasn’t fixed these Java deserialization vulnerabilities in the 4.22 release but plans to fix them in the next 4.23 release. Cisco also says there are no workarounds and has not listed any mitigations that could be used until a fix arrives. 
    SEE: Ransomware victims aren’t reporting attacks to police. That’s causing a big problem
    These issues affect releases 4.21 and earlier and have a severity rating of 8.1 out of 10. Cisco issued the identifier CVE-2020-27131 to the bugs, which are due to insecure deserialization of user-supplied content.
    “An attacker could exploit these vulnerabilities by sending a malicious serialized Java object to a specific listener on an affected system. A successful exploit could allow the attacker to execute arbitrary commands on the device with the privileges of NT AUTHORITY\SYSTEM on the Windows target host,” Cisco explains. 
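    The flaws here are in Java deserialization, but the failure mode is the same one Python’s pickle module exhibits, which makes for a compact illustration: deserializing attacker-controlled bytes can execute attacker-chosen code. A sketch of the pattern, with a deliberately harmless payload:

      # Why deserializing untrusted bytes is dangerous, shown with Python's
      # pickle as an analogue of the Java flaws in CVE-2020-27131. The payload
      # here is deliberately harmless.
      import pickle

      class Malicious:
          # pickle consults __reduce__ to learn how to rebuild an object; an
          # attacker can make "rebuilding" mean "call this function".
          def __reduce__(self):
              import os
              return (os.system, ("echo attacker code ran",))

      payload = pickle.dumps(Malicious())

      # The vulnerable pattern: a listener that trusts incoming bytes.
      pickle.loads(payload)  # runs the embedded command during deserialization

      # The structural fix: never feed untrusted data to a general-purpose
      # serializer; use a schema-validated format such as JSON, or an
      # allow-list of permitted classes.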
    A third flaw affecting Cisco Security Manager releases 4.21 and earlier, tracked as CVE-2020-27125, can allow an attacker to view insufficiently protected static credentials on the affected software. The credentials are viewable to an attacker looking at source code. 
    This issue, with a severity rating of 7.1, is fixed in release 4.22.
    Cisco’s Product Security Incident Response Team (PSIRT) said it is aware of public announcements about these vulnerabilities but is not aware of any malicious use of them.

  • Uniti gets more greenfields with AU$9.25m Harbour ISP deal

    Uniti has continued its telco shopping spree, announcing on Tuesday it has signed an agreement to purchase greenfields specialist Harbour ISP.
    With a maximum purchase price of AU$9.25 million and AU$1 million worth of options for Uniti shares, Uniti said the purchase price would consist of 90% cash and 10% in shares. At the time of writing, Uniti was trading at AU$1.59 a share.
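    Working through those figures (a back-of-envelope check; the share count assumes the AU$1.59 price quoted above):

      # Back-of-envelope split of the maximum purchase price.
      price = 9.25e6        # AU$9.25m maximum purchase price
      cash = 0.90 * price   # 90% cash   -> AU$8,325,000
      scrip = 0.10 * price  # 10% shares -> AU$925,000
      share_price = 1.59    # AU$ per Uniti share at the time of writing
      print(f"cash AU${cash:,.0f}, scrip AU${scrip:,.0f}, "
            f"~{scrip / share_price:,.0f} shares")  # roughly 581,761 shares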
    As a result of the deal, Uniti is set to gain 30,000 broadband customers in housing estates and apartment complexes, doubling Uniti’s consumer business numbers.
    “Of particular strategic significance to Uniti is Harbour’s close alliances with a number of national property development companies, including Mirvac,” Uniti said.
    “In certain instances, these developer alliances see Harbour enjoy ‘preferred RSP’ status, delivering strong take-up of Harbour broadband services in new greenfield developments via a series of cooperative marketing activities, undertaken with the support and endorsement of the developer.”
    Last week, Uniti completed its purchase of greenfields broadband builder OptiComm.
    A fortnight prior, Uniti gained ACCC approval to functionally separate, allowing it to operate as both a wholesale and retail provider in greenfield areas.

    At the time, the company said the separation would enable it to actively promote its retail brands to around 110,000 connected premises nationwide and an additional 44,000 premises that are currently under construction when they become connected.
    Uniti said on Tuesday it plans to integrate Harbour ISP into its consumer business in under six months and expects an acquisition multiple of under three times earnings before interest, tax, depreciation, and amortisation (EBITDA). By fiscal year 2022, Uniti said it expects Harbour ISP to have EBITDA of AU$3 million.
    Harbour ISP has been “amongst the most active and effective RSPs” on Opticomm’s network for several years, Uniti said.
    “Functional separation now enables us to actively promote retail broadband offerings on our owned networks and Harbour ISP, with its proven pedigree in the greenfield broadband market, is an outstanding platform for us to build specific capability as well as scale in our [consumer] business unit,” Uniti CEO and managing director Michael Simmons said.
    “With the now confirmed addition of OptiComm to the Uniti Group, our network of private fibre premises (connected, under construction or contracted to be connected) exceeds 400,000 connections. Given this large and growing footprint, the strategic value of acquiring Harbour as a specialist greenfield RSP is significant and timely.”

  • Photon juggling: One big quantum processor from 100 little ones

    The processing assembly for Google’s Sycamore quantum computer.
    In the not-terribly-distant past, the goal of quantum computing research was to achieve a milestone called quantum supremacy: the point at which a quantum computer can, in practical terms, be considered superior to a classical, semiconductor-based computer for any task you give it. Google certainly made a big enough fuss about claiming it. That is no longer the goal. Engineers and scholars have since conceded that supremacy in this sense is not possible — that a quantum device cannot simply supersede a classical device. (Of course, it may seem a little too convenient that they should make this declaration now.)

    The principal reason for this is not that a quantum computer (QC), once the plans for its development are fully realized, would somehow be inferior. A quantum computer is, and because of the nature of physics always will be, a quantum processor maintained and marshaled by a classical control system. Despite that title, “control” may be an imprecise word in this context. Although such devices may yet become the foundation for a new industry, they don’t really control quantum processing any more than a barbed wire fence controls a prison riot. More accurately, they control the gateway leading to and from the prison, with the guards making sure to watch only the gateway and nothing else (because watching something requires photons, and photons will make the qubit stack — the core processing element — decohere.)
    No, the reason is because a quantum system includes, and depends upon, a classical computer. It’s tempting to say the two rely upon each other, but that would misinterpret their working relationship. Tell a QC it’s dependent upon anything else, and it’s liable to throw a qubit and fall apart.
    What engineers and programmers are seeking now is a kinder, gentler position of achievement and authority. Some have opted for the phrase quantum advantage, which would imply that the QC has a clearly measurable virtue, in terms of performance, speed, or quality, over a classical supercomputer. Others prefer quantum practicality, which is softer still, implying that the QC would be the device one would rationally choose to perform a task, given a rational analysis of the alternatives.
    “You might think, ‘Well, we’ve achieved quantum advantage last year at Google. So it’s probably a few years’ worth of work to get to quantum practicality, isn’t it?'” said Prof. Lieven Vandersypen, the scientific director of Dutch public/academic partnership QuTech, speaking at the recent IQT Europe 2020 conference. Google’s supremacy claim was made after having provably maintained the execution of a task with a 53-qubit register. So perhaps the road to 100 qubits is paved, smooth, and unobstructed, if one takes this point of view. Prof. Vandersypen continued:

    Prof. Lieven Vandersypen

    A few hundred qubits comes not out of nowhere… This is the point where useful problems could be addressed. On the other hand, perhaps millions of qubits are needed… or maybe a miracle in designing new quantum algorithms. So which of these applies, and how do I look at it? Certainly I don’t believe for a minute that it is just a few years’ worth of work to achieve real quantum practicality. If we look at the projections, indeed, we are going to achieve as a community a few hundred qubits. But these will be not perfect qubits. Then what you need are a hundred perfect qubits that will run indefinitely without error, and can carry through as many operations as are needed to really enter this quantum practicality regime. Okay, they don’t need to be really perfect, but they have to be, let’s say, between 1,000 and 10,000 times higher quality than any of the qubits that we can operate today. That is not completely out of the question, but for sure, not going to happen in a few years’ time.

    Vandersypen makes multiple references to “a few years,” and not by coincidence. In the midst of a global pandemic, and an ongoing shift in the global order, a few years’ worth of government and institutional funding may be all that institutions like QuTech can hope for.
    What would render the entire question of supremacy, advantage, or “edginess” somewhat moot is if there were some force somewhere, perhaps a force of physics, that could make multiple QCs, and perhaps all QCs on Earth, simultaneously interoperable. This is what quantum entanglement actually is. A complete understanding of the underlying principles of a quantum information network (QIN) requires explanations that don’t just border on the philosophical, but plunge head-first into the ocean of the metaphysical.

    Entanglement-as-a-service

    Generally speaking, the laws of physics have thus far referred mainly to the explicate order. Indeed, it may be said that the principle function of Cartesian coordinates is just to give a clear and precise description of explicate order. Now, we are proposing that in the formulation of the laws of physics, primary relevance is to be given to the implicate order, while the explicate order is to have a secondary kind of significance (e.g., as happened with Aristotle’s notion of movement, after the development of classical physics). Thus, it may be expected that a description in terms of Cartesian coordinates can no longer be given a primary emphasis, and that a new kind of description will indeed have to be developed for discussing the laws of physics.
                                                             David Bohm, Wholeness and the Implicate Order, 1980

    A quantum information network (QIN), if it can be built, would accomplish something that can’t be done in physical reality. Not even science fiction has manifested a contraption such as this. Had Isaac Asimov any clue that such a thing might be feasible, the Robot series would ultimately not have been about robots.

      Mathias van den Bossche
    “You don’t send information on a quantum information network,” explained Mathias van den Bossche, who directs telecommunications and navigation systems research for the Franco-Italian satellite manufacturer Thales Alenia Space. “You weave entanglement correlations from one end user to the other end user. When this is available, everything in the middle disappears, and the end users discuss directly. This means you have actually nothing that is being repeated along the network, apart from the entanglement that swaps from link to link — there is no information that is repeated.”
    The only way to adequately convey the function of a QIN is with a ridiculous metaphor: Imagine if the state of being connected, of working together as a cohesive unit, were something you could take with you, as though a dealer in a poker game handed it to you. Own this card, and someone else’s poker hand at the other end of the table is part of yours. If he has two kings and so do you, you now have four-of-a-kind.  (And so does he, but at least you know that.)
    Now imagine you were playing a variant of the game where players could trade cards. Connectedness with one player’s hand could be something you could trade, perhaps for a card granting you connectedness with another player’s hand. To complete this metaphor, imagine you were playing this game using a kind of networking where trading the value of the card would be exactly the same as trading the card itself.
    At this point you might, if I’ve phrased this correctly, have an inkling of an idea of what a quantum network would do. Let’s give that capability a purpose: You already know how, with an electronic computer, a component interfaces with the motherboard by being plugged into its bus. That interface gives the component connectedness (some call this “connectivity,” but in present-day networking that actually means something else). The theory of a quantum network is that, at the quantum level, the connectedness of two components can be communicated. The result is that you could obtain an exponentially stronger, single quantum computer using two QCs that swapped their connectedness properties with one another.
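    The “exponentially stronger” claim is Hilbert-space arithmetic: entangling two registers multiplies their state spaces rather than adding them. A worked example using Google’s 53-qubit figure:

      % A register of n qubits spans a Hilbert space of dimension 2^n, so
      % entangling two registers multiplies, rather than adds, their state spaces:
      \[
        \dim\bigl(\mathcal{H}_{n_1} \otimes \mathcal{H}_{n_2}\bigr)
          = 2^{n_1} \cdot 2^{n_2} = 2^{\,n_1 + n_2}
      \]
      % Two separate 53-qubit machines: $2 \times 2^{53} \approx 1.8 \times 10^{16}$
      % amplitudes. The same two machines entangled: $2^{106} \approx 8.1 \times 10^{31}$.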

      Stephanie Wehner
    “Ultimately, in the future, we would like to make entanglement available for everyone,” declared Stephanie Wehner, who leads the Quantum Internet Initiative at QuTech, speaking at IQT Europe 2020.  “This means enabling quantum communications, ultimately, between local quantum processors anywhere on Earth.”
    Although quantum networks join pairs of QCs, and the connection between them may be volatile, a QIN must maintain some continual state for it to exist at the quantum level at all. So the minimum number of nodes in a QIN is three, not two, so that one link is always maintained. A November 2019 report by Toshiba Research and the University of Cambridge, introducing their minimal, 3-node QIN between Cambridge, Bristol, and London, remains current due to delays imposed by the pandemic. The goal of a QIN is not just to communicate quantum states between pairs of locations but, because photons are mobile by nature (you can’t capture light in a jar), to remember those states by juggling them from place to place like hot potatoes. A quantum network is thus a kind of quantum memory.
    Computing the optimization of paths in a QIN could possibly, van den Bossche conceded, require a QC, if the behavior of the network as a whole cannot be modeled. The very idea of a quantum computer was sparked by Dr. Richard Feynman, considered the father of much of this science, including quantum electrodynamics, who suggested in the course of one of his impromptu lectures that only a QC could model quantum mechanical behavior.
    But quantum entanglement — the phenomenon where two atoms, once having been joined, share the state of one property regardless of their distance in space — can, insofar as we know, only be shared between two atoms anyway. There will be no problem of joining several entangled QCs together directly, because theory tells us this is impossible.
    The trick to making a QIN functional may then become a switching problem: opening links in an optical fiber chain connecting sources to destinations, like locks in a canal. Yet it might not be an impossible problem, even for a classically managed network. Quantum connections may indeed be achieved between points over long distances, so long as we accept them as multiple-step routes with stops along the way.
    The reason this is important has to do with QC’s future role in protecting all communications, including the conventional variety that otherwise has nothing to do with quantum. Once it becomes a trivial matter for a stable QC to decrypt any classically encrypted message in mere moments, the only thing stopping the collapse of digital communications as we know them will be a restriction of access to quantum communications. And if the history of the Web has proven anything, it’s that one sure way to extend the propagation of information is a paltry attempt to seal off access to it.

    Entanglement distillation
    Protected communications require some form of encoded key. Because it involves photons rather than algorithms, a quantum key is purely random. What’s more, it cannot be copied within the network without destroying both it and the message it protects in the process. So the most a malicious actor can do is disrupt the process, not swipe the decrypted message.
    The intent of the emerging art of quantum key distribution (QKD) is to leverage quantum mechanics’ inexplicable quirks to protect all digital communication, now and into the foreseeable future. The astute reader may have already gathered that the actual protection delivered by a quantum key only works in the context of a QIN. So for a message outside the QIN to remain protected, it would require some aspect of classical encryption within the classical context — at least until everyone owns her own QC, which is extremely unlikely. Typically, a chain is only as strong as its weakest link.
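    The canonical QKD scheme behind this description is BB84, in which the shared random key emerges from the subset of transmissions where sender and receiver happened to choose the same measurement basis. A toy Python simulation of that sifting step, assuming an ideal, eavesdropper-free channel:

      # Toy BB84 sifting: Alice encodes random bits in random bases, Bob
      # measures in random bases, and both keep only the positions where the
      # bases happened to match. Ideal, noiseless channel, no eavesdropper --
      # a sketch of the principle, not a secure implementation.
      import random

      n = 32
      alice_bits  = [random.randint(0, 1) for _ in range(n)]
      alice_bases = [random.choice("+x") for _ in range(n)]  # rectilinear/diagonal
      bob_bases   = [random.choice("+x") for _ in range(n)]

      # When bases match, Bob reads Alice's bit exactly; when they differ,
      # his outcome is random and the position is discarded during sifting.
      bob_bits = [b if ab == bb else random.randint(0, 1)
                  for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

      key_alice = [b for b, ab, bb in zip(alice_bits, alice_bases, bob_bases) if ab == bb]
      key_bob   = [b for b, ab, bb in zip(bob_bits, alice_bases, bob_bases) if ab == bb]
      assert key_alice == key_bob  # matched-basis bits agree on a clean channel
      print(f"sifted key, {len(key_alice)} of {n} bits:", "".join(map(str, key_alice)))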
    Yet since 2012, there has been a theoretical framework [PDF] for pairing quantum and classical cryptography: one that leverages the QIN to authenticate the classical key. In the absence of a quantum key, its classical counterpart would be useless.
    In the near future, the measure of the success of quantum computing as an ecosystem — as something more than an experiment at headline generation — will be whether an independent security organization can earn sustainable revenue as a producer and distributor of quantum keys. That may only be possible when commercial customers perceive quantum networking as something that directly improves, and probably accelerates, classical networking: the Internet (the one with the capital “I”).

      Prof. Saikat Guha
    “Quantum internet, the way I think of it, will not be a brand new internet,” remarked Prof. Saikat Guha, who directs the National Science Foundation’s Center for Quantum Networks.  “It will be upgrading our current Internet to be able to connect quantum devices, quantum gadgets.” Prof. Guha continued:

    People often say the quantum internet is going to make the classical Internet faster and more powerful. We’ve got to be cautious about that, because adding this new quantum communication service on the Internet is actually going to put an additional classical communications burden on the classical Internet. We’re going to have to support higher-bandwidth communications on the classical Internet, to be able to support this additional service we are putting on top of that infrastructure, not just in terms of the extra control plane communications traffic that has to be sustained, but also there is additional, inherent classical communication that is required for purification, entanglement distillation, quantum error correction, and so forth.

    As if the metaphysical implications weren’t dramatic enough, we now have some new, practical, common-sense implications to deal with: Even if we achieve the theoretical objective of compounding smaller QCs together into one larger one by way of a QIN, actually getting the interconnected particles to do what we want them to do requires the type of algorithmic optimization that presently requires a QC itself — in other words, it is impractical in the classical realm. Perhaps the biggest optimization we’ll need appears in Guha’s list: entanglement distillation. This is where an operation involving multiple, weakly entangled qubits is refined into one with a smaller number of more strongly entangled ones.
    As researchers from Düsseldorf’s Institute for Theoretical Physics III found as far back as 2013, generating quantum keys that work over sustainable distances requires entanglement distillation — trading weakly entangled pairs for fewer, cleaner ones so that what arrives over the fiber is usable. As Guha suggests, this may have to be done in a classical setting; otherwise it becomes a chicken-and-egg problem, where the QKD system needs distillation in order to optimize itself.
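    To get a feel for the numbers, the simplest textbook recurrence protocol consumes two pairs of fidelity F and, on success, keeps one pair of fidelity F' = F²/(F² + (1−F)²). That formula is a simplified two-parameter model, not the full Bell-diagonal treatment, but it shows how quickly fidelity climbs:

      # Simplified entanglement-distillation recurrence: consume two pairs of
      # fidelity F, keep one pair of higher fidelity F' when post-selection
      # succeeds. Textbook two-parameter model, not the full protocol.
      def distill(f: float) -> float:
          return f * f / (f * f + (1 - f) ** 2)

      f = 0.75
      for step in range(3):
          f = distill(f)
          print(f"after round {step + 1}: F = {f:.4f}")
      # Fidelity climbs 0.75 -> 0.9000 -> 0.9878 -> 0.9998, at the cost of
      # (at best) half the surviving pairs per round.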
    In building a system that not only relies upon, but is leveraged upon, an as-yet-unexplained physical phenomenon where changes of state happen with perfect simultaneity, it is extremely difficult to determine the identity and location of step one in the sequence. It’s the type of problem we would like to have a quantum computer to solve for us. For now, we’re stuck with the best thinking machines available to us. And when reasoning on this level, the output of these particular machines tends to look more like philosophy than logic.

  • AGL makes its move into reselling NBN services

    After looking at which telco it wanted to buy in 2019, and eventually settling on Southern Phone in an AU$27.5 million deal, AGL entered the NBN reselling market under its own brand on Friday.
    The company is opening with a trio of unlimited plans, offering AU$15 a month off if energy is bundled in.
    The smallest is a AU$75-per-month 25Mbps plan with typical evening speeds of 19Mbps. There’s also a 50Mbps plan for AU$80 per month with typical evening speeds of 38Mbps, and a 100Mbps plan with typical evening speeds of 76Mbps between 7pm and 11pm.
    By way of comparison, Telstra is currently selling its plans based on serving up the rated speed during evenings, so hitting 100Mbps during busy evening periods for its 100Mbps plan.
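    Notably, the quoted typical evening speeds all work out to the same fraction of the headline rate, which makes the comparison with Telstra easy to put in numbers:

      # AGL's quoted typical evening speeds as a fraction of each plan's
      # headline rate -- all three work out to 76%.
      plans = {25: 19, 50: 38, 100: 76}  # headline Mbps -> evening Mbps
      for headline, evening in plans.items():
          print(f"{headline}Mbps plan: {evening}Mbps evening = "
                f"{evening / headline:.0%} of headline")
      # Telstra, by contrast, is selling to the full rated speed in the busy
      # evening period (100Mbps evening on its 100Mbps plan).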
    In announcing its restructure yesterday, Telstra also revealed it was looking to enter the energy market.
    “Recognising the growing convergence of energy and data, it’s becoming more important to meet the needs of our connected customers by providing the essential services of the future,” AGL chief customer officer Christine Corbett said.
    “Last year we acquired Southern Phone, one of Australia’s largest regional telecommunications companies with more than 168,000 active NBN and ADSL broadband internet and mobile phone services in regional Australia.

    “However, the launch of broadband products represents our first move into data and telecommunications under the AGL brand and also supports our strategic priority of growth.”
    AGL said it is looking to increase its customer count from 4.2 million to 4.5 million by the end of the 2024 fiscal year, and increase the average number of services from 1.4 to 1.6 per customer.
    The energy company is the largest private electricity generator in the country.

  • Cisco tops Q1 expectations, shares climb on strong guidance

    Cisco published its first quarter financial results on Thursday, beating market expectations and issuing strong guidance for the current quarter. The networking giant posted non-GAAP earnings of 76 cents per share on revenue of $11.9 billion, a decline of 9% year over year.

    Analysts were expecting earnings of 70 cents per share on revenue of $11.85 billion.
    Breaking revenue out by segment, product revenue was down 13% to $8.58 billion, and service revenue was up 2% to $3.34 billion. Revenue from infrastructure platforms, which includes Cisco’s networking and router portfolios, was down 16% to $6.34 billion. Sales of security products grew 6% to $861 million, and sales of applications fell 8% to $1.38 billion for the quarter.
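    As a quick sanity check, the segment figures reconcile with the headline number once rounding is allowed for (year-ago revenue here is backed out from the stated percentage moves):

      # Sanity-check Cisco's Q1 segment math against the headline figure.
      product, service = 8.58, 3.34  # $bn, as reported
      print(f"segments sum to ${product + service:.2f}bn vs ~$11.9bn headline")

      # Back out year-ago revenue from the stated moves (-13% product, +2%
      # service, -9% overall); the small gap is rounding in reported figures.
      implied = product / (1 - 0.13) + service / (1 + 0.02)
      print(f"implied year-ago ${implied:.2f}bn vs ${11.9 / (1 - 0.09):.2f}bn")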
    “Our Q1 results reflect good execution with strong margins in a challenging environment,” said Kelly Kramer, CFO of Cisco.  “We continued to transform our business through more software offerings and subscriptions, driving 10% year over year growth in remaining performance obligations. We delivered strong growth in operating cash flow and returned $2.3 billion to shareholders.”
    In a separate announcement, Cisco revealed that Kramer will retire as CFO in December. Her replacement is Scott Herren, who most recently served as chief financial officer of Autodesk.
    For the second quarter, the company is predicting non-GAAP earnings between 74 cents and 76 cents with revenue ranging from flat to a decline of 2% year-over-year. Wall Street is looking for non-GAAP earnings of 73 cents per share with $11.63 billion in revenue.
    Shares of Cisco were up over 8% after hours. 


  • Users shift off 100/40Mbps NBN plans

    Even though these customers make up only a small portion of the NBN’s base, there has been quite a shift in how users connect on plans offering speeds at or in excess of 100Mbps.
    According to the latest edition of the ACCC’s NBN Wholesale Market Indicators Report, almost 43,000 users moved away from 100/40Mbps plans during the quarter. At the same time, over 113,000 users moved onto NBN’s 100/20Mbps plan, labelled Home Fast, over 3,000 took up the 250/25Mbps Home Superfast plan, and the number of users on 500-1000Mbps/50Mbps plans increased by 2,600.
    This is a significant increase on the 45,000 customers that took up the new Home plans in the prior instalment of the report.
    Not to be left out entirely, NBN’s older fast plans also saw some growth, with 379 extra connections on 250/100Mbps plans, an increase of 90 customers on 500/200Mbps, and 38 extra lines signed up to a 1000/400Mbps plan.
    “It is good to see a continuing increase in the number of products on offer, giving savvy consumers a range of differing plans to choose from,” ACCC chair Rod Sims said.
    Over the quarter, 387,410 new services were connected, with the total number of 12/1Mbps connections falling by 25,000, while 86,000 additional lines took up 25/5Mbps, and the 50Mbps plan total increased by 234,000.
    Following the merger of TPG and Vodafone, this is the first report that has combined Vodafone’s NBN market share with TPG. Vodafone’s NBN connections accounted for approximately 2% of the market.

    Telstra’s market share, meanwhile, was slightly down to 45.7%, followed by TPG Telecom with 24.4%, Optus had 15.4% of the market, Vocus had 7.2%, and cruising along with just under 4% market share was the recently listed Aussie Broadband.
    The ACCC added that the total capacity (CVC) purchased by retailers increased by 10% to just over 20Tbps.
    “CVC per user also increased over the quarter from 2.47Mbps to 2.59Mbps, a near 5% increase since last quarter,” the ACCC said.
    “The latest CVC figures reflect NBN Co’s extension of its temporary offer of additional 40% CVC capacity to RSPs, at no additional cost, in response to the COVID pandemic.”
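    The ACCC’s two per-user CVC figures let you back out the rest of the picture; both inputs are rounded, so the results are approximate:

      # Back-of-envelope checks on the ACCC's CVC figures (inputs are rounded,
      # so outputs are approximate).
      total_cvc = 20e12                              # just over 20Tbps, in bits/sec
      per_user_now, per_user_prev = 2.59e6, 2.47e6   # per-user CVC, in bits/sec

      print(f"CVC per user up {per_user_now / per_user_prev - 1:.1%}")  # ~4.9%
      print(f"implied services: ~{total_cvc / per_user_now / 1e6:.1f} million")  # ~7.7m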
    NBN said last month it would dial down its CVC boost over the coming two months.
    Using a new baseline, calculated as the difference between a retailer’s capacity usage at the beginning of the September billing period and its usage in February, NBN will offer retailers 75% of that difference in December, and 50% in January next year.
    “NBN Co’s tapering of COVID-19 CVC Credit offer to internet retailers recognises that peak data demand is returning to normal forecast levels of growth,” the company said at the time.
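    The credit formula described above reduces to a simple fraction of each retailer’s pandemic-era growth in CVC usage. A sketch of the stated arithmetic, with figures invented for illustration:

      # NBN's tapering CVC credit, per the stated formula: the baseline is a
      # retailer's September-period usage minus its February usage, and the
      # credit is 75% of that in December, 50% in January. Figures below are
      # invented for illustration.
      def cvc_credit(feb_usage_gbps: float, sept_usage_gbps: float, month: str) -> float:
          baseline = sept_usage_gbps - feb_usage_gbps
          return baseline * {"december": 0.75, "january": 0.50}[month]

      print(cvc_credit(100.0, 140.0, "december"))  # 30.0 Gbps credited
      print(cvc_credit(100.0, 140.0, "january"))   # 20.0 Gbps credited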

  • Smartphones become a lifeline for poor Brazilians

    The majority of financially vulnerable Brazilians perform key daily tasks, including study and work, exclusively via their smartphones, according to a study from the Center of Studies on Information and Communication Technologies (Cetic.br), the research arm of the Brazilian Network Information Center (NIC.br).
    This is the latest study in a research series on users’ technology habits since the emergence of the pandemic, which draws on indicators from the center’s pre-Covid studies of tech adoption.
    The sample size of the research is 101 million Internet users, which corresponds to 83% of Brazilian users aged 16 or over. The latest survey was carried out between September 10 and October 1. It considers the Brazilian socioeconomic class system, ranging from the elite (class A) and the upper-middle class (class B), through the lower middle class (class C), to the working-class poor (class D) and the extremely poor and unemployed (class E).
    According to the survey results, 74% of Internet users in Brazil from the bottom of the socioeconomic pyramid, classes D and E, access the web exclusively via smartphones. By comparison, 11% of wealthy Brazilians, from classes A and B, use phones exclusively to access the Internet.
    Exclusive use of smartphones for study is also more frequent among the poorest Brazilians: 54% stated they only use their phones to carry out educational activities remotely. This compares with 43% of users from class C, and 22% among wealthier Brazilians.
    READ MORE: Pandemic puts spotlight on digital inequality in Brazil
    Conversely, the use of notebooks, desktops and tablets as the main type of equipment for remote education is greater among the classes A and B (66%). Use of such types of equipment is less likely among class C students (30%) and the most financially vulnerable (11%).

    “The lack of digital resources to access classes and remote activities is one of the main aspects that can affect the continuity of educational routines during the pandemic”, said Alexandre Barbosa, manager at Cetic.br.
    “The disparities in access to ICT among students of different socioeconomic profiles also create unequal opportunities for learning”, he added. Barriers cited by participants around remote education included the difficulty to address questions with teachers (38%) and the lack of, or low quality of the Internet connection (36%).
    Among wealthier Brazilians, the main reasons cited for not following remote educational activities were not enjoying studying remotely or not managing to do so from a distance (43%), household duties (38%), and lack of motivation (35%).
    Among users on lower incomes, often-cited reasons for not studying online included the need to look for work (63%), household chores (58%), and the lack of equipment to access classes (48%).
    When it comes to work, the Cetic.br research found that approximately 23 million people were working remotely. Over half of Brazilian remote workers (52%) belong to classes A and B, and most used a notebook to work; 84% of Internet users who worked remotely also used their smartphones to do so.
    Separate research from Cetic.br released in August found that there has been an increase in Internet access among Brazil’s classes D and E. This was mostly driven by e-commerce, entertainment, education and digital access to government services.