More stories

  • ACCC corrects its video conferencing Critical Services Report

    The Australian Competition and Consumer Commission (ACCC) has reissued the Critical Services Report it published on July 8, following an uproar from vendors over the consumer watchdog’s claim that they used foreign servers for Australian customers.
    The ACCC had pointed out that Google Meet, Skype, and Microsoft Teams all had lower latency than Zoom, GoToMeeting, and Webex, a difference the watchdog attributed to the latter trio’s use of overseas servers.
    After publication, Zoom and Cisco contested the findings, with both stating they had data centres in Australia.
    “Zoom and Cisco advised us that in addition to hosting video conferences on servers based overseas, they do host some video conferences on servers based in Australia, depending on the amount of traffic at any given time,” the ACCC said in a correction issued on Friday afternoon.
    “It is not known how much traffic is off-loaded onto international servers.”

    The ACCC said its report was based on free accounts with the services.
    “Whether a video conference is hosted from a domestic or international server can depend on such factors as where the account is created, whether it is a basic or premium (paid) account, and the overall demand at the time,” it said.
    The original ACCC report used over 850 samples for Zoom and Webex.
    The last week of July 2020 has not been a banner week for the ACCC.
    On Monday, the watchdog announced it was filing legal action against Google for allegedly misleading Australian consumers when it started combining users’ Google profile data with their activity on websites that used DoubleClick to display ads.
    In doing so, the ACCC used an example that Google corrected.
    “An earlier version of this media release used a hypothetical example that suggested that Google used information about users’ health to personalise or target advertisements,” a notice now reads.
    “Google says that it does not show personalised ads based on health information. This example has been removed.”
    Then on Thursday, the Federal Court dismissed the ACCC’s appeal against a judgment that found TPG did not make misleading representations about its prepayment protocols.

  • Department says 'vast majority' of FttN lines to get 25Mbps speeds in December

    An NBN FttN node getting a Nokia line card installed
    Image: Corinne Reichert/ZDNet
    With the 18-month period of co-existence — where fibre-to-the-node (FttN) infrastructure also needs to support legacy services over copper, such as ADSL — coming to an end across Australia, the Department of Infrastructure, Transport, Regional Development and Communications has said it expects the “vast majority” of FttN connections to hit the minimum 25Mbps speed on the National Broadband Network (NBN).
    By comparison, NBN only guaranteed speeds of 12Mbps in co-existence.
    “NBN Co Limited has indicated that, as at 1 July 2020, co-existence has ended on 5,750 nodes out of a total of 27,933 nodes in the fibre to the node footprint,” the Department said in response to questions from the Joint Standing Committee on the National Broadband Network.
    “The vast majority of fibre to the node lines are expected to achieve a peak speed of at least 25 megabits per second by December 2020.”
    The department further said that as of May, almost 140,000 FttN premises were not capable of hitting the 25Mbps mark.

    “This reduction from the figures NBN Co provided in April 2019 mainly reflects the company’s progress on ending co-existence, but also the impact of ongoing network optimisation work,” the department said.
    “Where the network is not capable of providing the minimum wholesale download speeds after co-existence has ended, NBN Co will take action to rectify any issues in its network so that the requirements of the Statement of Expectations are met.”
    In response to another question, the department said NBN was unable to proceed with 120 fixed wireless sites, and that the company had also decided to move premises on 26 fixed wireless sites onto its fixed line technologies.
    “The changes mean that around 22,000 premises that were planned to be served by fixed wireless or were served by fixed wireless or satellite will instead be served by fixed line, while around 20,000 premises that were planned to be served by fixed wireless will instead be served by satellite,” it said.
    In June, NBN said it was unable to roll out fixed wireless to 500 premises in rural areas around Adelaide and users were shifted onto satellite instead.
    “Opposition to fixed wireless towers in this part of the Adelaide Hills continues to be an obstacle to securing a lease for a suitable site with a willing landowner,” NBN said at the time.
    It further added that 21 premises were moved from FttN technology onto its satellite service after rocky soil was encountered.
    “The presence of rock in the soil more than tripled the deployment cost for fixed line making it cost prohibitive,” it said.
    “Fixed wireless was not considered in this case due to the very low premises count meaning that deployment of this technology is also cost prohibitive.”
    The department said it was unaware of any plans to shift customers off satellite services after the initial network build is complete.
    It also detailed the fixed wireless sites that were shifted to satellite.
    Those sites were Beechmont, Beechmont North, Blackheath, Blue Mountain, Bonogin South, Boyland, Brigadoon, Broken Head, Bulahdelah East, Byers, Clagiraba, Clyde North, Cornelia, Crabbes Creek, Daylesford, Dooralong, Dulong, Dural East, Fairney View West, Federal, Ferguson, Fingal, Forrestdale, Gilston, Glenorie, Glenorie West, Grose Vale, Gwinganna South, Hellfire Pass, Humbug Scrub, Humpty Doo South, Kangaloon, Karnup South, Kenthurst West, Keppel Sands, Kilmore East, Kingsholme, Kooralbyn, Kulangoor, Lakesland, Lamington North, Lillian Rock, Maroota, Moolap, Moss Vale, Mount Samson West, Mount Walker, Mulgoa North, Mylor, Neranwood South, Nerong, Nethercote, Onkaparinga Hills, Ourimbah West, Pheasants Nest, Piggabeen, Pomborneit, Reesville, Robertson South, Smithfield South, Smiths Gully East, Springbrook North, Springbrook South, Stoneville, Surveyors Bay, Talbingo, Tamborine South, Theresa Park West, Tomewin, Toolern Vale, Torquay East, Uki North, Upper Brookfield, Verrierdale, Verrierdale East, Wedderburn, Wellard, Wendoree Park, Wisemans Ferry, Witta, Wolvi, Woorabinda and Wooragee.
    At a committee hearing in June, the Department of Finance attempted to pour cold water on the idea of writing down the value of NBN, stating that any movements in value are dictated by accounting standards.
    “The value of NBN in government-funded statements is in line with the Australian accounting standards. So there isn’t an ability for the government to unilaterally choose to write down or determine the value of NBN,” Department of Finance first assistant secretary for financial analysis, reporting and management, governance and resource management Tracy Carroll told the committee.
    “For the financial year most recently completed, NBN is valued on the basis of the net assets. There’s a process being undertaken right now, for the financial year that will end on 30 June 2020, to consider what’s the appropriate value for the NBN that would be reflected.”
    Carroll added that there is no expectation of a write-down this year.

  • Multiple Tor security issues disclosed, more to come


    Over the past week, a security researcher has published technical details about two vulnerabilities impacting the Tor network and the Tor browser.
    In blog posts last week and today, Dr. Neal Krawetz said he was going public with details on two alleged zero-days after the Tor Project had repeatedly failed to address multiple security issues he reported over recent years.
    The researcher also promised to reveal at least three more Tor zero-days, including one that can reveal the real-world IP address of Tor servers.
    The Tor Project did not reply to a request for comment on Dr. Krawetz’s intentions, nor did it provide additional details on its stance on the matter.
    The first Tor security issue
    Dr. Krawetz, who operates multiple Tor nodes himself and has a long history of finding and reporting Tor bugs, disclosed the first Tor security issue last week.

    In a blog post dated July 23, the researcher described how companies and internet service providers could block users from connecting to the Tor network by scanning network connections for “a distinct packet signature” that is unique to Tor traffic.
    This signature could be used to stop Tor connections from being established, effectively banning Tor altogether, something oppressive regimes would be very likely to abuse.
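    For readers curious what that kind of filter looks like in practice, here is a conceptual sketch of a real-time packet inspector. It is written under stated assumptions: the looks_like_tor_client_hello() predicate below is a crude placeholder, not the actual signature from Dr. Krawetz’s post, which this article does not reproduce.

```python
# Conceptual sketch only: watch TLS handshakes and flag flows whose handshake
# matches a "Tor-like" fingerprint. The predicate is a deliberate placeholder,
# not the signature described in the research. Requires scapy and root access.
from scapy.all import IP, Raw, TCP, sniff


def looks_like_tor_client_hello(payload: bytes) -> bool:
    # Placeholder heuristic: any TLS ClientHello (record type 0x16, handshake
    # type 0x01). A real detector would check fields distinctive to Tor's TLS.
    return len(payload) > 6 and payload[0] == 0x16 and payload[5] == 0x01


def inspect(pkt) -> None:
    if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt.haslayer(Raw):
        if looks_like_tor_client_hello(bytes(pkt[Raw].load)):
            print(f"possible Tor handshake: {pkt[IP].src} -> {pkt[IP].dst}")
            # A blocking middlebox would drop or reset the flow at this point.


if __name__ == "__main__":
    sniff(filter="tcp", prn=inspect, store=False)
```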
    The second Tor security issue
    Earlier today, in a blog post shared with ZDNet, Dr. Krawetz disclosed a second issue. This one, like the first, allows network operators to detect Tor traffic.
    However, while the first issue could be used to detect direct connections to the Tor network (to Tor guard nodes), the second one can be used to detect indirect connections.
    These are connections that users make to Tor bridges, a special type of entry point into the Tor network that can be used when companies and ISPs block direct access to the Tor network.
    Tor bridges act as proxy points and relay connections from the user to the Tor network itself. Because they are sensitive Tor servers, the list of Tor bridges is constantly updated to make them difficult for ISPs to block.
    But Dr. Krawetz says connections to Tor bridges can also be easily detected using a similar technique of tracking specific TCP packets.
    “Between my previous blog entry and this one, you now have everything you need to enforce the policy [of blocking Tor on a network] with a real-time stateful packet inspection system. You can stop all of your users from connecting to the Tor network, whether they connect directly or use a bridge,” Dr. Krawetz said.
    Both issues are especially concerning for Tor users residing in countries with oppressive regimes.
    Dissatisfaction with the Tor Project’s security stance
    Dr. Krawetz is publishing these Tor issues because he believes the Tor Project does not take the security of its networks, tools, and users seriously enough.
    The security researcher cites previous incidents in which he reported bugs to the Tor Project, only to be told that it was aware of the issue and working on a fix that never actually shipped. These include:
    A bug that allows websites to detect and fingerprint Tor browser users by the width of their scrollbar, which the Tor Project has known about since at least June 2017.
    A bug that allows network adversaries to detect Tor bridge servers using their OR (Onion routing) port, reported eight years ago.
    A bug that lets attackers identify the SSL library used by Tor servers, reported on December 27, 2017.
    None of these issues has been fixed, which led Dr. Krawetz in early June 2020 to abandon his collaboration with the Tor Project and take his current approach of publicly shaming the organization into taking action.

    I’m giving up reporting bugs to Tor Project. Tor has serious problems that need to be addressed, they know about many of them and refuse to do anything. I’m holding off dropping Tor 0days until the protests are over. (We need Tor now, even with bugs.) After protests come 0days.
    — Dr. Neal Krawetz (@hackerfactor) June 4, 2020

    Updated at 20:30 ET, July 30:
    The Tor Project has responded to Dr. Krawetz’s two blog posts with a lengthy reply addressing each issue, which we are reproducing in full below. In summary, the Tor Project says it is aware of the issues the researcher reported, but disagrees on the threat they pose to users, arguing that the attacks cannot be carried out at scale.
    “We have been working on the first issue raised in the blog post published 7/23 (scrollbar width) here: https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/22137. The blog post claims that the scrollbar width of a Tor Browser user can be used to distinguish which operating system they are using. There are other ways a Tor Browser user’s operating system can be discovered. This is known and publicly documented. When Tor Browser does not communicate the operating system of its user, usability decreases. Commonly used websites cease to function (ie, Google Docs). The security downside of operating system detection is mild (you can still blend with everybody else who uses that operating system), while the usability tradeoff is quite extreme. Tor Browser has an end goal of eliminating these privacy leaks without breaking web pages, but it is a slow process (especially in a web browser like Firefox) and leaking the same information in multiple ways is not worse than leaking it once. So, while we appreciate (and need) bug reports like this, we are slowly chipping away at the various leaks without further breaking the web, and that takes time.
    “The second claim in the first blog post published 7/23 outlines a way to recognize vanilla Tor traffic based on how it uses TLS with firewall rules. Fingerprinting Tor traffic is a well-known and documented issue. It’s an issue that has been discussed for more than a decade. (Example: https://gitlab.torproject.org/tpo/core/torspec/-/blob/master/proposals/106-less-tls-constraint.txt). Fixing the way Tor traffic can be fingerprinted by its TLS use is a very small step in the censorship arms race. We decided that we should not try to imitate normal SSL certs because that’s a fight we can’t win. Our goal is to help people connect to Tor from censored networks. Research has shown that making your traffic look like some other form of traffic usually leads to failure (http://www.cs.utexas.edu/~amir/papers/parrot.pdf). The strategy Tor has decided to take is better and more widely applicable, and that strategy is developing better pluggable transports. Tor has an entire anti-censorship team tackling this problem and has funding earmarked for this specific purpose.
    “The blog post published 7/30 is correct in suggesting that a finely-calibrated decision tree can be highly effective in detecting obfs4; this is a weakness of obfs4. However, what works in someone’s living room doesn’t necessarily work at nation-scale: running a decision tree on many TCP flows is expensive (but not impossible) and it takes work to calibrate it. When considering the efficacy of this, one also has to take into account the base rate fallacy: the proportion between circumvention traffic and non-circumvention traffic is not 1:1, meaning that false positives/negative rate of 1% (which seems low!) can still result in false positives significantly outweighing true positives. That said, obfs4 is certainly vulnerable to this class of attack. The post says “However, I know of no public disclosure for detecting and blocking obfs4.” There’s work in the academic literature. See Wang et al.’s CCS’15 paper: https://censorbib.nymity.ch/#Wang2015a. See also Frolov et al.’s NDSS’20 paper: https://censorbib.nymity.ch/#Frolov2020a. The blog post cites Dunna’s FOCI’18 paper to support his claim that the GFW can detect obfs4. This must be a misunderstanding. On page 2, the paper says: “We find that the two most popular pluggable transports (Meek [7] and Obfs4 [18]) are still effective in evading GFW’s blocking of Tor (Section 5.1).” The blog post also cites another post to support the same claim: https://medium.com/@phoebecross/using-tor-in-china-1b84349925da. This blog post correctly points out that obfs4 bridges that are distributed over BridgeDB are blocked whereas private obfs4 bridges work. This means that censors are not blocking the obfs4 protocol, but are able to intercept bridge information from our distributors. One has to distinguish the protocol from the way one distributes endpoints.”
    “The findings published today (7/30) are variants of existing attacks (which is great!) but not 0-days. They are worth investigating but are presented with little evidence that they work at scale.”
    The Tor Project also disagreed with Dr. Krawetz’s classification of the issues he detailed on his blog as zero-days. The title has been updated accordingly.
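    The base-rate point in the Tor Project’s reply is easy to make concrete with a quick back-of-the-envelope calculation. The traffic mix and detector accuracy below are assumed for illustration, not measured:

```python
# Illustrative base-rate calculation with assumed numbers: a detector with a 1%
# false-positive rate still drowns in false alarms when the traffic it hunts
# for is rare compared with ordinary traffic.
total_flows = 1_000_000        # assumed TCP flows seen by a censor
obfs4_share = 0.001            # assume 1 in 1,000 flows is circumvention traffic
true_positive_rate = 0.99      # detector catches 99% of real obfs4 flows
false_positive_rate = 0.01     # and misfires on 1% of everything else

obfs4_flows = total_flows * obfs4_share
other_flows = total_flows - obfs4_flows

true_positives = obfs4_flows * true_positive_rate
false_positives = other_flows * false_positive_rate
precision = true_positives / (true_positives + false_positives)

print(f"flows flagged: {true_positives + false_positives:,.0f}")
print(f"actually obfs4: {true_positives:,.0f} ({precision:.1%} of flags)")
# With these assumptions, only about 9% of flagged flows are genuinely obfs4.
```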

  • Patch now: Cisco warns of nasty bug in its data center software

    Cisco has disclosed a critical security vulnerability in Cisco Data Center Network Manager (DCNM), a key piece of Cisco’s data-center automation software for its widely used MDS and Nexus line of networking hardware.  
    During internal testing, Cisco discovered that a bug in the REST application programming interface (API) of DCNM could allow anyone on the internet to bypass the web interface’s login and carry out actions as if they were an administrator of the device.

    The newly disclosed bug, tagged as CVE-2020-3382, is similar to the static encryption key flaw in DCNM that an external researcher discovered earlier this year. 
    SEE: IT Data Center Green Energy Policy (TechRepublic Premium)
    The static key lets attackers generate a valid session token on an affected device and do whatever they want through the REST API with administrative privileges.

    “The vulnerability exists because different installations share a static encryption key. An attacker could exploit this vulnerability by using the static key to craft a valid session token. A successful exploit could allow the attacker to perform arbitrary actions through the REST API with administrative privileges,” explains Cisco in the advisory. 
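    To illustrate the general class of flaw the advisory describes, here is a minimal, hypothetical sketch. The token scheme below is invented for illustration and is not DCNM’s actual implementation; the point is only that a signing key shared by every installation lets anyone who obtains it mint tokens every installation will accept.

```python
# Hypothetical sketch (not Cisco's code): why a static, shared signing key is
# dangerous. If every installation signs session tokens with the same key, an
# attacker who extracts that key from the software can forge an admin token.
import hashlib
import hmac

STATIC_KEY = b"same-key-shipped-in-every-install"  # the root of the problem


def issue_token(username: str, role: str) -> str:
    payload = f"{username}:{role}"
    sig = hmac.new(STATIC_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"


def verify_token(token: str) -> bool:
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(STATIC_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)


# Any attacker who knows STATIC_KEY can mint a token the server trusts:
forged = issue_token("attacker", "admin")
print(verify_token(forged))  # True -- REST API calls would now run as admin
```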
    Since there are no workarounds, admins need to install the latest versions of Cisco’s DCNM software releases to close off the bug. However, Cisco notes it is not aware of attackers exploiting this flaw yet.
    The bug has a severity rating of 9.8 out of a possible 10, and affects DCNM software releases 11.0(1), 11.1(1), 11.2(1), and 11.3(1).
    Cisco also reported a critical flaw with a severity rating of 9.9 in the web interface of its Cisco SD-WAN vManage software. 
    The bug, tracked as CVE-2020-3374, lets an authenticated attacker on the internet bypass authorization checks on a system. From there, attackers could reconfigure the system, knock it offline, or access sensitive information.
    “The vulnerability is due to insufficient authorization checking on the affected system. An attacker could exploit this vulnerability by sending crafted HTTP requests to the web-based management interface of an affected system,” explained Cisco.  
    “A successful exploit could allow the attacker to gain privileges beyond what would normally be authorized for their configured user authorization level. The attacker may be able to access sensitive information, modify the system configuration, or impact the availability of the affected system.”
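    As a generic illustration of what “insufficient authorization checking” means in practice (hypothetical code, not Cisco’s), consider a handler that confirms a request is authenticated but never checks the caller’s role before applying configuration changes:

```python
# Hypothetical example of insufficient authorization checking (not vManage's
# code): the endpoint verifies that a valid session exists, but never checks
# whether that session is allowed to perform the privileged action.
SESSIONS = {"token-123": {"user": "readonly-operator", "role": "viewer"}}


def apply_config(cfg: dict) -> None:
    print("applying configuration:", cfg)


def reconfigure_device(headers: dict, new_config: dict) -> str:
    session = SESSIONS.get(headers.get("X-Session-Token", ""))
    if session is None:
        return "401 Unauthorized"   # authentication IS checked...
    # ...but authorization is not: nothing verifies session["role"] == "admin",
    # so any logged-in user can push configuration changes.
    apply_config(new_config)
    return "200 OK"


print(reconfigure_device({"X-Session-Token": "token-123"}, {"admin_pass": "pwned"}))
```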
    SEE: Cisco releases security fixes for critical VPN, router vulnerabilities
    Again, there are no workarounds, so admins need to install fixed releases from various software trains of Cisco SD-WAN vManage. Devices using releases 18.3 or prior will need to migrate to fixed releases from newer trains.
    Fortunately, this bug was also discovered during a Cisco investigation with a customer. The company is not aware of public exploits for the vulnerability.  

  • Comcast's broadband service gains in Q2 amid COVID-19; media, video not so much

    Comcast added 340,000 high-speed residential broadband customers at a rapid clip in the second quarter to offset video losses and weakness in its media business. 
    The results highlight how broadband has become an essential service, much like electricity and water, amid remote work and education. The COVID-19 pandemic is accelerating shifts in cable consumption.
    Comcast reported second quarter net income of $2.99 billion, or 65 cents a share, on revenue of $23.72 billion, down nearly 12% from a year ago. Non-GAAP earnings were 69 cents a share. Wall Street was expecting non-GAAP earnings of 55 cents a share on revenue of $23.58 billion. As companies have withdrawn guidance, estimates have been off by a wide margin during the second-quarter earnings season.
    Overall, the resilience in Comcast’s business comes from the cable unit. Comcast added 217,000 cable customer relationships and 323,000 high speed internet net additions. The breakdown, which doesn’t include more than 600,000 high-risk or free Internet Essentials accounts, includes:
    A net gain of 340,000 residential broadband subscribers.
    A net loss of 17,000 business broadband subscribers.
    A net loss of 427,000 video customers.
    A net gain of 126,000 wireless lines.
    Add it up and Comcast is gaining single-product subscribers as its bundles drop off.

    In Comcast’s media unit, NBCUniversal saw revenue fall 25.4% and adjusted EBITDA fall 29.5%. Cable networks, films, and theme parks all had double-digit percentage revenue declines. Broadcast television revenue was down 1.6%. Sky revenue was down 12.9%.

  • Brazilian gamers see improvement in broadband latency and speed

    Brazilians have seen recent improvements in fixed broadband latency as demand for online gaming rises during the Covid-19 outbreak, a new study has found.
    Latency – the reaction time of a connection – varies between countries across Latin America, particularly when it comes to fixed broadband. Latency is a key metric in gaming and determines much of the user’s experience in terms of the absence of lags during gameplay. Gamers ideally aim for a latency of less than 50 milliseconds and preferably less than 30ms.
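    For anyone who wants to check their own connection against those thresholds, here is a minimal sketch that approximates round-trip latency by timing a TCP handshake. The host and port are placeholders; real game latency is usually measured with ICMP ping or the game’s own UDP traffic, so treat this as a rough proxy.

```python
# Minimal sketch: approximate round-trip latency by timing a TCP handshake.
# Placeholder host/port; a TCP connect takes roughly one round trip.
import socket
import time


def tcp_rtt_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # we only care about how long the handshake took
    return (time.perf_counter() - start) * 1000


if __name__ == "__main__":
    rtt = tcp_rtt_ms("example.com")
    verdict = "comfortable for gaming" if rtt < 50 else "likely to feel laggy"
    print(f"approximate RTT: {rtt:.0f} ms ({verdict})")
```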
    According to data from Ookla’s Speedtest Intelligence, gamers in Brazil had the lowest mean fixed broadband latency, relevant for games played on PCs and consoles, at 19 ms during Q2 2020, down from 23 ms in the same period of 2019. By comparison, Colombia had the highest fixed broadband latency at 43 ms. The study noted that investments in fiber contributed to the recent improvements.

    Mobile latency, which is relevant to games played through devices such as smartphones, did not vary as much: Argentina had the best latency on mobile at 40 ms, followed closely by Chile at 41 ms. Brazil had mobile latency at 46 ms and Colombia had the highest latency during this period at 47 ms.
    As well as latency, the report also included data on internet speeds during the pandemic, which are also important to gamers. While some countries experienced a dip in speeds in March, the study noted that, on the whole, internet speeds on fixed broadband have increased in Argentina, Brazil, Chile, Colombia and Mexico since the week of March 2, 2020.

    In addition, the report noted that apart from Chile, the largest Latin American economies have also experienced an increase in mobile speeds, ranging from a 2% increase in Colombia to a 19% increase in Mexico.
    According to a separate study by Comscore, Brazil is the world’s fourth-largest market for games after India, the United States and China. The report estimates there are 84 million gamers in Brazil – this is equivalent to 70% of the country’s online population, currently estimated at 120 million. Of that total, 64.3 million only use mobile devices to play games.

  • Juniper extends AI-driven network insights to WAN and branch locations

    Juniper Networks on Wednesday announced it’s extending AI-driven insights to WAN and branch networks with a new cloud-based service called Juniper Mist WAN Assurance. Additionally, the company is introducing a new conversational interface to networking operations, enabling either IT teams or end users to more easily communicate with Marvis, Juniper’s virtual network assistant. 

    The new capabilities are a part of Juniper’s growing focus on AI-driven operations, which it stepped up last year with its acquisition of Mist Systems. Mist and Juniper have already delivered AI-driven networking operations to the enterprise with Wi-Fi, wired, and security services. With the addition of WAN, Juniper says it can provide customers with end-to-end AI-enhanced visibility.
    Ultimately, the goal is to use AI to shift the focus from network and application behavior to the actual user experience. 
    The new Juniper Mist WAN Assurance service streams key telemetry data from Juniper SRX devices to the cloud-based Mist AI engine. This enables customizable WAN service levels, and it allows for a proactive response to anomaly detections. The service works with Marvis to correlate events across the LAN, WLAN and WAN for rapid fault isolation and resolution. 
    “Today when large enterprises have a problem, they don’t know where to look,” Sujai Hajela, Mist co-founder and Juniper SVP, said to ZDNet. Juniper Mist WAN Assurance aims to solve that problem. 

    Meanwhile, with the new conversational interface for Marvis, customers will be able to learn about their networks with natural language questions such as, “What was wrong with Bob’s Zoom call yesterday?”
    With the new interface, Marvis can provide answers to questions based on its access to a large knowledge base, with interactive queries for further help. It leverages reinforcement learning to get better at answering questions over time. 
    Since the Mist acquisition, Juniper has started rearranging its enterprise business unit around the notion of “AI-driven enterprise,” Hajela said. The reformatted business unit, led by Hajela, brings wired access, wireless access and WAN under common leadership with dedicated sales, marketing and engineering. 
    “The only way to quantify end user experience is to use AI,” he said, with a “cloud stack built from the ground up to handle AI. We are now extending that paradigm across Juniper.”

  • Bravo ACCC: Telstra begins flogging NBN overprovisioning as 15% speed boost

    What a so-called 50Mbps plan now delivers.
    Image: Telstra
    Anyone with a passing knowledge of how networking layers work, combined with a tiny amount of experience of how capitalism and marketing operate, could see that NBN overprovisioning would lead to a hell of a lot of spin from Australia’s telcos.
    Last year, the Australian Competition and Consumer Commission (ACCC) decided NBN needed to provide extra layer 2 capacity so that tests the ACCC runs at layer 7 would match the speeds claimed by telcos.
    It is a true apples and oranges comparison, but the ACCC has previously told ZDNet it is happy with its decision.
    The end result was NBN deciding to overprovision its plans by 15%, except for those where it doesn’t, currently its gigabit options.
    So Australia has NBN overprovisioning because the ACCC thinks TCP/IP headers get in the way of its test results, and it applies to most plans, but not all. Remember, this is all meant to be easy for consumers to understand.

    As the overprovisioning appears on the NBN, the spin from the telcos has kicked in, with Telstra promoting the overprovisioning as a 15% speed boost.
    “We want to give our customers the best NBN experience possible so we’re rolling out changes that NBN Co has made available to help more customers get faster speeds,” Telstra said on Wednesday.
    “When data is carried across the internet, bandwidth is used to carry that data to its intended destination (known as ‘overheads’) which reduces the speeds available to you.
    “As part of changes to the way NBN Co manages speeds over its network, more bandwidth (or speed) has been made available to compensate for these overheads by allowing services to run up to 15% faster (excluding Fixed Wireless).”
    Presumably a speed boost is an easier sell to consumers than a network-wide knob that NBN had to turn at the behest of the ACCC.
    And telcos will very likely get away with it. In fact, consumers are going to embrace this “speed boost”.
    The frugal warriors over at OzBargain picked up on the new provisioning last week, and it certainly was not cast in a negative light.
    “Free Speed Boost for NBN Fixed Line Customers,” said an entry from a user called tightarse.
    “From the outset, please note this may not work for everyone but has certainly worked for me and a few others I’ve asked to test … Not sure what the ‘15% overprovisioning’ really means.”
    Far be it from me to criticise people being excited about a faster internet connection; bandwidth is bandwidth, after all.
    Thanks to the ACCC, a 50Mbps layer 2 plan in Australia could be up to 55Mbps on the fixed line footprint. But it’s definitely not going beyond somewhere around 47Mbps on fixed wireless, if you are lucky, and on satellite you cannot get that speed at all.
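    For the curious, here is a rough, illustrative calculation of the protocol overheads in question. The framing assumptions are mine and vary by technology and connection type, but under these numbers a 50Mbps layer 2 service with no overprovisioning lands at roughly the 47Mbps figure mentioned above:

```python
# Rough, illustrative calculation (framing assumptions are hypothetical and
# vary by connection type) of how much layer 2 bandwidth goes to protocol
# headers before it shows up as application-layer throughput.
MTU = 1500               # IP packet size in bytes (typical)
TCP_HEADER = 32          # 20-byte TCP header plus 12 bytes of options
IP_HEADER = 20           # IPv4, no options
ETH_OVERHEAD = 18 + 4    # Ethernet header/FCS plus a VLAN tag
PPPOE_OVERHEAD = 8       # only on PPPoE services; set to 0 for IPoE

payload = MTU - TCP_HEADER - IP_HEADER
frame = MTU + ETH_OVERHEAD + PPPOE_OVERHEAD

layer2_rate_mbps = 50
goodput_mbps = layer2_rate_mbps * payload / frame
overhead_pct = 100 * (1 - payload / frame)
print(f"{layer2_rate_mbps}Mbps at layer 2 -> about {goodput_mbps:.0f}Mbps at "
      f"the application layer ({overhead_pct:.1f}% lost to headers)")
```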
    Breathe in the simplicity. How good is making things easier to understand?