More stories

  •

    HPE CEO Neri sees steady demand as enterprise customers shift priorities

    Hewlett Packard Enterprise CEO Antonio Neri has firsthand knowledge of COVID-19: he was infected himself. Now he’s steering HPE’s supply chain, employee base, and culture through a pandemic.
    I caught up with Neri to talk demand, edge computing, the promise of HPE software, and the company’s pivot to selling its entire portfolio as a service.
    Here are a few themes and highlights from our talk; the full conversation is in the video.
    Returning to work during the COVID-19 pandemic. Neri had COVID-19 and recovered quickly. HPE has likewise recovered its supply chain and worked through the backlog from last quarter. Employees have been working remotely across HPE’s 172 markets, and customers are looking to support their own employees with a “cloud native approach.”
    “We have reopened 93 sites around the globe, but none in the United States; it’s zero. And we are in what we call phase one, which is up to 20% of the employees are allowed to come back to the office, but we’re working through that.”

    The future of work. Neri said:

    We are not going to be going back to the way we used to where people will have a desk and come and do the job. I think the office will be totally redesigned to provide a more collaborative and innovative experience. Probably up to 50% of the workforce will never return to the office to do their job on a daily basis. Obviously, all of them would come back to the office to do innovation center sessions, collaboration sessions, and then ultimately for social aspect of it. It is an opportunity to change the way we work and to provide a better experience, but also to rationalize our footprint.

    Neri added that every worker has a different family situation and school arrangements are going to be challenging. HPE and other companies will have to support those employees with flexibility while retaining the corporate culture.
    The demand picture for HPE. “Demand has been steady, but what we see is the demand is shifting to new areas or different areas for that matter,” he said.
    Neri said any infrastructure or software that covers IT resiliency is garnering attention. Security is another key area, along with cloud, virtual desktop infrastructure, and high-performance computing. “Then you have the campus and then you have the branch. And now you have these micro branches, which is all our offices around the globe, our home offices. And so security is essential,” said Neri.
    These branch offices also serve as edge locations. HPE bought Silver Peak to complement its Aruba portfolio. Neri noted that the edge will provide data for analytics and ultimately new business models.
    HPE as an edge company. Neri said:

    We want to be known as the edge-to-cloud platform as-a-service company. And in that there are three major components. One is as-a-service, because obviously customers want to consume their solutions in a more consumption-driven way, paying only for what you consume. And that experience, at the core, is simplicity and automation for all the apps and data, wherever they live.
    Obviously, the edge is the next frontier. And we said two years ago that the enterprise of the future will be edge-centric, cloud-enabled and data-driven. Well, guess what? The future is here now. The edge is where we live and work. And so for us, as customers accelerate the digital transformation, the first step in that journey is connectivity. And this is where being an edge platform is essential.

    What about compute and storage? Neri said compute and storage remain HPE’s core business and have only grown in importance as data proliferates.
    HPE as a software firm. Ezmeral, HPE’s new software brand, is an alternative that can compete with VMware and Red Hat OpenShift. There are other software assets in the portfolio, such as GreenLake and HPE InfoSight. I asked Neri about the challenge of seeing HPE as a software firm.
    Neri said:

    Our strategy is to be true open source, autonomous, intelligent, and secure, and be able to connect all their edges and all their clouds in a very automated way. And we have already won quite a significant number of customers because of HPE Ezmeral. There is a large financial institution here in the Bay Area that needs to run Splunk as a workload on prem because of the amount of data. They don’t want to pay the cost of egress and data back and forth, and they want a true cloud native approach. And by the way, we deploy that because of HPE Ezmeral with our compute and storage solutions, delivered as-a-service via HPE GreenLake. Those are the type of customers we’re going to track going forward.

    It’s about land and expand with new workloads and be able to provide managed services for the entire estate.

    Can everything-as-a-service drive revenue growth? Neri said:

    It’s a journey. Obviously, there is a component of customer acceptance. But the fact that customers have already embraced the consumption model by shifting some workloads to the cloud tells you the OPEX model is being adopted more and more, not just a CAPEX model.
    We see the growth, we see the momentum, but obviously when you are (a large) company, to pivot everything in that model will take time.

  •

    This $35 accessory is a must-have for MacBook, iPad Pro, and Windows laptop users

    I test a lot of USB-C accessories, but few end up as a permanent part of my kit. There’s one accessory, though, that I’ve had for over a year now, and I use it pretty much daily on my MacBook, iPad Pro, and any USB-C-equipped Windows 10 laptop I happen to be using.
    It’s a hub. A small hub that fits easily into a pocket or bag. And best of all, it’s only $35.99.
    It’s the Anker 7-in-1 USB-C hub.

    On the connectivity front, the hub offers a single 4K 30Hz HDMI port, a 100W Power Delivery USB-C port, a USB-C data port, microSD and SD card reader slots, and two USB 3.0 ports.

    It also comes with a 20cm USB-C cable attached. Initially I thought this would be a weak link, since if the cable broke the hub would be trash, but after over a year of hard use it’s still like new.
    Anker quality shines through.
    It also comes with a carry pouch that keeps it scratch-free and keeps debris out of the ports.
    The only port that’s missing is an Ethernet port, but to be honest I can’t remember the last time I needed to use one. If you want a very similar portable hub that has an Ethernet port, Anker makes an 8-in-1 hub with that feature for $59.99.

    I’ve used this hub on dozens of devices and taken it with me on long trips, and it has not let me down once.
    All Anker hubs come with an 18-month worry-free warranty in the event of something going wrong.
    I’m curious to know what must-haves you use. Let me know!

  •

    Amazon vs Elon Musk's SpaceX: Bezos' internet from space plan moves a step closer

    Amazon has received US approval for its Project Kuiper broadband satellite constellation and says it will invest more than $10bn in the initiative that will bring it into direct competition with SpaceX’s Starlink business. 
    The Federal Communications Commission on Thursday approved Amazon’s plan, revealed in mid-2019, to launch 3,236 satellites into low-Earth orbit and deliver broadband to underserved parts of the world.  

    The FCC decision marks the first major development in Amazon’s satellite broadband plans in months. Over the past year, Elon Musk’s SpaceX has launched batches of 60 Starlink broadband satellites on the back of SpaceX rockets at a rate of about one launch per month.
    SpaceX currently has just under 600 Starlink satellites in orbit and is gearing up to launch its private beta with users in North America. 
    In that time, potential satellite broadband rival OneWeb filed for Chapter 11 bankruptcy in March after failing to secure additional funding, having launched just 76 satellites.

    OneWeb was given a controversial $500m lifeline by a consortium including India’s Bharti Global and the UK government in July, which may allow it to reach its target of launching 600 internet-beaming satellites. 
    Amazon CEO Jeff Bezos takes a keen interest in spaceflight, founding aerospace company Blue Origin in 2000. However, it has not yet been confirmed which company will be responsible for launching the Project Kuiper satellites. 
    Amazon’s Kuiper says its broadband service can begin once 578 satellites have been launched. It plans to deploy the system in five phases, with its satellites operating at altitudes of 367 miles (590km), 379 miles (610km), and 391 miles (630km), according to the FCC’s approval document.
    Like SpaceX, Amazon says its service will provide high-speed, low-latency broadband services to places where traditional fiber or wireless network providers haven’t been able to reach. 
    The service will include gateway earth stations, end-user ground terminals, and satellite operations centers.
    It will also provide backhaul solutions for wireless carriers to broaden coverage of LTE and 5G service to new regions. 
    “There are still too many places where broadband access is unreliable or where it doesn’t exist at all. Kuiper will change that. Our $10bn investment will create jobs and infrastructure around the United States that will help us close this gap,” said Dave Limp, senior vice president of Amazon.
    The FCC’s order requires that Amazon launch and operate half of its satellites by July 30, 2026 and then launch the remainder of the constellation by July 30, 2029.

  •

    ACCC corrects its video conferencing Critical Services Report

    The Australian Competition and Consumer Commission (ACCC) has reissued its Critical Services Report that was released on July 8, following uproar from vendors about the consumer watchdog saying they used foreign servers for Australian customers.
    The ACCC had pointed out that Google Meet, Skype, and Microsoft Teams all had lower latency than Zoom, GoToMeeting, and Webex, a difference the consumer watchdog attributed to the latter trio’s use of servers overseas.
    After publication, Zoom and Cisco contested the findings, with both stating they had data centres in Australia.
    “Zoom and Cisco advised us that in addition to hosting video conferences on servers based overseas, they do host some video conferences on servers based in Australia, depending on the amount of traffic at any given time,” the ACCC said in a correction issued on Friday afternoon.
    “It is not known how much traffic is off-loaded onto international servers.”

    The ACCC said its report was based on free accounts with the services.
    “Whether a video conference is hosted from a domestic or international server can depend on such factors as where the account is created, whether it is a basic or premium (paid) account, and the overall demand at the time,” it said.
    The original ACCC report used over 850 samples for Zoom and Webex.
    The last week of July 2020 has not been a banner week for the ACCC.
    On Monday, the watchdog announced it was filing legal action against Google for allegedly misleading Australian consumers when it started combining users’ Google profile data with their activity on websites that used DoubleClick to display ads.
    In doing so, the ACCC used an example that Google corrected.
    “An earlier version of this media release used a hypothetical example that suggested that Google used information about users’ health to personalise or target advertisements,” a notice now reads.
    “Google says that it does not show personalised ads based on health information. This example has been removed.”
    By Thursday, the Federal Court dismissed its appeal against a judgment that found TPG did not make misleading representations about its prepayment protocols.

  •

    Department says 'vast majority' of FttN lines to get 25Mbps speeds in December

    With the 18-month period of co-existence — where fibre-to-the-node (FttN) infrastructure also needs to support legacy services over copper, such as ADSL — coming to an end across Australia, the Department of Infrastructure, Transport, Regional Development and Communications has said it expects the “vast majority” of FttN connections to hit the minimum 25Mbps speed on the National Broadband Network (NBN).
    By comparison, NBN only guaranteed speeds of 12Mbps in co-existence.
    “NBN Co Limited has indicated that, as at 1 July 2020, co-existence has ended on 5,750 nodes out of a total of 27,933 nodes in the fibre to the node footprint,” the Department said in response to questions from the Joint Standing Committee on the National Broadband Network.
    “The vast majority of fibre to the node lines are expected to achieve a peak speed of at least 25 megabits per second by December 2020.”
    The department further said that as of May, almost 140,000 FttN premises were not capable of hitting the 25Mbps mark.

    “This reduction from the figures NBN Co provided in April 2019 mainly reflects the company’s progress on ending co-existence, but also the impact of ongoing network optimisation work,” the department said.
    “Where the network is not capable of providing the minimum wholesale download speeds after co-existence has ended, NBN Co will take action to rectify any issues in its network so that the requirements of the Statement of Expectations are met.”
    In response to another question, the department said NBN was unable to proceed with 120 fixed wireless sites, and that NBN had also decided to move premises on 26 fixed wireless sites onto its fixed line technologies.
    “The changes mean that around 22,000 premises that were planned to be served by fixed wireless or were served by fixed wireless or satellite will instead be served by fixed line, while around 20,000 premises that were planned to be served by fixed wireless will instead be served by satellite,” it said.
    In June, NBN said it was unable to roll out fixed wireless to 500 premises in rural areas around Adelaide and users were shifted onto satellite instead.
    “Opposition to fixed wireless towers in this part of the Adelaide Hills continues to be an obstacle to securing a lease for a suitable site with a willing landowner,” NBN said at the time.
    It further added that 21 premises were moved from FttN technology onto its satellite service after encountering rocky soil.
    “The presence of rock in the soil more than tripled the deployment cost for fixed line making it cost prohibitive,” it said.
    “Fixed wireless was not considered in this case due to the very low premises count meaning that deployment of this technology is also cost prohibitive.”
    The department said it was unaware of any plans to shift customers off satellite services after the initial network build is complete.
    It also detailed the fixed wireless sites that were shifted to satellite.
    Those sites were Beechmont, Beechmont North, Blackheath, Blue Mountain, Bonogin South, Boyland, Brigadoon, Broken Head, Bulahdelah East, Byers, Clagiraba, Clyde North, Cornelia, Crabbes Creek, Daylesford, Dooralong, Dulong, Dural East, Fairney View West, Federal, Ferguson, Fingal, Forrestdale, Gilston, Glenorie, Glenorie West, Grose Vale, Gwinganna South, Hellfire Pass, Humbug Scrub, Humpty Doo South, Kangaloon, Karnup South, Kenthurst West, Keppel Sands, Kilmore East, Kingsholme, Kooralbyn, Kulangoor, Lakesland, Lamington North, Lillian Rock, Maroota, Moolap, Moss Vale, Mount Samson West, Mount Walker, Mulgoa North, Mylor, Neranwood South, Nerong, Nethercote, Onkaparinga Hills, Ourimbah West, Pheasants Nest, Piggabeen, Pomborneit, Reesville, Robertson South, Smithfield South, Smiths Gully East, Springbrook North, Springbrook South, Stoneville, Surveyors Bay, Talbingo, Tamborine South, Theresa Park West, Tomewin, Toolern Vale, Torquay East, Uki North, Upper Brookfield, Verrierdale, Verrierdale East, Wedderburn, Wellard, Wendoree Park, Wisemans Ferry, Witta, Wolvi, Woorabinda and Wooragee.
    At a committee hearing in June, the Department of Finance attempted to pour cold water on the idea of writing down the value of NBN, stating that any movements in value are dictated by accounting standards.
    “The value of NBN in government-funded statements is in line with the Australian accounting standards. So there isn’t an ability for the government to unilaterally choose to write down or determine the value of NBN,” Department of Finance first assistant secretary for financial analysis, reporting and management, governance and resource management Tracy Carroll told the committee.
    “For the financial year most recently completed, NBN is valued on the basis of the net assets. There’s a process being undertaken right now, for the financial year that will end on 30 June 2020, to consider what’s the appropriate value for the NBN that would be reflected.”
    Carroll added that there is no expectation of a write-down this year.

  •

    Multiple Tor security issues disclosed, more to come

    Over the past week, a security researcher has published technical details about two vulnerabilities impacting the Tor network and the Tor browser.
    In blog posts published last week and today, Dr. Neal Krawetz said he was going public with details of two alleged zero-days after the Tor Project repeatedly failed to address multiple security issues he had reported over the past several years.
    The researcher also promised to reveal at least three more Tor zero-days, including one that can reveal the real-world IP address of Tor servers.
    Approached for comment on Dr. Krawetz’s disclosures, the Tor Project did not reply or provide additional details on its stance on the matter.
    The first Tor security issue
    Dr. Krawetz, who operates multiple Tor nodes himself and has a long history of finding and reporting Tor bugs, disclosed the first Tor security issue last week.

    In a blog post dated July 23, the researcher described how companies and internet service providers could block users from connecting to the Tor network by scanning network connections for “a distinct packet signature” that is unique to Tor traffic.
    This signature could be used to block Tor connections from initiating, effectively banning Tor altogether — an issue that oppressive regimes are very likely to abuse.
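    For illustration, the kind of real-time signature matching Dr. Krawetz describes can be sketched in a few lines of Python. The signature bytes below are a made-up placeholder, not the actual fingerprint from his post:

    ```python
    from collections import defaultdict

    # Hypothetical placeholder pattern; the real Tor fingerprint bytes are
    # described in Dr. Krawetz's blog post and are not reproduced here.
    SIGNATURE = bytes.fromhex("deadbeef")

    flows = defaultdict(bytes)  # (src, dst) -> first bytes seen on the flow

    def inspect(src: str, dst: str, payload: bytes) -> bool:
        """Accumulate the start of each TCP flow; return True to drop it."""
        if len(flows[(src, dst)]) < 64:          # only the handshake matters
            flows[(src, dst)] += payload
        return SIGNATURE in flows[(src, dst)][:64]
    ```

    A firewall running a check like this never needs to decrypt anything; it only has to see the first packets of each new connection, which is what makes the technique attractive to censors.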
    The second Tor security issue
    Earlier today, in a blog post shared with ZDNet, Dr. Krawetz disclosed a second issue. This one, like the first, allows network operators to detect Tor traffic.
    However, while the first issue could be used to detect direct connections to the Tor network (to Tor guard nodes), the second one can be used to detect indirect connections.
    These are connections that users make to Tor bridges, a special type of entry point into the Tor network that can be used when companies and ISPs block direct access.
    Tor bridges act as proxy points and relay connections from the user to the Tor network itself. Because they are sensitive Tor servers, the list of Tor bridges is constantly updated to make it difficult for ISPs to block them.
    But Dr. Krawetz says connections to Tor bridges can be easily detected, as well, using a similar technique of tracking specific TCP packets.
    “Between my previous blog entry and this one, you now have everything you need to enforce the policy [of blocking Tor on a network] with a real-time stateful packet inspection system. You can stop all of your users from connecting to the Tor network, whether they connect directly or use a bridge,” Dr. Krawetz said.
    Both issues are specifically concerning for Tor users residing in countries with oppressive regimes.
    Dissatisfaction towards the Tor Project’s security stance
    Dr. Krawetz is publishing these Tor issues because he believes the Tor Project does not take the security of its networks, tools, and users seriously enough.
    The security researcher cites previous incidents in which he reported bugs to the Tor Project only to be told that it was aware of the issue and working on a fix that never actually shipped. These include:
    A bug that allows websites to detect and fingerprint Tor browser users by the width of their scrollbar, which the Tor Project has known about since at least June 2017.
    A bug that allows network adversaries to detect Tor bridge servers using their OR (Onion routing) port, reported eight years ago.
    A bug that lets attackers identify the SSL library used by Tor servers, reported on December 27, 2017.
    All of these issues remain unfixed, which led Dr. Krawetz to abandon his collaboration with the Tor Project in early June 2020 and take his current approach of publicly shaming the organization into taking action.

    I’m giving up reporting bugs to Tor Project. Tor has serious problems that need to be addressed, they know about many of them and refuse to do anything. I’m holding off dropping Tor 0days until the protests are over. (We need Tor now, even with bugs.) After protests come 0days.
    — Dr. Neal Krawetz (@hackerfactor) June 4, 2020

    Updated at 20:30 ET, July 30:
    The Tor Project has responded to Dr. Krawetz’s two blog posts with a lengthy statement addressing each issue, which we are reproducing in full below. In summary, the Tor Project says it is aware of the issues the researcher reported, but differs on the threat they pose to users, arguing that the detection techniques cannot be enforced at scale.
    “We have been working on the first issue raised in the blog post published 7/23 (scrollbar width) here: https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/22137. The blog post claims that the scrollbar width of a Tor Browser user can be used to distinguish which operating system they are using. There are other ways a Tor Browser user’s operating system can be discovered. This is known and publicly documented. When Tor Browser does not communicate the operating system of its user, usability decreases. Commonly used websites cease to function (ie, Google Docs). The security downside of operating system detection is mild (you can still blend with everybody else who uses that operating system), while the usability tradeoff is quite extreme. Tor Browser has an end goal of eliminating these privacy leaks without breaking web pages, but it is a slow process (especially in a web browser like Firefox) and leaking the same information in multiple ways is not worse than leaking it once. So, while we appreciate (and need) bug reports like this, we are slowly chipping away at the various leaks without further breaking the web, and that takes time.
    “The second claim in the first blog post published 7/23 outlines a way to recognize vanilla Tor traffic based on how it uses TLS with firewall rules. Fingerprinting Tor traffic is a well-known and documented issue. It’s an issue that has been discussed for more than a decade. (Example: https://gitlab.torproject.org/tpo/core/torspec/-/blob/master/proposals/106-less-tls-constraint.txt). Fixing the way Tor traffic can be fingerprinted by its TLS use is a very small step in the censorship arms race. We decided that we should not try to imitate normal SSL certs because that’s a fight we can’t win. Our goal is to help people connect to Tor from censored networks. Research has shown that making your traffic look like some other form of traffic usually leads to failure (http://www.cs.utexas.edu/~amir/papers/parrot.pdf). The strategy Tor has decided to take is better and more widely applicable, and that strategy is developing better pluggable transports. Tor has an entire anti-censorship team tackling this problem and has funding earmarked for this specific purpose.
    “The blog post published 7/30 is correct in suggesting that a finely-calibrated decision tree can be highly effective in detecting obfs4; this is a weakness of obfs4. However, what works in someone’s living room doesn’t necessarily work at nation-scale: running a decision tree on many TCP flows is expensive (but not impossible) and it takes work to calibrate it. When considering the efficacy of this, one also has to take into account the base rate fallacy: the proportion between circumvention traffic and non-circumvention traffic is not 1:1, meaning that a false positive/negative rate of 1% (which seems low!) can still result in false positives significantly outweighing true positives. That said, obfs4 is certainly vulnerable to this class of attack. The post says “However, I know of no public disclosure for detecting and blocking obfs4.” There’s work in the academic literature. See Wang et al.’s CCS’15 paper: https://censorbib.nymity.ch/#Wang2015a. See also Frolov et al.’s NDSS’20 paper: https://censorbib.nymity.ch/#Frolov2020a. The blog post cites Dunna’s FOCI’18 paper to support his claim that the GFW can detect obfs4. This must be a misunderstanding. On page 2, the paper says: “We find that the two most popular pluggable transports (Meek [7] and Obfs4 [18]) are still effective in evading GFW’s blocking of Tor (Section 5.1).” The blog post also cites another post to support the same claim: https://medium.com/@phoebecross/using-tor-in-china-1b84349925da. This blog post correctly points out that obfs4 bridges that are distributed over BridgeDB are blocked whereas private obfs4 bridges work. This means that censors are not blocking the obfs4 protocol, but are able to intercept bridge information from our distributors. One has to distinguish the protocol from the way one distributes endpoints.
    “The findings published today (7/30) are variants of existing attacks (which is great!) but not 0-days. They are worth investigating but are presented with little evidence that they work at scale.”
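    The base-rate argument is easy to verify with a little arithmetic. Here is a minimal worked example, using assumed figures (1 in 1,000 flows being Tor traffic, a 99% detection rate, and a 1% false positive rate):

    ```python
    # Worked example of the base rate fallacy the Tor Project cites.
    # All figures are assumptions chosen for illustration.
    flows = 1_000_000   # TCP flows observed by the censor
    tor_share = 0.001   # assume 1 in 1,000 flows is circumvention traffic
    tpr, fpr = 0.99, 0.01

    tor_flows = flows * tor_share                  # 1,000 real Tor flows
    true_positives = tor_flows * tpr               # 990 correctly flagged
    false_positives = (flows - tor_flows) * fpr    # 9,990 wrongly flagged

    precision = true_positives / (true_positives + false_positives)
    print(f"{precision:.1%} of flagged flows are actually Tor")  # ~9.0%
    ```

    Under these assumptions, roughly nine out of ten flagged flows would be ordinary traffic, which is the Tor Project’s point about nation-scale enforcement.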
    The Tor Project also disagreed with Dr. Krawetz’s classification of the issues he detailed on the blog as zero-days. The title has been updated accordingly.

  •

    Patch now: Cisco warns of nasty bug in its data center software

    Cisco has disclosed a critical security vulnerability in Cisco Data Center Network Manager (DCNM), a key piece of Cisco’s data-center automation software for its widely used MDS and Nexus line of networking hardware.  
    During internal testing, Cisco discovered that a bug in the REST application programming interface (API) of DCNM could allow anyone on the internet to skip over the web interface’s login and carry out actions as if they were an administrator of the device.

    The newly disclosed bug, tagged as CVE-2020-3382, is similar to the static encryption key flaw in DCNM that an external researcher discovered earlier this year. 
    The static key allows attackers to generate a valid session token on an affected device and do whatever they want through the REST API with administrative privileges.

    “The vulnerability exists because different installations share a static encryption key. An attacker could exploit this vulnerability by using the static key to craft a valid session token. A successful exploit could allow the attacker to perform arbitrary actions through the REST API with administrative privileges,” explains Cisco in the advisory. 
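    To see why a key shared across installations is so dangerous, here is a minimal sketch of HMAC-signed session tokens. The token format, key, and field names are hypothetical illustrations, not Cisco’s actual implementation:

    ```python
    import base64, hashlib, hmac, json

    # Hypothetical static key identical on every installation (the core flaw).
    STATIC_KEY = b"same-key-on-every-install"

    def make_token(claims: dict) -> str:
        """Sign session claims with the shared static key (HMAC-SHA256)."""
        payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
        sig = hmac.new(STATIC_KEY, payload.encode(), hashlib.sha256).hexdigest()
        return f"{payload}.{sig}"

    def verify_token(token: str) -> dict:
        """Server side: recompute the HMAC and compare in constant time."""
        payload, sig = token.rsplit(".", 1)
        expected = hmac.new(STATIC_KEY, payload.encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, expected):
            raise ValueError("bad signature")
        return json.loads(base64.urlsafe_b64decode(payload))

    # Because STATIC_KEY is the same everywhere, an attacker who extracts it
    # from any one install can mint admin tokens every other install accepts.
    forged = make_token({"user": "admin", "role": "administrator"})
    assert verify_token(forged)["role"] == "administrator"
    ```

    HMAC signing itself is sound here; the problem is key management, which is presumably why there is no workaround short of upgrading.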
    Admins need to install the latest versions of Cisco’s DCNM software releases to fix the bug, since there are no workarounds. However, Cisco notes it is not aware of attackers using the flaw yet.
    The bug has a severity rating of 9.8 out of a possible 10, and affects DCNM software releases 11.0(1), 11.1(1), 11.2(1), and 11.3(1).
    Cisco also reported a critical flaw with a severity rating of 9.9 in the web interface of its Cisco SD-WAN vManage software. 
    The bug, tracked as CVE-2020-3374, lets an authenticated attacker on the internet bypass authorization checks. From there, attackers could reconfigure a system, knock it offline, or access sensitive information.
    “The vulnerability is due to insufficient authorization checking on the affected system. An attacker could exploit this vulnerability by sending crafted HTTP requests to the web-based management interface of an affected system,” explained Cisco.  
    “A successful exploit could allow the attacker to gain privileges beyond what would normally be authorized for their configured user authorization level. The attacker may be able to access sensitive information, modify the system configuration, or impact the availability of the affected system.”
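    As a general illustration of this bug class (authentication without per-action authorization), here is a minimal sketch; the actions and roles are hypothetical and unrelated to vManage’s actual code:

    ```python
    # Hypothetical illustration of insufficient authorization checking.
    # Authentication proves who the user is; authorization must still be
    # enforced on every privileged action.

    ADMIN_ACTIONS = {"reconfigure", "shutdown", "read_secrets"}

    def do_action(action: str) -> str:
        return f"performed {action}"

    def handle_request_vulnerable(user: dict, action: str) -> str:
        # Vulnerable pattern: any authenticated user reaches admin actions.
        return do_action(action)

    def handle_request_fixed(user: dict, action: str) -> str:
        # Fixed pattern: check the user's authorization level per request.
        if action in ADMIN_ACTIONS and user.get("role") != "admin":
            raise PermissionError(f"{user.get('name')} may not {action}")
        return do_action(action)
    ```

    In the vulnerable pattern, a crafted request naming a privileged action succeeds for any logged-in user, which matches Cisco’s description of gaining privileges beyond the configured user authorization level.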
    Again, there are no workarounds, so admins need to install fixed releases from various software trains of Cisco SD-WAN vManage. Devices using releases 18.3 or prior will need to migrate to fixed releases from newer trains.
    Fortunately, this bug was also discovered during a Cisco investigation with a customer. The company is not aware of public exploits for the vulnerability.  