More stories


    How to get cheap internet service with no phone line


    You don’t need a phone line to get internet. In fact, other types of internet are becoming increasingly popular, and most homes can access cheap internet service without one. Depending on where you live, your budget, and the internet speeds you need, you have several options: satellite, DSL, cable, and wireless (4G) can all deliver cheap internet service without tying up your phone line or requiring you to pay for a phone bill.

    What options are available for cheap internet without a phone line?

    Satellite: As the name suggests, satellite internet uses a satellite orbiting in space to deliver internet access to your home via a dish antenna. This means even the most rural areas can typically get cheap internet service without phone line service.

    DSL: Standard DSL does require a phone line, but don’t rule it out yet. You can opt for “Naked DSL” (or standalone DSL), which provides internet via a standard telephone jack without your paying for, or having use of, phone service. This is a great option for getting cheap internet service without a phone line.

    Cable: Similar to cable television, cable internet uses a coaxial cable network instead of a phone line to get you online. The cable delivers internet to your modem, which you can connect certain devices to via an ethernet cable, or you can roam the house on wireless internet by connecting your modem to a wireless router.

    Wireless (4G): Fourth-generation wireless lets you access the internet on your mobile device, a great option for getting online without cables, cords, or phone lines. It may not be the best substitute for a wired connection in a heavy-use home, but cell towers around the world will let you surf the web with 4G on your phone or tablet.

    Can you get cheap internet without a cable connection?

    You might be surprised to learn that you can get cheap internet service without phone lines or cable connections. That’s right – you don’t need to pay for phone or TV service to get internet. Wondering how to get internet without cable? Satellite, DSL and wireless (4G) are all viable options. Many internet service providers have these technologies to choose from, with a price that fits your budget and speeds that line up with your internet use.

    What about fiber-optic internet?

    One of the newest technologies offering internet without cable is fiber internet. Fiber internet uses fiber-optic cables that transfer data as light. While your connection won’t literally run at the speed of light, fiber-optic internet offers extremely fast speeds; it is also typically more expensive than the other options for internet without cable or a phone line.



    ACCAN says 5G is an indirect substitute for fixed line NBN

    The Australian Communications Consumer Action Network (ACCAN) has said that the National Broadband Network does not face genuine competition, and where it does, it is only at the margins. “Predominantly in specific use cases and where households live in a 5G footprint and are able to afford those more expensive services. However, for the majority of households, NBN is the only wholesale provider of broadband to appropriately support their telecommunication needs,” the consumer advocacy group said in a submission on NBN’s Special Access Undertaking consultation. “Whilst we do not know the cross elasticity of demand between fixed line broadband and wireless alternatives, we would assume that the two goods are indirect substitutes.”

    The group said having a third of households not connected to the NBN did not necessarily indicate a competitive market between fixed and mobile connectivity. “Given the disproportionately high number of mobile-only households amongst households in lower socio-economic settings, there will be a significant number of households amongst the 4 million not connected to the NBN who do so out of necessity, and not choice,” it said.

    On the options put forward by NBN, ACCAN said the halfway house model that removes CVC on plans of 100Mbps and quicker was the least worst choice, followed by the reworking of its current pricing structure, and finally the flat priced model that removes CVC altogether. ACCAN pointed out it could use a May 22 proposal to construct cheaper wholesale prices than the melded plan, was concerned about why the flat fee model increased prices on 81% of NBN connections, and suggested the reworked plan did not have overage charges reflective of NBN’s cost of provisioning capacity.

    Particularly with parts of New South Wales going through their sixth week of lockdown, ACCAN called on NBN to introduce its low-income product before the current pricing discussion was completed. “NBN Co has been consulting on a low-income product for vulnerable households since 2019, and we were led to believe that this much needed product would finally come to market this year. We’re still waiting,” ACCAN CEO Teresa Corbin said. “People need connectivity now; they can’t afford to wait for months and months until the regulatory process is over.”

    ACCAN said in its submission that the entry-level plan should be the 25Mbps plan, not the current 12Mbps. “The reason for this applying over the duration of the SAU, which lasts until 2040, is that the 12/1Mbps service will become increasingly redundant as households require higher speeds to participate in the digital economy,” it wrote. “Already the 12/1Mbps service does not suit the needs of many households.” The group also called for increasing the rebate paid by NBN for each subsequent month a fixed wireless service remains underperforming, and questioned the threshold used by NBN to deem a service as having a service fault. “ACCAN understands that this threshold currently requires a service to experience 10 or more dropouts within a 24-hour period,” it said. “ACCAN considers that this service fault threshold is too high to ensure a positive experience of the network. In addition, it is unclear to ACCAN what remedies are available to consumers experiencing below 10 dropouts per 24-hour period, who may be contending with regular service drop-outs and interruptions.”

    In its most recent monthly progress report, NBN reported its right first-time installation metric had recovered to 78% after falling to a low of 74% in May. Similarly, its metric for meeting agreed fault restoration times bounced back to 74% after dropping to 70% the month prior. Both metrics had previously been in the high 80% or 90% range. “This metric has been impacted by some unexpected challenges following the recent implementation of a new appointment scheduling system,” the company said in a note attached to the report. “NBN Co is working closely with phone and internet providers and delivery partners to have these issues resolved as soon as possible.”

    The company recently spelled out how its ServiceMax Go (SMAX-Go) app for technicians interacts with its ServiceNow, ServiceMax, and Oracle back-ends, as well as the cost of parts of the system. “The cost to develop the ServiceMax (including SMAX-Go app) component of the system architecture to support the new field contracts under Unify was AU$13.3 million total, over FY19, FY20, and FY21,” NBN said. “SMAX-Go went live in Victoria and South Australia on 14 April 2021, followed by New South Wales, Tasmania, and Australian Capital Territory on 28 April 2021. The app is yet to go live in Western Australia, Northern Territory, and Queensland.” During a hearing in May, NBN said the problems technicians experienced when the app launched in NSW were because the system was overloaded. “What happened, when literally it was rolled out in New South Wales, the platform went down and we then had, due to literally the doubling of our workforce on the system, we then add the issues around the functionality where it wasn’t syncing properly, so therefore it caused a poor experience,” NBN COO Kathrine Dyer said.
    Dyer said the software was hit by a trio of factors: a two-day platform outage that affected NBN and technicians, syncing that wasn’t working, and functionality that was still being updated.


    Starlink: Elon Musk's satellite internet explained

    What is Starlink?

    Starlink is a satellite internet company owned by Elon Musk, the founder of aerospace company SpaceX. The company’s first priority is bringing high-speed internet to rural areas that don’t currently have it. The beta price for the internet service is an upfront cost of $499 for hardware and a monthly cost of $99 for internet service.

    Elon Musk is famous for his technology innovations when it comes to his companies, Tesla and SpaceX. But his latest project hits closer to home for many people: bringing high-speed internet access to people in rural areas who don’t currently have access to it. Musk is accomplishing that through Starlink, a satellite internet company within SpaceX. Starlink is rapidly growing its customer base and expects to serve even more customers in 2021, according to predictions by Forbes.

    What do you need to know about Starlink?

    In 2002, Elon Musk founded SpaceX to revolutionize space technology and reduce space transportation costs. In 2020, the company expanded its efforts to provide satellite internet service. According to Starlink, its primary mission and first order of business is to bring high-speed internet to people who don’t currently have access to it, meaning primarily homes in rural areas. In fact, in late 2020, the Federal Communications Commission awarded SpaceX more than $885 million to help fund its efforts to make high-speed internet more accessible. SpaceX was just one of many companies awarded the grant, and the FCC has assigned the company roughly 643,000 locations in 35 states where it is to bring high-speed internet.

    SpaceX’s internet service, Starlink, won’t only be available to rural customers. The service is currently in beta (the company calls it the “Better Than Nothing Beta”), meaning only certain people have access to it. The company is quickly expanding and accepting preorders from people who would like to sign up when the service is available in their area. Because the service is in beta, you can expect it to change and improve over time. In fact, a tweet from Musk in late February indicated the company was testing system upgrades, and customers might see much higher download speeds at times.

    How does Starlink work?


    Starlink is a satellite internet service, which uses a satellite to transmit a signal to your home. First, the internet service provider sends the internet signal via fiber up to satellites in space. The signal is routed through a central location called a network operations center. Finally, the internet company transmits that internet signal to individual customers. In the case of a satellite internet company like Starlink, customers receive it using individual satellite dishes. SpaceX has already launched more than 1,000 satellites into space. And according to Starlink, its satellites orbit closer to Earth, which reduces latency (the time it takes for the signal to be transferred).
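    The latency point is simple geometry: the farther away the satellite, the longer the round trip. As a rough, illustrative sketch (the altitudes are assumptions, not figures from the article), here is the best-case propagation delay for a Starlink-style low-Earth-orbit satellite at roughly 550 km versus a traditional geostationary satellite at about 35,786 km; real-world latency is higher once routing and processing are added.

```python
# Back-of-the-envelope minimum round-trip propagation delay for satellite
# internet at two assumed orbital altitudes (not figures from the article).
SPEED_OF_LIGHT_KM_S = 299_792  # kilometres per second

def min_round_trip_ms(altitude_km: float) -> float:
    """Best case: dish -> satellite -> ground and back, straight up and down."""
    # Four legs (up and down for the request, up and down for the reply),
    # ignoring slant angles, routing, and processing time.
    return 4 * altitude_km / SPEED_OF_LIGHT_KM_S * 1000

for label, altitude_km in [("LEO, ~550 km (Starlink-style)", 550),
                           ("GEO, ~35,786 km (traditional)", 35_786)]:
    print(f"{label}: ~{min_round_trip_ms(altitude_km):.0f} ms round trip")
```

    Under these assumptions the low orbit works out to under 10 ms of unavoidable delay, versus roughly half a second for a geostationary link, which is why low-orbit constellations feel much more responsive.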

    What internet speeds does Starlink offer?

    According to Starlink’s website, beta customers can expect to see speeds of anywhere from 50 to 150 Mbps. It expects those speeds to increase as its system is enhanced. But the real question is, how do these speeds compare to other internet providers? To start, 150 Mbps is considerably slower than the speeds of up to 1,000 Mbps that many other internet service providers offer. But Starlink is a satellite internet company, and that type of internet is often slower than fiber-optic. When compared only with other satellite internet providers like HughesNet and Viasat, 150 Mbps is actually quite fast. The other good news for Starlink customers is that the service doesn’t currently have data caps, meaning customers aren’t throttled no matter how much data they use.

    How much does Starlink cost?

    Starlink’s beta service comes with a price tag of $99 per month. There’s also a $499 upfront cost to cover the Starlink Kit, which includes all of the necessary hardware, such as a small satellite dish, as well as a router, power supply, and mounting tripod. Keep in mind that these rates are just for beta customers. Prices could fluctuate when the service becomes more readily available.
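    As a quick back-of-the-envelope check of what that pricing adds up to (assuming the beta prices hold for a full year, and ignoring taxes and shipping, which the article does not cover):

```python
# Hypothetical first-year cost of Starlink's beta service, assuming the
# quoted beta prices stay fixed and ignoring taxes and shipping.
hardware_upfront = 499   # one-time Starlink Kit cost, in USD
monthly_service = 99     # monthly service fee, in USD
months = 12

first_year_total = hardware_upfront + monthly_service * months
effective_monthly = first_year_total / months

print(f"First-year total: ${first_year_total:,}")            # $1,687
print(f"Effective monthly cost: ${effective_monthly:,.2f}")  # $140.58
```

    Spread over the first year, the hardware fee pushes the effective cost to roughly $140 a month; in later years, with the kit already paid for, it drops back to the $99 service fee.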

    How to pre-order Starlink

    Customers can preorder their Starlink Kit on the company’s website. Starlink is currently in beta, meaning not everyone can sign up. The service is presently only available to a limited number of users per coverage area, and orders are fulfilled on a first-come, first-served basis. When visiting the company’s website, customers are prompted to enter their address to find out whether service is available in their area. In the likely event that the company hasn’t expanded coverage to your area yet, you can preorder your internet service. Customers pay a $99 upfront preorder cost to reserve a spot on the waiting list, but the full amount is not due until the Starlink Kit is ready to ship. According to the company’s website, roughly 10,000 customers currently have access to Starlink. It plans to expand into many service areas later in 2021.

    Will Starlink be worth it?

    If everything that SpaceX claims Starlink will be is true, then maybe. The price tag of $99 a month is steep for speeds of only 50 to 150 Mbps. In context, that’s faster than current satellite internet providers, but not as fast as the top high-speed internet providers, which can reach at least 940 Mbps. However, considering rural internet service is notoriously slow or completely unavailable, Starlink meets an otherwise unmet need by connecting rural homes to high-speed internet. Furthermore, Musk’s indications that higher download speeds could be available after system enhancements mean that Starlink could be the next hot ISP.



    Fiber vs. cable: What is the difference?


    Staying connected in our modern world is no longer a simple endeavor. Not only will you be choosing between providers for your internet privileges and remote control rights, you’ll also have to choose the technology that powers those entertainment sources. DSL, satellite, fiber-optic, and cable are all options for internet and TV service across the country, and keeping track of the differences can be a difficult and involved process.

    While DSL and satellite services have great availability, they can hardly compete with the speed and quality that fiber-optic and cable connections offer. The difference between fiber and cable is a bit more nuanced, so we’ve pitted the two advanced services against each other to help you navigate your search for the best telco service. The short version: Fiber is faster, more reliable, and more expensive. Cable is slower, but it still supports fast speeds and is more widely available.

    What’s the difference between fiber and cable?

    Many of the differences between fiber and cable can be chalked up to the way they transmit information. Fiber-optic technology uses small, flexible strands of glass to transmit information as light. The strands are wrapped in a bundle and protected with layers of plastic, making fiber faster, clearer, and able to travel great distances. Fiber cables can also carry more data than a bundle of copper cables of the same diameter.

    With traditional cable, data is transmitted via electricity over coaxial cables. Inside a coax cable is a copper core surrounded by insulation, an aluminum and copper shield, and an outer plastic layer. Because it uses electrical signals, cable is more susceptible than fiber to weather events (like extreme cold and storms) and electromagnetic interference.

    Is there a disparity in quality?

    Because of differences in transmitting technology, fiber-optic services generally offer better quality. Most notably, fiber is faster. Fiber speeds typically range from 250 Mbps to 1,000 Mbps. It would take you less than 10 seconds to download a two-hour movie with 1,000 Mbps (versus 10+ minutes on a 20 Mbps connection), as the sketch below shows. These speeds far outpace the median household internet speed of 72 Mbps (as of September 2017). Fiber-optic internet providers also tend to offer symmetrical upload and download speeds, which means you can upload information to the internet just as fast as you can download it. That is still rare among providers and will appeal to heavy internet users. If you’re constantly uploading information and data (like video conferencing for work or when gaming), this structure could save you a lot of time and minimize any lag.

    Cable internet networks typically offer customers download speeds that range from 10 Mbps to 200+ Mbps, although upload speeds are a fraction of those numbers. The higher-speed plans are likely to be enough for most households, based on FCC guidelines and our own research. Cable’s lower speed capabilities can cater to smaller households and minimal internet users who just do a bit of browsing and occasional movie streaming. But one of the big drawbacks of cable technology is that you share your bandwidth with neighbors: Your speeds will slow during evenings if the whole block is bingeing the latest season of “Stranger Things”. Overall, you’re looking at a less reliable network that is susceptible to more outside factors.
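    The download-time comparison above is straightforward arithmetic. Here is a minimal sketch of it, assuming a roughly 1.2 GB movie file (an illustrative size, not a figure from the article) and an ideal, uncontended connection; bigger files and real-world overhead stretch these times considerably.

```python
# Illustrative best-case download times; the 1.2 GB file size is an assumption.
FILE_SIZE_GB = 1.2
FILE_SIZE_MEGABITS = FILE_SIZE_GB * 8_000  # 1 GB ~= 8,000 megabits

def download_seconds(speed_mbps: float) -> float:
    """Time to pull the whole file at a sustained speed, in seconds."""
    return FILE_SIZE_MEGABITS / speed_mbps

for speed_mbps in (20, 200, 1_000):
    seconds = download_seconds(speed_mbps)
    print(f"{speed_mbps:>5} Mbps: {seconds / 60:5.1f} min ({seconds:6.1f} s)")
```

    At 1,000 Mbps the example file arrives in under 10 seconds, while the same file on a 20 Mbps connection takes around eight minutes; the gap only widens for larger, higher-quality files.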

    Is one more available than the other?

    For customers, availability will be the starkest difference between fiber-optic and cable service. The Federal Communications Commission (FCC) has estimated that only about 14% of the U.S. can access fiber-optic speeds of 1,000 Mbps or more. By the same measure, cable internet has 88% nationwide coverage at speeds of 25 Mbps or more. That means you’re far more likely to find cable providers who service your address than fiber-optic ones.

    Why the exclusivity? Building out fiber technology is a long, expensive process. Analysts have estimated that Google Fiber’s early nationwide expansion plan would have cost the company $3,000-$8,000 per home. If a provider like Verizon FiOS has decided to build out service in your neighborhood, you’ve essentially won the lottery. Businesses interested in a fiber connection as a private, secure, and reliable network option can purchase Direct Internet Access (DIA) fiber and have a dedicated line built out to the office. Homeowners hoping for fiber will have to cross their fingers and watch the market.

    Is fiber or cable best for you?

    For most people, cable technology offers great entertainment service. Its higher-tier internet speeds can support a full household of internet users. We’d also recommend cable for people who want to bundle their services to keep prices down. From what we’ve seen, fiber’s TV options are pretty limited, and providers will often contract another provider’s TV service in order to offer a bundle. For the best TV programming and bundle deals, you’re better off with cable service.

    Fiber speeds are likely more than most people need right now, but it’s worth noting that fiber is future-proof. Every year, the internet becomes more central to our lives, technology advances, and media quality increases (from HD to 4K to 8K). Each season sees more 4K streaming content released, which takes more data and speed to run. Nikolai Tenev, the founder of DigidWorks, told us that tech enthusiasts of every kind will benefit from fiber: designers, gamers, software engineers, etc. Tenev said, “Gamers often need to upload video in real-time while playing an online game. Even the slightest drop in connection or speed can result in them losing the match.” If fiber-optic technology is available at your address, internet enthusiasts and large households will enjoy the perks the most.



    DeadRinger: Chinese APTs strike major telecommunications companies

    Researchers have disclosed three cyberespionage campaigns focused on compromising networks belonging to major telecommunications companies. 

    On Tuesday, Cybereason Nocturnus published a new report on the cyberattackers, believed to be working for “Chinese state interests” and clustered under the name “DeadRinger.” According to the cybersecurity firm, the “previously unidentified” campaigns are centered in Southeast Asia, and in a similar way to how attackers secured access to their victims through a centralized vendor in the cases of SolarWinds and Kaseya, this group is targeting telcos.

    Cybereason believes the attacks are the work of advanced persistent threat (APT) groups linked to Chinese state sponsorship due to overlaps in tactics and techniques with other known Chinese APTs. Three clusters of activity have been detected, with the oldest examples appearing to date back to 2017. The first cluster, believed to be operated by or under the Soft Cell APT, began its attacks in 2018. The second cluster, said to be the handiwork of Naikon, surfaced and started striking telcos in the last quarter of 2020, continuing up until now. The researchers say that Naikon may be associated with a military bureau of the Chinese People’s Liberation Army (PLA). The third cluster has been conducting cyberattacks since 2017 and has been attributed to APT27/Emissary Panda, identified through a unique backdoor used to compromise Microsoft Exchange servers up until Q1 2021.

    Techniques noted in the report included the exploitation of Microsoft Exchange Server vulnerabilities (long before they were made public), the deployment of the China Chopper web shell, the use of Mimikatz to harvest credentials, the creation of Cobalt Strike beacons, and backdoors to connect to a command-and-control (C2) server for data exfiltration.

    Cybereason says that in each attack wave, the purpose of compromising telecommunications firms was to “facilitate cyber espionage by collecting sensitive information, compromising high-profile business assets such as the billing servers that contain Call Detail Record (CDR) data, as well as key network components such as the domain controllers, web servers and Microsoft Exchange servers.” In some cases, the groups overlapped and were found in the same target environments and endpoints at the same time. However, it is not possible to say definitively whether they were working independently or are all under the instruction of another, central group. “Whether these clusters are in fact interconnected or operated independently from each other is not entirely clear at the time of writing this report,” the researchers say. “We offered several hypotheses that can account for these overlaps, hoping that as time goes by more information will be made available to us and to other researchers that will help to shed light on this conundrum.”


    Auditor finds WA Police accessed SafeWA data 3 times and the app was flawed at launch

    The Auditor-General of Western Australia has handed down her report into the state’s COVID-19 check-in app, SafeWA, revealing that not only did police access its data, but the app had a number of flaws when it was released. WA Health delivered the SafeWA app in November 2020 to carry out COVID contact tracing.

    In its report [PDF], the Office of the Auditor-General (OAG) said it was concerned about the use of personal information collected through SafeWA for purposes other than COVID contact tracing. In mid-June, the WA government introduced legislation to keep SafeWA information away from law enforcement authorities after it was revealed the police force used it to investigate “two serious crimes”. The public messaging around the app was that it would be used only for COVID contact tracing purposes.

    See also: Australia’s cops need reminding that chasing criminals isn’t society’s only need

    “In March 2021, in response to our audit questioning around data access and usage, WA Health revealed it had received requests and policing orders under the Criminal Investigation Act 2006 to produce SafeWA data to the WA Police Force,” the report said. The WA Police Force ordered access to the data on six occasions and requested access on one occasion. The orders were issued by Justices of the Peace after application by the WA Police Force.

    The WA Police Force was granted orders to access SafeWA data for matters under investigation, including an assault that resulted in a laceration to the lip, a stabbing, a murder investigation, and a potential quarantine breach. The OAG said WA Health ultimately provided access in response to three of the orders before the passage of the legislation. Applications made to WA Health on December 14, December 24, and March 10 were provided to the cops; applications on February 24, April 1, May 7, and May 27 were not.

    The SafeWA Privacy Policy, which users are required to agree to prior to use, details that WA Health collects, processes, holds, discloses, and uses personal information of people who access and use the SafeWA mobile application. The OAG said it also states that information on individuals may be disclosed to other entities such as law enforcement, courts, tribunals, or other relevant entities. The information that SafeWA captures includes sensitive personal information such as name, email address, phone number, venue or event visited, time and date, and information about the device used to check in. As of 31 May 2021, over 1.9 million individuals and 98,569 venues were registered in the SafeWA application. The total number of check-in scans between December 2020 and May 2021 exceeded 217 million.

    In addition to police accessing contact tracing data, shortly after the initial release of SafeWA, the app suffered a system outage due to poor management of changes, with the OAG saying this put the availability of SafeWA at risk. “WA Health has addressed this risk and continues to manage the vendor contract which has required changes as the state’s strategy on the use of SafeWA has evolved,” the report said. The app was delivered by GenVis and is hosted in the Amazon Web Services (AWS) cloud. The total contract value was initially AU$3 million, but it has since risen to AU$6.1 million over three years.

    GenVis said it has processes in place to delete check-in data 28 days after collection. Should a member of the public test positive for COVID-19 or qualify as a close contact, WA Health may store a subset of the data relevant to that case indefinitely. The OAG said this is contrary to WA Health’s logging and monitoring standard, which requires retention for at least seven years and, where possible, for the lifecycle of the system. Of further concern to the OAG was that WA Health does not monitor SafeWA access logs to identify unauthorised or inappropriate access to SafeWA information.

    The OAG also raised issues with WA Health and GenVis only being able to request, not enforce, that AWS not transfer, store, or process data outside Australia. WA Health uses provider-managed encryption keys for SafeWA, which are stored in the AWS database, instead of self-managed keys to which the cloud provider has no visibility or access. “WA Health advised us that the current solution is required so that AWS can access keys through software to perform platform maintenance and support the vendor with technical issues,” the report said. “Although the likelihood is low, the cloud provider could be required to disclose SafeWA information to overseas authorities as it is subject to those laws.”

    See also: Attorney-General urged to produce facts on US law enforcement access to COVIDSafe

    Prior to going live, WA Health identified that SafeWA registration could be completed with an incorrect number or someone else’s phone number, the OAG added. “This was because SafeWA did not fully verify a user’s phone number during the registration process,” it said. “Due to the timing of SafeWA development and WA Health’s need to balance risk with implementation, this issue was only partially resolved prior to going live. The remaining weaknesses could be exploited to register fake accounts and check-ins.” The issue was resolved in February.

    It was not just the cops that may have accessed contact tracing data, however, with the OAG noting it was also concerned about the limited communication around WA Health’s use of personal information collected by other government entities, including Transperth SmartRider, Police G2G border crossing pass data, and CCTV footage, in its contact tracing efforts. During the audit, the OAG also identified that WA Health’s Mothership and Salesforce-based Public Health COVID Unified System (PHOCUS) accesses SafeWA data. “When WA Health receives confirmation of a positive COVID-19 case from a pathology clinic, it uses PHOCUS to collate data relevant to the case from several sources,” the report says. “WA Health has not provided enough information to the community about other personal information it accesses to assist its contact tracing efforts.” The Mothership contact tracing application, the OAG said, has security weaknesses, including a weak password policy and inconsistent use of multi-factor authentication. The OAG is preparing a separate report focused on the Mothership and PHOCUS.


    Constant review of third-party security critical as ransomware threat climbs

    Lulled into complacency, businesses face risks of supply chain attacks even after they have done their due diligence in assessing their third-party suppliers’ security posture before establishing a partnership. In this first piece of a two-part feature on ransomware, ZDNet discusses the need for continuous review of all touchpoints across the supply chain, especially those involving critical systems and data.

    Enterprises typically would give their third-party suppliers “the keys to their castle” after carrying out the usual checks on the vendor’s track record and systems, according to Steve Turner, a New York-based Forrester analyst who focuses on security and risk. They believed they had done their due diligence before establishing a relationship with the supplier, Turner said, but they failed to understand that they should be conducting reviews on a regular basis, especially with their critical systems suppliers.

    “Anyone who has the keys to the castle, we should know them in and out and have ongoing reviews,” he said in a video interview with ZDNet. “These are folks that are helping you generate revenue and, operationally, should be held accountable [to be] on the same level as your internal security posture.”Third-party suppliers should have the ability to deal with irregular activities in their systems and the appropriate security architecture in place to prevent any downstream effects, he added. Capgemini’s Southeast Asia head of cybersecurity Hamza Siddique noted that technical controls and policies established by third-party or supply chain partners did not always match up to their clients’ capabilities. This created another attack surface or easy target on the client’s network and could lead to risks related to operations, compliance, and brand reputation, Siddique said in an email interview.

    To better mitigate such risks, he said Capgemini recommends a third-party risk management strategy that pulls best practices from NIST and ISO standards. It encompasses, amongst others, the need to perform regular audits, plan for third-party incident response, and implement restricted and limited access mechanisms. The consulting firm’s service portfolio includes helping its clients build a strategy around detection and analysis as well as containment and recovery.

    Turner urged the need for regular reassessments of third-party systems or, if this could not be carried out, for organisations to have in place tools and processes to safeguard themselves against any downstream attacks. “There needs to be inherent security controls so if something goes off baseline, these can react to ensure [any potential breach] doesn’t spread. A zero trust architecture delivers on that,” he said. “Suppliers have an inherent trust relationship [with enterprises] and this needs to stop.”

    Steve Ledzian, FireEye Mandiant’s CTO and Asia-Pacific vice president, acknowledged that it was challenging to prevent supply chain attacks because these looked to abuse an existing level of trust between organisations and their third-party vendors. However, he said there still were opportunities to detect and mitigate such threats since hackers would need to carry out other activities before launching a full attack. For instance, after successfully breaching a network via a third-party vendor, they would need to map out the targeted organisation’s network, identify the systems that held critical data, and figure out the privileged credentials they needed to steal to gain access, before they could move laterally within the network. “Once the hacker is in your network, and you’re in detection mode, you have the opportunity to identify and stop them before they are able to breach your data,” Ledzian said in a video interview, stressing the importance of tools and services that enabled enterprises to quickly detect and respond to potential threats. Their defence strategy against ransomware attacks also should look beyond simply purchasing products and into how systems were configured and architected. The main objective here was to bolster the organisation’s resilience and ability to contain such attacks, he added.

    Acronis’ CISO Kevin Reed also noted that the majority of attacks today still were neither highly sophisticated nor zero-day attacks. Attackers typically needed time and effort after identifying a vulnerability to develop an exploit for it and to make it work successfully. Reed said in a video interview that hackers usually would take several days to develop a workable exploit, and this task was increasingly difficult with modern software architectures. “So it takes time to weaponise a vulnerability,” he said, adding that even highly skilled hackers would take 72 hours to do so. This meant organisations should act quickly to plug any vulnerabilities or deploy patches before exploits were available. He advocated the need for organisations to assess their suppliers’ security posture, validating and cross-verifying that these third-party vendors had the right processes and systems in place. This might be more challenging for small and midsize businesses (SMBs) that did not have the resources or expertise to do so, he noted. Reed added that these companies typically depended on their managed service providers to fulfil the responsibility. Here, he underscored the need for managed service providers to step up, especially in the wake of the Kaseya attack.

    Increased partnership between hackers a worrying trend

    Ransomware attacks, though, may be primed to get more sophisticated and deployed more quickly in the future, as they are no longer developed by a single hacker. According to Ledzian, cyberattacks increasingly are broken down into different parts and delivered by different threat actors specialised in each piece of the attack. One might be tasked to build the malware, while other affiliates focus on reconnaissance, breaching a network, and developing the exploit. “When you have specialised skillsets, then each component is more competent,” he cautioned.


    Sherif El-Nabawi, CrowdStrike’s Asia-Pacific and Japan vice president of engineering, also highlighted the rise in teamwork amongst cybercriminals and the emergence of ransomware-as-a-service. Describing this as an alarming trend, El-Nabawi noted that five or six separate groups specialised in all aspects of a ransomware chain could band together, so a single group no longer needed to develop everything on its own. Such partnerships could entice more threat actor groups to come into play and fuel the entire industry, he said.

    Ledzian added that ransomware attacks also had evolved to become multi-faceted exploitation, with cybercriminals realising data theft would have a more severe impact on businesses than a service disruption. Having data backups would no longer be sufficient in such instances, as attackers gained greater leverage over businesses concerned about threats to make confidential data public, he said.

    According to CYFIRMA CEO and Chairman Kumar Ritesh, cybercriminals were moving their targets towards young companies and large startups with access to large volumes of personal data, such as developers of “super apps” and mobile apps. He further pointed to increasing focus on OT (operational technology) systems, such as oil and gas and automotive, as well as process manufacturing industries. In particular, Ritesh told ZDNet that there was growing interest in autonomous and connected vehicles, whose dashboards enabled users to access their smart home and Internet of Things (IoT) systems. Some of these systems, he noted, lacked basic security features, with communication links between car and home systems left unsecured and at risk of being exploited.

    Cybercriminals also were shifting focus towards individuals and high-level influencers, such as employees working in their organisation’s product research team or who had privileged credentials that gave them access to critical data and systems, he said. With remote work now the norm amidst the global pandemic, he added that such risks were exacerbated as personal devices that were not adequately secured could be easily breached to give hackers access to a company’s network and its intellectual property.


    Regulations against ransomware payment not ideal solution

    With ransomware attacks increasing, legislation has been mooted as a way to bar companies from paying up and further fuelling such activities. In this second piece of a two-part feature on ransomware, ZDNet looks at how such policies can be difficult to enforce and may result in more dire consequences.

    Regulations that compelled victims not to pay up could put these businesses in a precarious position, said Steve Turner, a New York-based Forrester analyst who focuses on security and risk. For one, any debate over whether to pay up would be muted when physical lives were at stake. Turner pointed to ransomware attacks that brought down critical infrastructure systems such as power and healthcare, impacting the likes of US Colonial Pipeline, Ireland’s Health Service Executive, and Germany’s Duesseldorf University Hospital.

    The US pipeline operator paid up almost $5 million in ransom, the bulk of which was later recovered by authorities, while the Irish healthcare operator refused to pay and spent weeks struggling to recover from the attack, affecting hundreds of patients. The Duesseldorf hospital’s inability to function also indirectly caused the death of a patient whose treatment was delayed because she had to be rerouted to a hospital further away.

    Capgemini’s Southeast Asia head of cybersecurity Hamza Siddique noted that threat actor groups now had such great success in inflicting critical impact on their victims that it left these organisations with few viable options other than to pay up. “Paying the ransom may be the less expensive option for a cash-strapped company than engaging in the painstaking [task of] rebuilding company systems and databases,” Siddique said in an email interview. “Other entities may choose to pay the threat actor in hopes of avoiding the public release of sensitive information, which may lead to bankruptcy or legal issues.” He advised victims to make “informed decisions” on whether to fork out the ransom or embark on the more difficult path of building from scratch. Paying the ransom not only encouraged threat actors to engage in future ransomware attacks, but also provided funds for these groups to act against nations, governments, and foreign policy interests, he noted.

    On whether penalties should be imposed on companies that chose to pay the ransom, he said this decision should be made in line with the country’s IT policy and cost-benefit analysis. Foremost, emphasis should be on not paying, Siddique said, adding that this should be the case if the impact on the business was low. However, if the impact could lead to bankruptcy or major legal issues, organisations should be allowed to decide if they wanted to pay the ransom, he said. Acronis’ CISO Kevin Reed noted that in the short-term, regulations that outlawed ransom payment could have significant adverse effects, but in the long-term, might have an overall positive impact. He said in a video interview that cybercriminals were interested mainly in financial gains and if they faced increasing obstacles in their efforts to extract money, they would stop doing it. However, he cautioned, criminals tended to be creative in how they extorted money, moving from one plan to another until they succeeded in their goal.

    Regulations on cryptocurrency also not fool-proof

    CYFIRMA CEO and Chairman Kumar Ritesh suggested that regulations should instead focus on virtual currencies, since these were used to orchestrate ransom payments. Cryptocurrency exchanges or trading firms could be mandated to provide information to the relevant authorities so transactions or accounts with the targeted unique identifiers could be blocked or frozen, Ritesh said in a video interview. Without a trading platform on which to complete the transaction, cybercriminals would find it more difficult to convert their virtual currencies into fiat money. Turner noted that there already were regulations governing legitimate cryptocurrency trading platforms such as Coinbase, which included intricate identification processes before transactions were processed.

    Such policies that identified movements across these cryptocurrency hubs could help cut down illicit activities conducted by regular scammers who were not very tech-savvy. However, threat actor groups behind the recent massive ransomware attacks were not run-of-the-mill criminals, the Forrester analyst said in a video interview. For one, they would not be trading cryptocurrencies through common digital wallets. They typically had the skillsets to quickly move and launder these currencies, much like any organised crime operation, so these could be “clean” for use in the real world, he said.

    Furthermore, Turner added that cybercriminals would simply use alternative payment modes should more regulations be introduced to monitor cryptocurrency transactions or bar companies from paying ransoms. “Attackers will just find another payment mechanism that hasn’t been outlawed,” he said. “It could be something as [innocuous] as Walmart gift cards, as long as it doesn’t enable hackers to be traced and allows companies to pay the ransom. Outlawing [the use of] cryptocurrency will only put ransomware victims in a bad position.” Turner noted, though, that some form of regulation could raise the collective security posture of companies across the board, since there would be stronger motivation to avoid being put in a position where they would be held ransom.

    Policies needed to ensure vendors continue critical support

    Regulations also may be necessary to ensure businesses remain protected when vendors cease support for IT products and systems. For example, Western Digital in June advised users of its My Book Live and My Book Live Duo to unplug their devices from the internet following a series of remote attacks that triggered a factory reset, wiping out all data on the devices. The breach was due to a vulnerability introduced in April 2011 through a coding oversight. Launched in 2010, the portable storage devices were issued their final firmware update in 2015, after which Western Digital discontinued support for the products. The storage vendor later provided data recovery services for customers who lost data as a result of the attacks.

    Siddique noted that organisations today were mostly digital in nature and highly dependent on vendors and suppliers to provide support as well as reliable products over a longer period of time, even after these systems were discontinued. “It’s imperative that there should be policies in place for a vendor to provide minimum support for discontinued product lines, considering client may not be in position to upgrade their software or may have certain dependency on the old version of the products,” he said. There should be clearly defined policies for such support to be provided for a specific minimum number of years after a product’s market release, he suggested. Vendors also should be expected to provide information on upcoming product releases and ease migration to new products. He said changes could be made in the SLA (service level agreement) and, if it was not viable for vendors to maintain a support team for discontinued products, there should be a minimum requirement for such provisions based on the severity of security vulnerabilities.

    At the very least, Turner noted, vendors that choose to continue supporting online services linked to their products should also continue to offer support for the actual products. Otherwise, these online services should be disabled, he said, noting that Western Digital should have disabled the remote access or online services for the My Book models when it cut support for the products in 2015. “If there are no eyes on it, someone is going to exploit it,” the analyst said. He added that the optics would not look good for a manufacturer of data storage products to suffer a breach of this scale. Any potential regulation here could look at requiring vendors to support a product as long as they supported the services that required the product to connect to the internet, he said.

    However, Reed suggested that such policies, if introduced, should apply only to critical systems such as medical and industrial control systems. He noted that some hospitals today operated MRI (magnetic resonance imaging) machines running old versions of Windows that were no longer supported by Microsoft, and these machines could impact actual lives. While he agreed that software vendors should take more responsibility for their products, he said legislation was not necessary for all sectors.