More stories

  • UK association defends ransomware payments in cyber insurance policies

    The Association of British Insurers (ABI) has been accused of “funding” organized crime by including ransomware blackmail payments in cyber insurance policies. 

    As reported by the BBC, ABI said that the inclusion in first-party policies was not “an alternative” to organizations doing everything else possible to mitigate the damage and operational risk caused by cyberattacks, but without it, victims could face “financial ruin.”
    Oxford University’s Prof. Ciaran Martin said that insurers taking this approach were “funding organized crime” and as it remains legal to do so, there are “incentives” to pay up. 
    Ransomware can be one of the most devastating forms of malware to land on corporate networks. Once ransomware executes on a vulnerable system, it will usually encrypt resources, files, and backups, and will then lock users out. 
    A blackmail payment is then demanded in return for a decryption key — which may or may not work — often in cryptocurrency such as Bitcoin (BTC) or Ethereum (ETH).
    Popular and well-known ransomware strains include WannaCry, Cerber, and Locky.
    Businesses and organizations without viable backups or with an urgent need to restore their systems — such as hospitals and energy utilities — are then under extreme pressure to pay up. 

    Paying is not illegal in the UK, and if victims have previously taken out cyber insurance policies covering ransomware, this is when their protection comes into play.
    A spokesperson for the ABI told the publication that for claims to be processed, “reasonable precautions” in terms of security have to have been taken. This is comparable to filing a burglary claim, where the insurer considers whether your home had reasonable measures — such as locked doors and windows — in place to prevent theft in the first place.
    According to US cyber insurance provider Coalition, ransomware incidents accounted for 41% of claims filed during the first half of 2020. 
    This week, credit rating service Moody’s released its 2021 outlook for cybersecurity and cyber-related risks. The agency predicts that the “continued proliferation” of ransomware attacks will force insurers to re-examine their cyber insurance policies and coverage over the coming year. 
    Moody’s predicts that as more claims are made, policies covering ransomware will surge in price in what is a “small, but growing line of business.”
    “Insurers have responded to rising financial losses by raising premium rates and narrowing terms and conditions, including raising deductibles or lowering policy limits, or both,” the company says. “Higher insurance costs, in turn, could weigh on the finances of some organizations, causing them to rethink the purchases of these products.”

  • 10-year-old Sudo bug lets Linux users gain root-level access

    A major vulnerability impacting a large chunk of the Linux ecosystem has been patched today in Sudo, an app that allows admins to delegate limited root access to other users.

    The vulnerability, which received a CVE identifier of CVE-2021-3156, but is more commonly known as “Baron Samedit,” was discovered by security auditing firm Qualys two weeks ago and was patched earlier today with the release of Sudo v1.9.5p2.
    In a simple explanation provided by the Sudo team today, the Baron Samedit bug can be exploited by an attacker who has gained access to a low-privileged account to escalate to root, even if the account isn’t listed in /etc/sudoers — the config file that controls which users are allowed to run sudo in the first place.
    For the technical details behind this bug, please refer to the Qualys report or the video below.
    [Embedded video]
    While there have been two other Sudo security flaws disclosed over the past two years, the bug disclosed today is the one considered the most dangerous of all three.
    The two previous bugs, CVE-2019-14287 (known as the -1 UID bug) and CVE-2019-18634 (known as the pwfeedback bug), were hard to exploit because they required complex and non-standard sudo setups.
    Things are different for the bug disclosed today, which Qualys said impacts all Sudo installs where the sudoers file (/etc/sudoers) is present — the case for most default Linux+Sudo installs.

    CVE-2021-3156 basically means free root on any setup that has sudo installed, omfg
    — Alba 🌸 (@mild_sunrise) January 26, 2021

    Making matters worse, the bug also has a long tail. Qualys said the bug was introduced in the Sudo code back in July 2011, effectively impacting all Sudo versions released over the past ten years.
    The Qualys team said they were able to independently verify the vulnerability and develop multiple exploit variants for Ubuntu 20.04 (Sudo 1.8.31), Debian 10 (Sudo 1.8.27), and Fedora 33 (Sudo 1.9.2).
    “Other operating systems and distributions are also likely to be exploitable,” the security firm said.
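    For admins who want a quick, rough sense of exposure, the short Python sketch below compares the locally installed Sudo version string against the patched 1.9.5p2 release. This is an illustrative assumption rather than Qualys’ own test, and distributions frequently backport fixes without changing the version number, so treat the output as a hint rather than a verdict.

        # Rough heuristic: is the installed Sudo older than the patched 1.9.5p2 release?
        # Caveat: distros often backport fixes, so the version string alone is not proof.
        import re
        import subprocess

        PATCHED = (1, 9, 5, 2)  # 1.9.5p2

        def installed_sudo_version():
            """Parse (major, minor, patch, patchlevel) out of `sudo --version`."""
            out = subprocess.run(["sudo", "--version"], capture_output=True, text=True).stdout
            match = re.search(r"version (\d+)\.(\d+)\.(\d+)(?:p(\d+))?", out)
            if not match:
                return None
            major, minor, patch, plevel = match.groups()
            return (int(major), int(minor), int(patch), int(plevel or 0))

        version = installed_sudo_version()
        if version is None:
            print("Could not determine the installed Sudo version.")
        elif version < PATCHED:
            print("Sudo %d.%d.%dp%d predates 1.9.5p2 and may be affected by CVE-2021-3156." % version)
        else:
            print("Sudo appears to be at or newer than the patched 1.9.5p2 release.")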
    All in all, the Baron Samedit vulnerability is one of the rare Sudo security flaws that can be successfully weaponized in the real world, unlike the two bugs disclosed in prior years.
    Qualys told ZDNet that if botnet operators brute-force low-level service accounts, the vulnerability could be abused in the second stage of an attack to help intruders easily gain root access and full control over a hacked server.
    And as ZDNet reported on Monday, these types of botnets targeting Linux systems through brute-force attacks are quite common these days.
    Today’s Sudo update should be applied as soon as possible to avoid unwanted surprises from botnet operators and malicious insiders (rogue employees) alike.

  • in

    Predictive policing is just racist 21st century cyberphrenology

    Image: Chris Duckett/ZDNet
    In 1836, the Scottish geologist, chemist, and “agricultural improver” Sir George Stewart Mackenzie was concerned about what he called the “recent atrocities” of violent crime in the British penal colony of New South Wales, Australia.
    The root cause, he thought, was a failure to manage which criminals were transported to work in the colony — especially the two-thirds of convicts who worked for private masters.
    “At present they are shipped off, and distributed to the settlers, without the least regard to their characters or history,” Mackenzie wrote in a representation [PDF] to Britain’s Secretary for the Colonies, Lord Glenelg.
    For Mackenzie it was a moral question. It was about rehabilitating a criminal regardless of “whether the individual have [sic] spent previous life in crime, or has been driven by hard necessity unwillingly to commit it”.
    Only convicts with the correct moral character should be sent to the colonies, to be brought back to “a course of industrious and honest habits”, he wrote.
    The rest could just rot in British prisons.
    So how did Mackenzie propose to identify these convicts with the correct moral character? By measuring the shape of their heads.

    “In the hands of enlightened governors, Phrenology will be an engine of unlimited improving power in perfecting human institutions, and bringing about universal good order, peace, prosperity, and happiness,” he wrote.
    Yes, in 1836, phrenology was promoted as a cutting-edge science that could predict, among many other things, a person’s likelihood of criminality. Now, of course, we know that it’s complete rubbish.
    Here in the 21st century, predictive policing, or algorithmic policing, makes similarly bold claims about its ability to spot career criminals before they commit their crimes.
    How predictive policing can entrench racist law enforcement
    At its core, predictive policing is simply about using the magic of big data to predict when, where, and by whom crime is likely to be committed.
    The payoff is meant to be a more efficient allocation of police resources, and less crime overall.
    Increasingly, it’s also about ubiquitous facial recognition technology.
    An important player here is the secretive company Clearview AI, a controversy magnet with far-right political links.
    Clearview’s tools have already been used by Australian Federal Police and police forces in Queensland, Victoria, and South Australia, though it took journalists’ investigations and a massive data breach to find that out.
    The Royal Canadian Mounted Police even denied using Clearview’s technology three months after they’d signed the contract.
    The potential payoff to all this isn’t just identifying and prosecuting criminals more efficiently after the fact.
    Increasingly, it’s also the idea that individuals who have been predicted to be potential criminals, or whose behaviour matches some predicted pattern for criminal behaviour, can be identified and tracked.
    At one level, predictive policing simply provides some science-ish rigour to the work of the cops’ own in-house intelligence teams.
    “Looking at crimes like burglary, one can create quite a useful predictive model because some areas have higher rates of burglary than others and there are patterns,” said Professor Lyria Bennett Moses, director of the Allens Hub for Technology, Law and Innovation at the University of New South Wales, last year.
    Cops also know, for example, that drunken violence is more likely in hot weather. An algorithm could help them predict just when and where it’s likely to kick off based on past experience.
    According to Roderick Graham, an associate professor of sociology at Old Dominion University in Virginia, there are more innovative ways of using data.
    Suppose the cops are trying to identify the local gang leaders. They’ve arrested or surveilled several gang members, and through “either interrogation, social media accounts, or personal observation”, they now have a list of their friends, family, and associates.
    “If they see that a person is connected to many gang members, this gives police a clue that they are important and maybe a leader,” Graham wrote.
    “Police have always done this. But now with computer analyses, they can build more precise, statistically sound social network models.”
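    To make that concrete, here is a toy Python sketch of the kind of link analysis Graham describes: count how many known gang members each person in a set of “associate” relationships is directly tied to, then rank them. The names and edges are invented for illustration only; real social network models used by police would be far more elaborate, and would inherit all the data problems discussed below.

        # Toy degree-count illustration: rank people by the number of known gang
        # members they are directly connected to. All names and edges are invented.
        from collections import Counter

        known_members = {"alice", "bob", "carol"}

        # Undirected "associate" edges gathered from interviews, social media, etc.
        edges = [
            ("alice", "dave"), ("bob", "dave"), ("carol", "dave"),
            ("alice", "erin"), ("bob", "frank"),
        ]

        ties_to_members = Counter()
        for a, b in edges:
            if a in known_members:
                ties_to_members[b] += 1
            if b in known_members:
                ties_to_members[a] += 1

        # People tied to many known members score highest; here, dave scores 3.
        for person, score in ties_to_members.most_common():
            print(person, score)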
    But this is where it all starts to get wobbly.
    As American researchers William Isaac and Andi Dixon pointed out in 2017, while police data is often described as representing “crime”, that’s not quite what’s going on.
    “Crime itself is a largely hidden social phenomenon that happens anywhere a person violates a law. What are called ‘crime data’ usually tabulate specific events that aren’t necessarily lawbreaking — like a 911 call — or that are influenced by existing police priorities,” they wrote.
    “Neighbourhoods with lots of police calls aren’t necessarily the same places the most crime is happening. They are, rather, where the most police attention is — though where that attention focuses can often be biased by gender and racial factors.”
    Or as Graham puts it: “Because racist police practices overpoliced black and brown neighbourhoods in the past, this appears to mean these are high crime areas, and even more police are placed there.”
    Bennett Moses gave a distinctly Australian example.
    “If you go to police databases in Australia and look at offensive language crimes, it looks like it is only Indigenous people who swear because there isn’t anyone else who gets charged for it,” she wrote.
    “So you have a bias there to start within the data, and any predictive system is going to be based on historical data, and then that feeds back into the system.”
    Cops don’t want to talk about predictive policing
    In 2017, NSW Police’s Suspect Targeting Management Plan (STMP) singled out children as young as 10 for stop-and-search and move-on directions whenever police encountered them.
    The cops haven’t really explained how or why that happens.
    According to the Youth Justice Coalition (YJC) at the time, however, the data they’ve managed to obtain shows that STMP “disproportionately targets young people, particularly Aboriginal and Torres Strait Islander people”.
    According to an evaluation of STMP in 2020 by the respected NSW Bureau of Crime Statistics and Research, “STMP continues to be one of the key elements of the NSW Police Force’s strategy to reduce crime”.
    The roughly 10,100 individuals subject to STMP-II since 2005, and the more than 1,020 subjected to an equivalent system for domestic violence cases (DV-STMP), were “predominately male and (disproportionately) Aboriginal”, they wrote.
    Yet when compared with non-Aboriginal people, the Aboriginal cohort in the sample saw a “smaller crime reduction benefit”.
    Victoria Police have thrown a veil of secrecy over their own predictive policing tool. They haven’t even released its name.
    The trial of this system only became public knowledge in 2020 when Monash University associate professor of criminology Leanne Weber published her report on community policing in Greater Dandenong and Casey.
    In interviews with young people of South Sudanese and Pasifika background, she heard how, at least in your correspondent’s view, racism is being built into the data from the very start.
    “Many experiences reported by community participants that appeared to be related to risk-based policing were found to damage feelings of acceptance and secure belonging,” she wrote.
    “This included being prevented from gathering in groups, being stopped and questioned without reason, and being closely monitored on the basis of past offending.”
    One participant seemed to nail what was going on: “The police don’t give a reason why they are accusing them. It’s so that the police can check and put it in their system.”
    Victoria Police told Guardian Australia that further details about the tool could not be released because of “methodological sensitivities”, whatever they are.
    It’s telling, however, that this secret tool was only used in Dandenong and surrounding Melbourne suburbs, one of the most disadvantaged and “culturally diverse” regions in Australia.
    More detailed explorations of predictive policing tools put it bluntly, like this headline at MIT Technology Review: Predictive policing algorithms are racist. They need to be dismantled.
    Or as John Lorinc wrote in his lengthy feature for the Toronto Star, “big data policing is rife with technical, ethical, and political landmines”.
    The pushback against predictive policing is underway
    At the global level, the United Nations Committee on the Elimination of Racial Discrimination has warned [PDF] how predictive policing systems that rely on historical data “can easily produce discriminatory outcomes”.
    “Both artificial intelligence experts and officials who interpret data must have a clear understanding of fundamental rights in order to avoid the entry of data that may contain or result in racial bias,” the committee wrote.
    In the UK, the Centre for Data Ethics and Innovation has said that police forces need to “ensure high levels of transparency and explainability of any algorithmic tools they develop or procure”.
    In Europe, the EU Commission’s vice president Margrethe Vestager said predictive policing is “not acceptable”.
    Individual cities have been banning facial recognition for policing, including Portland, Minneapolis, Boston and Somerville in Massachusetts, Oakland, and even tech hub San Francisco.
    At least the phrenologists were open and transparent
    Back in 1836, Mackenzie’s proposal went nowhere, despite his hard sell and offer to prove his plan with an experiment.
    “I now put into your hands a number of certificates from eminent men, confirming my former assertion, that it is possible to classify convicts destined for our penal settlements, so that the colonists may be freed from the risk of having atrocious and incorrigible characters allotted to them, and the colonial public from the evils arising out of the escape of such characters,” he wrote.
    Lord Glenelg, it turns out, wasn’t convinced that phrenology was a thing, and, in any event, he didn’t have the funding for it.
    The irate skull-fondlers expressed their dismay in The Phrenological Journal and Magazine of Moral Science for the year 1838 [PDF], even blaming the colonial governors for the violent crimes.
    “As phrenologists, we must assume (and we assume this, because we speak on the strength of undeniable facts,) that the occurrence of such outrages might be much diminished, if not wholly prevented; and consequently, we must regard those to whom the power of prevention is given, but who refuse to exert that power, as morally guilty of conniving at the most deadly crimes,” they wrote.
    The cops keep drinking the Kool-Aid
    There are three key differences between predictive policing in 2021 and 1836.
    First, the secrecy.
    Mackenzie “unhesitatingly” offered a public test of phrenology in front of Lord Glenelg and “such friends as you may wish to be present”. Today, it’s all confidential proprietary algorithms and police secrecy.
    Second, the gullibility.
    Even in a time of great faith in science and reason, Lord Glenelg was sceptical. These days the cops seem to drink the Kool-Aid as soon as it’s offered.
    And third, the morality, or rather, the lack of it.
    Whatever you may think of Mackenzie’s promotion of what we now know to be quackery, his overall aim was the moral improvement of society.
    He spoke out against the “ignorance of the human constitution” which led rulers to think that “degradation is… the fitting means to restore a human being to self-respect, and to inspire an inclination towards good conduct”.
    Among cops and technologists alike, a coherent discussion of ethics and human rights seems to be lacking. That must be fixed, and fixed soon.

  • OAIC orders Home Affairs to compensate asylum seekers over data breach

    The Office of the Australian Information Commissioner (OAIC) has ordered the Department of Home Affairs, formerly the Department of Immigration and Border Protection, to determine the amount owed to each individual and pay compensation for “mistakenly” releasing the personal information of 9,251 asylum seekers.
    The Australian Information Commissioner and Privacy Commissioner, Angelene Falk, determined that the federal government at the time had “interfered” with the privacy of these individuals by accidentally publishing their full names, nationalities, locations, arrival dates, and boat arrival information on its website in 2014.
    Following the publishing of their personal information, the asylum seekers launched legal action against the department. The asylum seekers in New South Wales, Western Australia, and the Northern Territory claimed the breach exposed them to persecution from authorities in their home countries.
    A total of 1,297 applications were lodged as part of the legal case requesting that compensation be paid because those affected suffered loss or damage due to the data breach.
    The commissioner said the compensation to be paid to participating class members would range from AU$500 to more than AU$20,000 and would be determined on a case-by-case basis.
    “This matter is the first representative action where we have found compensation for non-economic loss payable to individuals affected by a data breach,” she said.
    “It recognises that a loss of privacy or disclosure of personal information may impact individuals and depending on the circumstances, cause loss or damage.”

    The compensation process is expected to take up to 12 months to complete. It will involve ensuring that individuals agree to their compensated amount. If the department and the individual cannot agree on the compensation amount, there will be opportunities to re-assess the payable amount, the OAIC said.
    The OAIC said it would also publish information about the determination in 21 languages to ensure all participating class members are informed about the process so they can finalise their claims. 
    Last week, the OAIC requested amendments to the Privacy Act 1988 that would update its regulatory powers and remove exemptions, such as the one for political parties.
    In a 150-page submission [PDF] to the Attorney-General’s review of the Act, the OAIC made a handful of recommendations, including enhancing its own ability to regulate, which it said would bring its powers in line with “community expectations”. 
    The current Privacy Act positions the regulator to resolve individual privacy complaints through negotiation, conciliation, and determination. The OAIC has described this nearly 33-year-old function as outdated. 
    “This reflects the context in which the Privacy Act was first introduced. In the digital environment, privacy harms can occur on a larger scale. While resolving individual complaints is a necessary part of effective privacy regulation, there must be a greater ability to pursue significant privacy risks and systemic non-compliance through regulatory action,” it said.
    “While Australia’s current framework provides some enforcement powers, these need to be strengthened and recalibrated to deter non-compliant behaviour and ensure practices are rectified.” 

  • ASIC reports server breached via Accellion vulnerability

    The Australian Securities and Investments Commission (ASIC) has said one of its servers was breached on January 15.
    “This incident is related to Accellion software used by ASIC to transfer files and attachments,” the corporate regulator said in a notice posted on the evening before a public holiday.
    “It involved unauthorised access to a server which contained documents associated with recent Australian credit licence applications.”
    ASIC said while some “limited information” has been viewed, it did not see evidence that any application forms were downloaded or opened. The regulator said access to the server has been disabled and it was working on other arrangements.
    “No other ASIC technology infrastructure has been impacted or breached,” it added.
    “ASIC is working with Accellion and has notified the relevant agencies as well as impacted parties to respond to and manage the incident.”
    Accellion was also used as the vector to breach the Reserve Bank of New Zealand (RBNZ) earlier this month.

    “We have been advised by the third-party provider that this wasn’t a specific attack on the Reserve Bank, and other users of the file sharing application were also compromised,” the Bank said at the time.
    In an update posted last week, Bank Governor Adrian Orr said the cause of the breach was “understood and resolved”.
    “Based on the results of our investigation and analysis to date we have been able to tell stakeholders which of their files on the File Transfer Application were downloaded illegally during the breach,” he said.
    “There are some serious questions that have been answered by the team at the Bank and there are more for the supplier of the system that was breached. That is the subject of an independent review by KPMG that is now underway.”
    RBNZ said it was already in the process of implementing a new secure file transfer system to be used with external stakeholders, and that work has been sped up.
    For its part, Accellion said on January 12 that it had been aware of the vulnerability in its legacy File Transfer Application since mid-December, and had released a patch within 72 hours to the “less than 50 customers affected”.
    “Accellion FTA is a 20-year-old product that specialises in large file transfers,” it said.
    “While Accellion maintains tight security standards for its legacy FTA product, we strongly encourage our customers to update to kiteworks, the modern enterprise content firewall platform, for the highest level of security and confidence.”

  • Singapore must return data control to users to regain public trust

    Singapore repeatedly has emphasised the need for trust so the adoption of new technology can thrive, but its provision for widening business access to user data — amidst continuing security breaches and slips — poses worrying risks ahead. There is an urgent need to ensure users have stronger control of their personal data, especially as the government itself will need to restore public trust following a major gaffe involving the country’s COVID-19 contact tracing efforts.
    Singapore in recent years has been opening up access to citizen data as part of efforts to facilitate business transactions and ease workflow. Just last November, the Personal Data Protection Act (PDPA) was updated to allow local organisations to use consumer data without prior consent for some purposes, such as business improvement and research. 

    Amongst the key changes is the “exceptions to the consent” requirement, which allows businesses to use, collect, and disclose data for “legitimate purposes”, business improvement, and a wider scope of research and development. In addition to existing consent exceptions that include for the purposes of investigations and responding to emergencies, these now include efforts to combat fraud, enhance products and services, and carry out market research to understand potential customer segments. 
    Businesses also can use data without consent to facilitate research and development (R&D) that may not yet be marked for productisation. 
    Concerns were raised that the amendments, specifically with regards to exceptions and deemed consent, were too broad and might be abused by organisations. “Legitimate interests”, for instance, can be viewed from an organisation’s perspective and its assessment subjective when considering whether these interests outweigh potential adverse effects on an individual, which is a requirement outlined in the amendment.
    And while individuals still can withdraw consent after the opt-out period, how can they do so when they’re not even aware they’ve been opted in to begin with? Under the “exceptions to consent” rule, are businesses required to inform consumers their data will be used and how it will be used? 
    Singapore’s Communications and Information Minister S. Iswaran has explained that data is a key economic asset in the digital economy as it provides valuable insights that inform businesses and generate efficiencies. It also empowers innovation and enhances products, and will be a critical resource for emerging technologies such as artificial intelligence.

    I totally get that; after all, access to data is what powers APIs (application programming interfaces) and fuels market competition.
    However, consumers need to be given the ability to decide who and how they want their own data to be accessed because for-profit businesses, when given a free buffet, will inevitably seek to grab as much as they can.
    My bank, for instance, is planning to phase out use of its physical token as a two-factor authentication option and transition fully to digital tokens. This means customers like me will be forced to download the bank’s mobile app, with which the digital token is integrated, just to authenticate my identity and access any of its online banking services. 
    The key frustration here is that the bank’s app wants a whole host of permissions including the ability to read my contacts details as well as access to my phone’s Bluetooth settings and location data. 
    Any external access to my personal data should be restricted to a need-to-have-only basis. I deem this practice essential in mitigating my security risks, especially as cyber threats are increasingly sophisticated and data breaches seemingly inevitable. 
    If major companies such as Lazada’s RedMart and Grab can overlook security loopholes that resulted in breaches and compromised customers’ data, what else are smaller businesses with much more limited resources failing to plug, even as they collect more of consumers’ personal information? 
    And what happens when companies decide to modify their data use and privacy policies? This often occurs after an acquisition, such as Facebook’s purchase of WhatsApp, and we know these businesses don’t always keep their pledge to maintain the status quo on customer data after a merger. Sure, users furious over WhatsApp’s privacy policy change can move to alternatives such as Signal and Telegram, but what happens when the alternatives get bought out by another market giant like Google, Apple, or Microsoft?
    Ill thought-out business decisions and security lapses can erode confidence and when consumers no longer trust that their personal data will be protected and used responsibly, they will pull back on adopting new digital services and technologies. And this can have adverse economic as well as social impact.
    Lessons from TraceTogether privacy debacle
    Singapore should know this best, since public trust took a severe hit when it was revealed the country’s COVID-19 contact tracing data could, in fact, be accessed for various purposes other than for its original intent. 
    The government early this month admitted law enforcers could use TraceTogether data to aid in their criminal investigations, contradicting previous assertions that contact tracing information would only be accessed if the user tested positive for the virus.
    The revelation triggered much public outcry, with some threatening to circumvent the data collection by deactivating the TraceTogether app, turning off their phone’s Bluetooth connection, or placing their device including the TraceTogether token into an RFID-blocking pouch. 

    Much already has been said about the whole saga so I won’t comment on it further, but there are important lessons here for everyone, especially the government. 
    Foremost, it must now realise that large sections of the local population do care about their personal data and privacy, and will choose to defend both when they’re able to. This should send a strong signal that serious, rather than token (pardon the pun), consideration is needed with regards to how citizens’ data is treated before policies are rolled out.
    There clearly needs to be a mindset change in how the government operates and works on nationwide projects. A multi-ministry taskforce had been set up to deal with the COVID-19 pandemic, with contact tracing efforts often taking centrestage. Yet months passed after TraceTogether was launched without any of the ministries, or even the police, which presumably would be more familiar with the Criminal Procedure Code, raising the alarm that repeated public statements about the use of contact tracing data had failed to account for the exception for criminal investigations.
    At worst, this could be perceived — even if wrongly — as a deliberate attempt to deceive the public. At best, it would indicate gross carelessness and lack of communication between the different ministries and government agencies tasked to work on critical national initiatives, such as the COVID-19 pandemic.
    The TraceTogether privacy saga further demonstrates the need for users to have stronger ownership of their own data, so they can continuously ask questions about how their personal information is collected, stored, and used, as well as take active steps to safeguard their own cyber hygiene. 
    Because if they don’t, it’s clear that businesses as well as the government should not be expected to be able to do so effectively on their behalf. What other loopholes and potential security gaps have been overlooked that can potentially lead to serious data breaches down the road?
    Such risks can be better mitigated if users were let in on efforts to manage their own data and empowered to decide for themselves whether businesses should, or should not, have access to all or some of their personal data. 
    In addition, every announcement about new policies that involve access to citizens’ data should be accompanied by a security factsheet detailing exactly how access will be protected and data stored and safeguarded. Declarations about the need to secure data should be more than lip service and go beyond brief one or two liners, uttered merely as ‘business as usual’ attempts to address security concerns.
    “People, Process, Technology.” Isn’t that the basic framework oft cited by businesses and governments as critical to successful adoption? Establishing the processes and technology will mean nothing if users aren’t properly equipped to help defend their own data.

  • F5 Networks fiscal Q1 revenue, profit beat expectations, revenue outlook higher as well

    Application security pioneer F5 Networks this afternoon reported fiscal Q1 revenue and profit that topped analysts’ expectations, and forecast this quarter’s revenue higher than consensus but profit slightly below it, sending its shares sharply lower in late trading.
    Revenue in the three months ended in December rose to $625 million, yielding EPS of $2.59. 
    Analysts had been modeling $623 million and $2.45 per share. 
    The results compare to a raised forecast of $623 million to $626 million in revenue offered two weeks ago, when the company announced it would acquire privately held Volterra of Santa Clara, California, a maker of distributed multi-cloud application security and load-balancing software.
    For the current quarter, the company sees revenue in a range of $625 million to $645 million, higher than the consensus for $621 million; and EPS in a range of $2.32 to $2.44, slightly below consensus for $2.41. 
    F5 shares are down about 3% at $203 in after-hours trading and had initially dropped as much as 6%.


  • Apple fixes another three iOS zero-days exploited in the wild

    Apple today released security updates for iOS to patch three zero-day vulnerabilities that were exploited in the wild.

    All three zero-days were reported to Apple by an anonymous researcher.
    One impacts the iOS operating system kernel (CVE-2021-1782), and the other two are in the WebKit browser engine (CVE-2021-1870 and CVE-2021-1871).
    The iOS kernel bug was described as a race condition bug that can allow attackers to elevate privileges for their attack code.
    The two WebKit zero-days were described as a “logic issue” that could allow remote attackers to execute their own malicious code inside users’ Safari browsers.
    Security experts believe the three bugs are part of an exploit chain where users are lured to a malicious site that takes advantage of the WebKit bug to run code that later escalates its privileges to run system-level code and compromise the OS.
    However, official details about the attacks where these vulnerabilities were used were not made public, as is typical with most Apple zero-day disclosures these days.

    The three bugs today come after Apple patched another set of three iOS zero-days in November last year. The November zero-days were discovered by one of Google’s security teams.
    News of another set of iOS zero-days also came to light in December when Citizen Lab reported attacks against Al Jazeera staff and reporters earlier in 2020. These iOS zero-days were inadvertently patched when Apple released iOS 14, an iOS version with improved security features.