More stories

  • Get paid to improve Linux and open-source security

    Linux and open-source software are much easier to secure than proprietary software. As open-source movement co-founder Eric S. Raymond pointed out with Linus’s Law: “Given enough eyeballs, all bugs are shallow.” But it takes eyeballs actually looking for bugs to make that work. Jim Zemlin, the Linux Foundation (LF)’s executive director, said in the aftermath of the Heartbleed and Shellshock security fiascos: “In these cases, the eyeballs weren’t really looking.” To help remedy this, David A. Wheeler, the LF’s director of Open Source Supply Chain Security, recently revealed that the LF and its related foundations and projects directly fund people to do security work. Here’s how it works.

    The funding comes from a variety of pro-Linux and open-source organizations, including Google, Microsoft, the Open Source Security Foundation (OpenSSF), the LF Public Health foundation, and the LF itself. When a problem is found, a developer reaches out to the appropriate LF organization. Generally speaking, a contract is then set up that briefly describes what problem needs to be fixed, how it will be done, the funds required, and who will do the work. The proposal is examined by the appropriate LF technical review point of contact (POC), who is often Wheeler himself. Once a project is approved, progress reports are made approximately once a month. These must include:

    • A stable URL of a publicly accessible post (e.g., a blog or archived mailing list post) describing what you did that month.
    • The post must briefly describe what has been accomplished using the funding since the last invoice. Include its date and hyperlinks to details. If git commits were involved, include hyperlinks to them. Make it easy for technical people to learn the details (e.g., via hyperlinks).
    • Also briefly describe why this work is important, or link to such a description, for someone who is not intimately familiar with it. Some readers may see your post out of context.
    • Give credit, similar to National Public Radio (e.g., “This work was [partially] funded by the OpenSSF, Google, and The Linux Foundation.”). Thanking others is always polite. We also want people to consider funding OSS security as normal.
    • Publicly provide an identifier (a personal name, pseudonym, or project name) of who’s doing the work. This simplifies referring to the work. You do not need to reveal your personal name(s) publicly, though you’re welcome to do so.

    This is a lightweight process; it shouldn’t take more than 20 minutes to write these reports, and you may find it easier to write your post while you do the work. Funded work must be available under the appropriate open-source licenses. For example, bug fixes to Linux must be licensed under the GNU General Public License version 2 (GPLv2). The POC will then review the post and, if it seems reasonable, approve the payment. Wheeler explained: “We understand that sometimes problems arise. We just want to see credible efforts. If there’s a serious roadblock, try to suggest ways to overcome it or provide partial/incremental benefits. We need to provide confidence to funders that we aren’t wasting their money.”
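    For illustration only, here is a hypothetical sketch of what such a monthly progress post might look like, covering just the elements listed above. The project, date, figures, and funders named are placeholders, not drawn from any real report:

    ```
    Monthly progress post -- (hypothetical) libexample fuzzing project

    Since the last invoice (dated 2021-07-15), I:
      * added 12 new fuzz targets (with hyperlinks to the relevant git commits), and
      * triaged and fixed two memory-safety bugs the fuzzers found (with links to details).

    Why this matters: libexample parses untrusted input for many downstream
    projects, so memory-safety bugs in it are directly reachable by attackers.

    This work was [partially] funded by the OpenSSF and The Linux Foundation.
    Work performed by: J. Developer (a personal name, pseudonym, or project name).
    ```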

    So, what kind of projects are we talking about? Wheeler cites several examples. These include:

    • Ariadne Conill, the Alpine Linux security team chair, is improving this important container Linux distro’s security. In particular, Conill has improved its vulnerability processing and made it reproducible. For example, this resulted in Alpine 3.14 being released with the lowest open vulnerability count in a final release in a long time.
    • On Git, the vital distributed version control system, David Huseby has been working on modifying git to have a much more flexible cryptographic signing infrastructure. This will make it easier to verify the integrity of software source code.
    • It’s not just Linux-related programs that get security help. Theo de Raadt, founder and leader of OpenBSD and OpenSSH, has received funding to secure OpenSSH’s plumbing. OpenSSH is an important suite of networking utilities based on the Secure Shell (SSH) protocol. De Raadt has also been funded to help secure Resource Public Key Infrastructure (RPKI), which protects internet routing protocols from attack.

    Besides fixing known problems, the LF and company are also looking for security troubles we don’t know about yet. That’s being done with security audits via the Open Source Technology Improvement Fund (OSTIF). These projects include two Linux kernel security audits: one for signing and key management policies, and the other for vulnerability reporting and remediation. Subject matter experts perform the audits, while Wheeler ensures the resulting reports are clear to non-experts while still being accurate.

    Looking ahead, OpenSSF is also working on improving overall open-source software security. Its efforts include free courses on how to develop secure software and the CII Best Practices badge project. Other projects that improve OSS security include sigstore, which is making cryptographic signatures much easier, and work on improving software bills of materials (SBOMs).

    If you’d like to help pay for this kind of work, the LF wants to hear from you. You can contribute to the OpenSSF by simply contacting the organization, or, if you’d rather, you can create a grant directly with the Linux Foundation itself. If you have questions, just email Wheeler at [email protected]. For smaller amounts — say, to fund a specific project — you can also use the LFX crowdfunding tools to fund or request funding.

    Having trouble with the business side of funding security coding and audits? You’re not alone. As Wheeler said: “Many people and organizations struggle to pay individual open-source software developers because of the need to handle taxes and oversight. If that’s your concern, talk to us. The LF has experience and processes to do all that, letting experts focus on getting the work done.”

  • Palo Alto beats Q4 estimates with strength in large customer transactions

    Palo Alto Networks on Monday reported better-than-expected fourth quarter financial results, highlighting “notable strength in large customer transactions.” As many as 18 customers signed 8-figure transactions in Q4, the company said.
    Non-GAAP net income for the fourth quarter was $161.9 million, or $1.60 per diluted share. Fourth quarter revenue grew 28% year-over-year to $1.2 billion. Analysts were expecting earnings of $1.43 per share on revenue of $1.17 billion. For the full fiscal year 2021, revenue grew 25% to $4.3 billion.

    “Our strong Q4 performance was the culmination of executing on our strategy throughout the year, including product innovation, platform integration, business model transformation and investments in our go-to-market organization,” chairman and CEO Nikesh Arora said in a statement. “In particular, we saw notable strength in large customer transactions with strategic commitments across our Strata, Prisma and Cortex platforms.”
    Fourth quarter billings grew 34% year-over-year to $1.9 billion. Fiscal year 2021 billings grew 27% to $5.5 billion.

    Deferred revenue grew 32% year-over-year to $5 billion, while remaining performance obligation (RPO) grew 36% to $5.9 billion. For Q1 2022, Palo Alto expects revenue in the range of $1.19 billion to $1.21 billion. Analysts are expecting revenue of $1.15 billion. For the fiscal year 2022, the company expects revenue in the range of $5.275 billion to $5.325 billion.


  • Microsoft Power Apps misconfiguration exposes 38 million data records

    Sensitive data including COVID-19 vaccination statuses, social security numbers and email addresses has been exposed due to weak default configurations for Microsoft Power Apps, according to UpGuard. UpGuard Research disclosed multiple data leaks exposing 38 million data records via Microsoft Power Apps portals configured to allow public access. The data leaks impacted American Airlines, Microsoft, J.B. Hunt and the governments of Indiana, Maryland and New York City. UpGuard first discovered the issue, involving the OData API for a Power Apps portal, on May 24 and submitted a vulnerability report to Microsoft on June 24. According to UpGuard, the primary issue is that all data types were public when some data, like personal identifying information, should have been private; the misconfiguration led to some private data being surfaced. Microsoft Power Apps is a low-code platform for designing apps and creating public and private websites.
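    To make the misconfiguration concrete, here is a minimal sketch of checking whether a portal’s OData feed answers anonymous requests, which is the exposure class UpGuard described. The portal URL and list name are placeholders, not real endpoints, and the /_odata path is assumed to point at an OData-enabled list on the portal:

    ```python
    # Hypothetical check: does a Power Apps portal OData feed return data anonymously?
    # The portal URL and list name below are placeholders for illustration only.
    import requests

    PORTAL = "https://example.powerappsportals.com"  # placeholder portal address
    ENTITY_LIST = "contacts"                         # placeholder OData-enabled list

    resp = requests.get(f"{PORTAL}/_odata/{ENTITY_LIST}", timeout=10)

    if resp.ok and "json" in resp.headers.get("Content-Type", ""):
        # Standard OData responses wrap rows in a "value" array.
        records = resp.json().get("value", [])
        print(f"Anonymous request returned {len(records)} records -- review table permissions.")
    else:
        print(f"Feed did not return data anonymously (HTTP {resp.status_code}).")
    ```

    As the article notes, the root cause was permissive defaults; the practical remediation is to require table permissions on any list whose feed should not be public, rather than relying on those defaults.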

  • Singapore, US pledge deeper collaboration in cybersecurity

    Singapore and the US have inked a series of Memorandums of Understanding (MOUs) to widen their collaboration in cybersecurity across the defence, finance, and research and development domains. Such initiatives will encompass further information sharing, joint exercises, training, and competency development. Three MOUs were signed Monday as part of US Vice President Kamala Harris’ three-day visit to the Asian nation this week. One of these involved an agreement between Singapore’s Cyber Security Agency (CSA) and the US Cybersecurity and Infrastructure Security Agency (CISA) to deepen cooperation in cybersecurity beyond data sharing and exchanges. The two government agencies would look to include new areas of cooperation in critical technologies as well as research and development, amongst others.

    CSA’s chief executive David Koh noted that both countries shared “deep mutual interests” in enhancing cybersecurity cooperation, particularly as cybersecurity is now a key enabler, with the two nations leveraging digitalisation to grow their respective economies and enhance their populations’ lives.

    CISA Director Jen Easterly said: “Cyber threats don’t adhere to borders, which is why international collaboration is a key part of the Biden-Harris administration’s approach to cybersecurity. The MOU allows us to strengthen our existing partnership with Singapore, so that we can more effectively work together to collectively defend against the threats of today and secure against the risks of tomorrow.”

    In a second MOU, inked between Singapore’s Ministry of Defence (Mindef) and the Singapore Armed Forces (SAF) on one side and the US Department of Defense on the other, both countries would aim to collaborate on various cyberspace initiatives. These would include efforts to establish “mutual understanding” and data-sharing, as well as cooperation in “capacity-building”. Singapore’s Chief of Defence Force Lieutenant-General Melvyn Ong said: “This MOU on cyberspace cooperation between Singapore and US defence establishments is an important step in formalising our cyber cooperation, and a reflection of our continued commitment to expand our defence collaboration in more areas. We look forward to cooperating with the US in this complex cybersecurity landscape”.

    According to Mindef, both nations have had “extensive” defence engagements that included military-to-military exchanges, training, and defence technology collaboration. These encompassed previous agreements such as the 2005 Strategic Framework Agreement, which recognised Singapore as a major security cooperation partner, and the 2015 Enhanced Defence Cooperation Agreement, which widened defence cooperation across various security areas, including cyberdefence and biosecurity.

    A third MOU involved the Monetary Authority of Singapore (MAS) and the US Treasury Department, and aimed to further drive initiatives in cybersecurity and strengthen bilateral institutional partnerships. Both financial agencies have been exchanging cyber threat information since 2018, they said in a joint statement Monday. The agreement encompassed collaboration in various areas, including information-sharing on cybersecurity regulations and incidents as well as threat intelligence, employee training to drive collaboration in cybersecurity, and competency-building initiatives such as cross-border cybersecurity exercises. US Secretary of the Treasury Janet L. Yellen noted that the cybersecurity cooperation agreement would enhance the cyber resilience of both countries’ financial systems.

    MAS Managing Director Ravi Menon said: “Given the growing complexity of cyber attacks and how interconnected the global financial system is, close cooperation is essential to ensure the cyber resilience of our financial systems. This MoU between the US Treasury and MAS will be particularly useful in the areas of cyber threat information-sharing and cross-border cybersecurity exercises.”

  • 446 Australian breach notifications with 30% of system faults found after a year

    The health services industry has continued to be the sector responsible for the highest number of reported data breaches in Australia, accounting for 85 of the 446 total breaches notified to the Office of the Australian Information Commissioner (OAIC) in the six months to 30 June 2021. The 446 total is down 16% compared to the previous six months’ figure of 530 notifications. For the 2020-21 financial year, 976 notifications were received under the Notifiable Data Breaches (NDB) scheme. March saw the highest number of notifications, with 102.

    In the reporting period, 81% of breaches were identified by the entity within 30 days of occurring, but in 4% of cases it took the entity longer than 365 days. “For data breaches caused by malicious or criminal attack or human error, more than 80% of entities identified the incident within 30 days of it occurring,” the OAIC wrote. “Where entities experienced a data breach resulting from a system fault, only 61% identified the incident within 30 days, and 30% did not become aware of the incident for over a year.”

    In the reporting period, 72% of entities notified the OAIC within 30 days of becoming aware of an incident that was subsequently assessed to be an eligible data breach, while 27 entities took longer than 120 days from when they became aware of an incident to notify the OAIC. Among Australian government agencies reporting an incident, 71% found it within 30 days, 9% took over a year to find it, and 3% took over a year to notify the OAIC.

    Since the mandate began, health has been the most affected sector. Coming in second to health this half was the finance sector, which accounted for 57 notifications, followed by legal and accounting with 35, and the Australian government and insurance sectors each with 34. The Australian government entered the top five sectors in the first half of FY21.

    All agencies and organisations in Australia that are covered by the Privacy Act 1988 are required to notify individuals whose personal information is involved in a data breach that is likely to result in “serious harm”, as soon as practicable after becoming aware of a breach. The Privacy Act covers most Australian government agencies; it does not cover a number of intelligence and national security agencies, nor does it cover state and local government agencies, public hospitals, and public schools.

    In its latest six-month report [PDF] capturing notifications made under the NDB scheme, the OAIC said most data breaches involved the personal information of 5,000 individuals or fewer. Three notifications affected over 1 million individuals, with one affecting over 10 million individuals.

    Contact information, identity information, and financial details continue to be the most common types of personal information involved in data breaches. 407 — or 91% — of breaches notified under the scheme involved contact information, such as an individual’s name, home address, phone number, or email address. 247 instances saw the breach of identity information, 193 exposed financial information, 136 involved health information, tax file numbers were exposed in 102 breaches, and other sensitive information was compromised on 75 occasions.

    Malicious or criminal attacks were the largest source of data breaches notified to the OAIC, accounting for 289 breaches. Of those, 192 were caused by “cyber incidents”, 35 resulted from social engineering or impersonation, 28 were caused by the actions of a rogue employee or insider threat, and theft of paperwork or storage devices was responsible for 34 notifications. The report says human error also remained a major source of breaches, accounting for 134 notifications, while system faults accounted for the remaining 23 breaches. Human error breaches include sending personal information to the wrong recipient via email, unintended release or publication of personal information, and failure to use the blind carbon copy function when sending group emails. Unauthorised disclosure through unintended release or publication occurred in 31 notifications; this alone affected 523,998 individuals.

    The Australian government did not report any incidents pertaining to system faults, but reported 25 as human error and nine as a malicious or criminal attack. The Australian government also reported one incident as “hacking”.

    The top sources of cyber incidents during the reporting period were phishing, compromised or stolen credentials, and ransomware. “More than half of cyber incidents (62%) during the reporting period involved malicious actors gaining access to accounts using compromised or stolen credentials,” OAIC said. “The most common method used by malicious actors to obtain compromised credentials was email-based phishing (58 notifications).” Ransomware incidents increased by 24% in the second half of the year, up from 37 in the first half to 46.

    Chart: Data breach notifications under the NDB scheme since inception (Image: OAIC)
    Need to disclose a breach? Read this: Notifiable Data Breaches scheme: Getting ready to disclose a data breach in Australia

  • UK competition authority raises alarm over Nvidia and Arm merger

    The United Kingdom competition authority said it has uncovered competition concerns with Nvidia’s proposed acquisition of the intellectual property business of UK-based Arm, following an initial investigation that was sparked by national security concerns.

    In delivering its report to the Secretary of State for Digital, Culture, Media and Sport (DCMS), the Competition and Markets Authority (CMA) outlined that the merged business would have the ability and incentive to harm the competitiveness of Nvidia’s rivals by restricting access to Arm’s intellectual property (IP). Currently, Arm’s IP is used by companies to produce semiconductor chips and related products that rival products produced by Nvidia. These companies include Intel, Qualcomm, AMD, and Xilinx, which recently expressed outrage over the deal. The CMA noted that if the proposed merger were to go ahead, it would result in “foreclosure in the supply of CPUs, interconnect products, GPUs, and SoCs across several global markets, spanning the datacentre, internet-of-things, automotive, and gaming console applications”.

    In addition, the report said that while Nvidia offered a set of “behavioural remedies” to address the CMA’s concerns, the competition authority found the suggestions would only result in “considerable specification, circumvention, and monitoring and enforcement risks”, and would not alleviate any of its concerns.

    “We’re concerned that Nvidia controlling Arm could create real problems for Nvidia’s rivals by limiting their access to key technologies, and ultimately stifling innovation across a number of important and growing markets. This could end up with consumers missing out on new products, or prices going up,” CMA boss Andrea Coscelli said. “The chip technology industry is worth billions and is vital to products that businesses and consumers rely on every day. This includes the critical data processing and datacentre technology that supports digital businesses across the economy, and the future development of artificial intelligence technologies that will be important to growth industries like robotics and self-driving cars.”

    The CMA also advised the DCMS that further investigation into the planned merger was warranted. “The majority of customers and competitors that responded to the CMA’s investigation in relation to general-purpose personal computers also raised vertical foreclosure concerns … the CMA has not been able to investigate this area sufficiently … the CMA believes that this is an area which may warrant further examination in any phase 2 investigation,” it said in its report.

    The US chipmaker giant announced last September that it was going to purchase Arm from Softbank in a controversial deal worth $40 billion. At the time, Nvidia founder and CEO Jensen Huang told journalists that the companies were “completely complementary”.

    “Nvidia doesn’t design CPUs, we have no CPU instruction set, Nvidia doesn’t license IP to semiconductor companies, so, and in that way, we’re not competitors. We have every intention to add more IP tools and also, unlike Arm, Nvidia does not participate in the cell phone market,” he said. “Our intention is to combine the engineering and the tech — the R&D capacity of both companies so that we can accelerate the development of technology for Arm’s vast ecosystem, and one of the areas … that we’re very interested in is to accelerate the development of server CPUs.”

    Arm’s president of IP Products Group Rene Haas has also previously assured there would be a “firewall” between the two companies, adding that they would not give any early access to Nvidia. But Haas later admitted that Arm would have to share certain information with Nvidia, such as if large customers move to RISC-V, an open-source competitor to Arm.

  • NextDC joins Fujitsu and Equinix as latest certified to store Canberra's sensitive data

    The Digital Transformation Agency (DTA) has added three further providers to its list of certified players to store sensitive data locally. Added this week is NextDC, which joins the recently added Equinix and Fujitsu.

    NextDC has its Perth 1 and 2, Sydney 1 and 2, Melbourne 1 and 2, Brisbane 1 and 2, and Canberra 1 facilities classed as certified against the requirements defined in the Hosting Certification Framework. Equinix Australia has its CA1, SY3, SY4, SY5, SY6, SY7, PE2, and ME4 facilities certified, while Fujitsu Australia’s Western Sydney and Homebush facilities have been accepted by the DTA.

    The DTA is the government’s certifying authority for the Hosting Certification Framework. The framework aims to operationalise the principles outlined in the whole-of-government hosting strategy, and to support the secure management of government systems and data. “The framework will assist agencies to mitigate against supply chain and data centre ownership risks, and enable them to identify and source appropriate hosting and related services,” the DTA claims.

    In June, the DTA certified Australian Data Centres, Canberra Data Centres, and Macquarie Telecom’s Canberra Campus as the initial three providers to store sensitive government data.

    The Australian Signals Directorate (ASD) shuttered the government’s cloud certification program in July 2020, after an independent review recommended the system be reworked. ASD cloud services certifications, and consequently all services listed on the Certified Cloud Services List, became void. In their place is the Cloud Security Guidance, which aims to guide organisations, including government, cloud service providers, and Information Security Registered Assessors Program assessors, on how to perform a “comprehensive assessment of a cloud service provider and its cloud services so a risk-informed decision can be made about its suitability to handle an organisation’s data”.

  • IBM finds ASX outage the result of trade platform not being ready for go-live

    The Australian Securities Exchange (ASX) experienced “software issues” when it went live with the refresh of its equity trading platform in November last year, causing the exchange to pause trade.

    At the time, the exchange said its technology provider Nasdaq, as well as customers and independent specialist third parties, had conducted extensive testing for over a year on the ASX Trade system, including four dress rehearsals, in preparation for sending it out into the wild. The tech used, it said, was the latest generation of a Nasdaq-developed trading system used around the world. Following the outage, the Reserve Bank of Australia (RBA) and the Australian Securities and Investments Commission (ASIC) requested an independent review, and the ASX saw fit to hand this responsibility to IBM.

    On Monday, IBM served the ASX with 17 recommendations and found a number of shortcomings in the project, including that the trade platform was not ready for go-live. “Factors that suggested the ASX Trade system was not ready to go-live considering ASX’s near zero appetite for service disruption. This was the case even though the formal implementation readiness processes were completed and verified by multiple parties without objection to go-live,” IBM found.

    “There were gaps in the rigour applied to the project delivery risk and issue management process expected for a project of this nature, and risk and issue management, project compliance to ASX practices, project requirements and the project test strategy/planning did not meet accepted industry practices. It was not reasonable to expect the test plan used would meet the ASX’s near zero appetite for service disruption.”

    According to Big Blue, there were seven factors that suggested the platform was not ready for go-live. These included historical software product quality indicators, additional testing needs being noted, the quantity of open defects, gaps in end-to-end test coverage, proximity to year-end change freeze windows for participants, risk likelihood ratings, and a lack of evidence of challenges to the risk rating or to go-live.

    “Last November’s market outage fell short of ASX’s high standards,” ASX MD and CEO Dominic Stevens said on Monday. “We believed that the software was ready for go-live, as did our technology provider Nasdaq. Clearly there were issues, which was particularly disappointing given the significant progress we have made on resilience in recent years.”

    IBM also concluded the project could have benefited from additional and independent scrutiny. It determined there were gaps in the rigour applied to the project delivery risk and issue management process, such as opportunities to identify additional risks being missed, differences between project delivery risk templates and the enterprise delivery risk processes, the project not receiving risk resources with greater experience in technical projects that it would have benefitted from, and governance being shifted to a group that had a wide range of responsibilities. “The shift diluted attention given to the project,” IBM said.

    The review found some positives, however, with IBM saying the ASX met or exceeded leading industry practices in 58 out of 75 of the capabilities assessed.

    “We acknowledge the findings in the report. It’s pleasing that ASX met or exceeded leading industry practices in most areas. But the report does point to some important areas for improvement and we will address all of its recommendations,” Stevens added. “ASX is well advanced in developing a detailed response plan for execution over the next 12 to 18 months, and we’ll commission the independent expert to review our actions to meet its recommendations. Our delivery of this program of work will be under the oversight of ASIC and the RBA.”

    IBM said the project’s business case development and project change management exceeded accepted practices; that the project was provided with, and had access to, sufficient financial, time, people, and technological resources at all stages of delivery to meet its objectives; that communications with key stakeholders were appropriately managed by the ASX; and that incident management actions taken by the exchange were appropriate.

    The exchange in 2018 was asked to improve its risk management practices following an “unprecedented” hardware failure in September 2016 that resulted in the outage of its equity market. According to ASIC, the actions taken by ASX during the 2020 incident were appropriate and reflected the lessons learned from the 2016 incident.

    “ASX takes the resilience and reliability of its markets extremely seriously. That’s why we immediately engaged with our regulators to commission this external review and will address all of its recommendations. It’s also why we’ve already taken action to change our project delivery practices,” Stevens continued. “The changes we’ve made to our management structure are aligned to these objectives. Driving technological change is hard and creates transition risk. No market will operate without incidents or outages from time to time. Nevertheless, all outages are regrettable.”

    The regulators expect ASX to apply the insights from IBM’s findings across the exchange to ensure existing and proposed projects, including the CHESS replacement program, are managed and implemented appropriately. ASIC is also undertaking a separate investigation into the ASX Trade outage to determine whether ASX met its obligations under its Australian Market Licence.