More stories

  • Windows security: 20 years on from Bill Gates’ Trustworthy Computing memo, how much has changed?

    It’s almost 20 years since then-Microsoft boss Bill Gates wrote his famous Trustworthy Computing memo, in which he urged the company to produce more secure software. “Eventually, our software should be so fundamentally secure that customers never even worry about it,” wrote Gates. It’s a grand ambition, and despite years of work, it is not one that any software has really achieved yet. And even as engineers try to improve their products, a new wave of security threats has appeared.


    “I think it was hard for anyone at the time – even in Bill Gates’ grand vision – to see we would have sophisticated state-sponsored hackers breaking those SWIFT banking system codes, people flattening oil production by wiping hard drives. The threat landscape is beyond any science fiction novel or what John le Carré could predict,” says Dave Weston, Microsoft’s director of enterprise and Windows security.

    He admits that, as a “hardened industry professional”, he is surprised by the sophistication of attacks today. “The breadth and sophistication [of these attacks] is what continues to make this job interesting. There is never a dull moment here,” he says.

    “Fifteen years ago we were thinking of these attackers as basically script kiddies – people sitting in their parents’ basements on the weekend doing things for mischievous reasons. That was the archetype 15 years ago. The archetype now is somebody who is working in the military-industrial complex, who works in an office.” That’s a pretty stark contrast, Weston points out.

    “If we’re up against that, are we in a better position? I would say, unequivocally, yes. Twenty years ago, the price of an exploit was cheap. Now when you’re talking about Windows 10 or 11, or browsers, you’re talking millions of dollars to acquire an exploit.”

    The difference between those two points is the level of defenses in the operating system, he argues. “The reality is today that there are a smaller number of people who can attack a Windows PC than there was 10 or 15 years ago, and I think that in itself is a triumph.”

    That increasing threat level is one issue; the security goalposts themselves have also been shifting rapidly. Back in 2002 when Gates wrote his memo, the focus of security was all about the software: he didn’t even mention hardware or CPUs. Today, with an uptick in zero-day exploits, CPU attacks like Meltdown and Spectre, and more, Windows security is much more concerned with hardware.

    For example, in Windows 10 and Windows 11, Microsoft has brought in Control-flow Enforcement Technology (CET), a security mitigation it co-developed with Intel. CET is an on-chip technology that targets some of the most common attack vectors, such as return-oriented programming, says Weston. It’s available on Intel 11th Gen or AMD Zen 3 CPUs. Virtualization-based security, dubbed VBS at Redmond, restricts techniques used in the WannaCry ransomware attack by hardening the Windows kernel.

    Windows 11 also promises to make the goal of ‘Zero Trust’ – the concept of borderless networks that the Biden White House is pushing – easier by reducing the amount of configuration required for Windows endpoints. But, as Weston highlights, organizations will need to run some numbers to figure out whether to upgrade hardware and migrate to Windows 11, or to reconfigure PCs and servers that only meet the bar for Windows 10. On Windows 11, admins don’t need much to configure that security; with Windows 10 they can create the same level of security, but with a bit more work.

    Organizations that adopt Zero Trust assume their perimeter has already been breached. The model also recognizes that data needs to be protected within and outside the network, on corporate-issued and employee-owned devices. Zero Trust has become more pertinent since the pandemic forced many more people into remote working.

    Weston, however, contends that Windows 11 does make it easier for businesses, assuming they have new hardware suitable for it. “Where the hardware fits in, we’ve been working to make sure things can be turned on by default when you meet the hardware baseline. We’re expecting a certain level of performance and reliability from recent drivers and hardware pieces. That allows us to turn on more, by default, with confidence. That’s where the hardware piece fits in with Zero Trust,” he says.
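    To make the return-oriented programming point concrete: CET’s shadow stack keeps a CPU-protected copy of each return address and faults when the copy on the ordinary stack has been tampered with, which is exactly the step a ROP chain depends on. Here is a toy model of that mechanism in Python, purely illustrative and not Microsoft’s or Intel’s implementation:

```python
# Toy model of a shadow stack, the mechanism behind Intel CET's
# defence against return-oriented programming (ROP). Illustrative only.

class ShadowStackViolation(Exception):
    """Raised when a return address has been tampered with."""

class Machine:
    def __init__(self):
        self.stack = []         # normal call stack (attacker-writable)
        self.shadow_stack = []  # hardware-protected copy

    def call(self, return_address: int) -> None:
        # On CALL, the return address is pushed to both stacks.
        self.stack.append(return_address)
        self.shadow_stack.append(return_address)

    def ret(self) -> int:
        # On RET, the two copies must match; a mismatch means the
        # in-memory return address was overwritten (e.g. by a ROP chain).
        addr = self.stack.pop()
        expected = self.shadow_stack.pop()
        if addr != expected:
            raise ShadowStackViolation(f"return to {addr:#x}, expected {expected:#x}")
        return addr

m = Machine()
m.call(0x401000)
m.stack[-1] = 0xdeadbeef  # simulate a stack-smashing attack
try:
    m.ret()
except ShadowStackViolation as e:
    print("blocked:", e)
```

    In real hardware the shadow stack lives in memory pages that ordinary writes cannot touch, so the attacker can corrupt the first copy but not the second.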

    But will customers be left behind because of hardware? “The answer is firmly ‘no’,” insists Weston. Even if organizations want to stay with Windows 10, many features like Windows Hello, Virtualization-Based Security and Secure Boot are still available, he says; you’ve just got to turn them on and evaluate your own environment.

    “If you’ve got the hardware, you can install Windows 11, things are simple. If you don’t have that hardware or that’s something you’re planning for the future, you can still partake in all of these security baselines by taking our free security baseline and apply that to Windows 10-level hardware. You may have to do some initial analysis on performance trade-offs, which makes it a little more difficult, but you can certainly get there.”

    Microsoft has set October 14, 2025, as the end of Windows 10 patches. Weston reckons you can still configure Windows 10 to meet Windows 11 standards, and he optimistically bets that most organizations will have refreshed most of their hardware by 2025.

    “By 2025, when the refresh cycle will have turned over for the vast majority of the businesses, you will have more reason to move to Windows 11 because by that point there will have been two or three releases of security goodness to have been added, which we think is going to provide a substantial value proposition,” he says.

    “My advice would be: if you need to stay on Windows 10 for hardware reasons, great; follow our security guidance from 11 and apply that to 10. Plan in your refresh cycles and security budgets to get the right hardware to get to 11 because, if you stay on 10 for too long, we will start to introduce things that are 11-specific – trust me, we have many on the way now – and we want as many customers as possible to get that value. It’s very similar to the transition we went through from Windows 7 to 10: there’s security goodness if you can get there.”
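    For teams doing that Windows 10 evaluation, one concrete first step is checking whether virtualization-based security is already configured. Below is a minimal sketch, assuming the DeviceGuard registry locations commonly documented for VBS and memory integrity; verify the key names against Microsoft’s current documentation before relying on them:

```python
# Quick check of virtualization-based security (VBS) settings on a
# Windows machine via the DeviceGuard registry key. A minimal sketch;
# key names are assumptions to verify against Microsoft's documentation.
import winreg

KEY = r"SYSTEM\CurrentControlSet\Control\DeviceGuard"

def read_dword(path: str, name: str):
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, path) as key:
            value, _ = winreg.QueryValueEx(key, name)
            return value
    except OSError:
        return None  # key or value absent: feature not configured

vbs = read_dword(KEY, "EnableVirtualizationBasedSecurity")
hvci = read_dword(KEY + r"\Scenarios\HypervisorEnforcedCodeIntegrity", "Enabled")

print("VBS configured:", "yes" if vbs == 1 else "no / not set")
print("HVCI (memory integrity):", "yes" if hvci == 1 else "no / not set")
```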

  • The White House is having a big meeting about fighting ransomware. It didn't invite Russia

    The White House has held a meeting with ministers and officials from 30 nations and the European Union to discuss how to combat ransomware and other cyber threats. The two-day series of meetings aimed to find an answer to ransomware, and followed calls from US President Joe Biden for the Kremlin to hold Russia-based ransomware gangs accountable for their file-encrypting attacks, rather than turning a blind eye to them so long as they don’t attack Russian organizations.


    Notably absent from the White House-led group was Russia itself, which was not invited. In June, Biden told Russian President Vladimir Putin that 16 US critical infrastructure entities should be off-limits to ransomware attackers operating from Russia.

    The aim of the talks was to figure out an international approach to disrupting and ultimately stopping ransomware attacks. In the two days of virtual talks, India led discussions on Thursday about resilience, while Australia focused on how to disrupt cyberattacks. The UK’s contribution focused on virtual currency, while Germany discussed diplomacy. Other countries involved included Canada, France, Brazil, Mexico, Japan, Ukraine, Ireland, Israel, and South Africa.

    Although Russian officials didn’t participate, a White House spokesperson said the US is in ongoing discussions with Russia via the US-Kremlin Experts Group, which is led by the White House and was established by Biden and Putin.

    One of the most disruptive ransomware attacks on US infrastructure was against Colonial Pipeline, which halted fuel distribution on the US east coast for a week in May. The company reportedly paid the equivalent of $4.4 million in bitcoin for a decryption tool from the attackers.

    The FBI blamed the Colonial attack on DarkSide, which went offline shortly afterwards but resurfaced in June, according to FireEye’s incident response unit, Mandiant.

    DarkSide is one of several ransomware gangs operating as a service provider, allowing other criminal gangs to use its software to extort targets. Others, including REvil, steal data and threaten to leak it online if the ransom isn’t paid.

    The other major threat Biden has raised concerns nation-state attackers, such as those behind this year’s attacks on Microsoft Exchange email servers, which UK and US officials blamed on Chinese state-sponsored hackers, dubbed Hafnium by Microsoft. Microsoft this week reported that Kremlin-backed hackers were by far the most prolific attackers.

    The message from the White House is that nations need to cooperate to bolster “collective cyber defenses” against criminal and state-sponsored cyberattacks. “We’ve worked with allies and partners to hold nation states accountable for malicious cyber activity as evidenced by, really, the broadest international support we had ever in our attributions for Russia and China’s malicious cyber activities in the last few months,” a White House official said at a media briefing.

  • ACSC offers optional DNS protection to government entities

    The Australian Cyber Security Centre will be offering its Australian Protective Domain Name Service (AUPDNS) for free to other government entities at federal and state level across Australia. AUPDNS has already inspected 10 billion queries and blocked 1 million connections to malicious domains, Assistant Minister for Defence Andrew Hastie said on Thursday.

    “A single malicious connection could result in a government network being vulnerable to attack or compromise, so it’s vital we do everything we can to prevent cybercriminals from gaining a foothold,” he said. “Currently AUPDNS is protecting over 200,000 users, and this number is growing.” The blocklist functionality was developed with Nominet Cyber; a sketch of how this kind of DNS filtering works appears at the end of this story.

    Elsewhere on Thursday, Anthony Byrne tendered his resignation as Labor deputy chair of the Parliamentary Joint Committee on Intelligence and Security, the committee that examines national security legislation and often leads to Labor waving contentious legislation through. “The work of the PJCIS is crucial to Australia’s national security and its integrity should never be questioned,” Byrne said.

    “I have always put the work of this bipartisan Committee first and have always served in its best interests.” Byrne is in hot water after telling Victoria’s Independent Broad-based Anti-corruption Commission he was involved in branch stacking. Replacing Byrne in the ALP post will be Senator Jenny McAllister, with Peter Khalil appointed to the committee. “Byrne has served the PJCIS in a number of roles since 2005, including as Chair and Deputy Chair,” Labor leader Anthony Albanese said. “I thank Mr Byrne for his important contributions to this committee in Australia’s national interest.”

    On Wednesday, the Australian government announced a new set of standalone criminal offences for people who use ransomware, under what it has labelled its Ransomware Action Plan. The plan creates new criminal offences for people who use ransomware to conduct cyber extortion, target critical infrastructure with ransomware, or deal with stolen data knowingly obtained in the course of committing a separate criminal offence, as well as for buying or selling malware for the purposes of undertaking computer crimes.
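    Returning to AUPDNS: a protective DNS service works by consulting a blocklist before answering a query and returning a sinkhole address for known-bad names, so the client never reaches the malicious host. A minimal sketch of that filtering step follows; the blocklist entries and sinkhole address are illustrative, not Nominet’s or the ACSC’s implementation:

```python
# Minimal sketch of the filtering step in a protective DNS service:
# consult a blocklist before resolving, and refuse known-bad domains.
# Domains and the sinkhole address below are illustrative only.
import socket

BLOCKLIST = {"malicious.example", "c2.badactor.example"}
SINKHOLE = "0.0.0.0"  # address returned for blocked names

def resolve(hostname: str) -> str:
    # Match the exact name or any parent domain on the blocklist.
    labels = hostname.lower().rstrip(".").split(".")
    for i in range(len(labels)):
        if ".".join(labels[i:]) in BLOCKLIST:
            return SINKHOLE  # blocked: never contact the real resolver
    # Not on the list: fall through to normal resolution.
    return socket.gethostbyname(hostname)

print(resolve("malicious.example"))        # 0.0.0.0
print(resolve("www.c2.badactor.example"))  # 0.0.0.0 (parent-domain match)
```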

  • Singapore to develop mobile defence systems with Ghost Robotics

    Singapore’s Defence Science and Technology Agency (DSTA) has inked a partnership with Philadelphia-based Ghost Robotics to identify use cases involving legged robots for security, defence, and humanitarian applications. They will look to test and develop mobile robotic systems, as well as the associated technology enablers, that can be deployed in challenging urban terrain and harsh environments.

    The collaboration also would see robots from Ghost Robotics paired with DSTA’s robotics command, control, and communications (C3) system, the two partners said in a joint statement released Thursday. The Singapore government agency said its C3 capabilities were the “nerve centre” of military platforms and command centres, tapping data analytics, artificial intelligence, and computer vision technologies to facilitate “tighter coordination” and effectiveness during military and other contingency operations. Its robotics C3 system enabled simultaneous control and monitoring of multiple unmanned ground and air systems to deliver a holistic situation outline for coordinated missions, including surveillance in dense urban environments.

    With the partnership, DSTA and Ghost Robotics would test and develop “novel technologies and use cases” for quadrupedal unmanned ground vehicles, which would be integrated with multi-axis manipulators. These would enhance how the autonomous vehicles interacted with their environment and objects within it. Power technologies, such as solid-state batteries or fuel cells, also would be integrated to allow the robotics systems to operate for extended periods of time.

    DSTA’s deputy chief executive for operations and director of land systems, Roy Chan, said: “In the world of fast-evolving technology, close collaboration between organisations is imperative to co-create use cases and innovative solutions. In partnering Ghost Robotics, DSTA hopes to advance robotic capabilities in defence and shape the battlefield of the future.

    “We envision that robots would one day become a defender’s best friend and be deployed to undertake more risky and complex operations in tough terrains,” Chan said.

    DSTA is tasked with tapping science and technology to develop capabilities for the Singapore Armed Forces (SAF), including the use of autonomous vehicles. The Ministry of Defence and SAF in June 2021 unveiled a transformation strategy to address evolving security challenges and threats, which encompassed efforts to leverage technological advancements to better tap data and new technologies, such as robotics C3 systems, and integrate these into warfighting concepts to improve operational effectiveness and reduce manpower requirements.

    According to Ghost Robotics, its quadrupedal unmanned ground vehicles were built for unstructured terrain, on which a typical wheeled or tracked device cannot operate efficiently.

  • 7-Eleven breached customer privacy by collecting facial imagery without consent

    In Australia, the country’s information commissioner has found that 7-Eleven breached customers’ privacy by collecting their sensitive biometric information without adequate notice or consent. From June 2020 to August 2021, 7-Eleven conducted surveys that required customers to fill out information on tablets with built-in cameras. These tablets, which were installed in 700 stores, captured customers’ facial images at two points during the survey-taking process: when the individual first engaged with the tablet, and after they completed the survey.

    After becoming aware of this activity in July last year, the Office of the Australian Information Commissioner (OAIC) commenced an investigation into 7-Eleven’s survey. During the investigation [PDF], the OAIC found 7-Eleven stored the facial images on tablets for around 20 seconds before uploading them to a secure server hosted in Australia within the Microsoft Azure infrastructure. The facial images were then retained on the server, as an algorithmic representation, for seven days to allow 7-Eleven to identify and correct any issues and reprocess survey responses, the convenience store giant claimed.

    The facial images were uploaded to the server as algorithmic representations, or “faceprints”, that were then compared with other faceprints to exclude responses that 7-Eleven believed may not be genuine. 7-Eleven also used the personal information to understand the demographic profile of customers who completed the survey, the OAIC said.

    7-Eleven claimed it received consent from customers who participated in the survey because it provided a notice on its website, where the survey resided, stating that 7-Eleven may collect photographic or biometric information from users.
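    The OAIC’s account does not specify 7-Eleven’s algorithm, but faceprint comparison of this kind is usually an embedding comparison: each image becomes a numeric vector, and two vectors close enough together are treated as the same face. A rough sketch under that assumption, with invented vectors and threshold:

```python
# Sketch of how faceprint (embedding) comparison typically works: two
# images are judged to show the same person when their embedding
# vectors are close enough. Vectors and threshold here are invented.
import numpy as np

THRESHOLD = 0.8  # illustrative; real systems tune this empirically

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(faceprint_a: np.ndarray, faceprint_b: np.ndarray) -> bool:
    return cosine_similarity(faceprint_a, faceprint_b) >= THRESHOLD

rng = np.random.default_rng(0)
survey_1 = rng.normal(size=128)                         # first survey photo
survey_2 = survey_1 + rng.normal(scale=0.1, size=128)   # same face, new photo
stranger = rng.normal(size=128)                         # different person

print(same_person(survey_1, survey_2))  # True: likely duplicate respondent
print(same_person(survey_1, stranger))  # False: different person
```

    A production system would use a trained face-embedding model to produce the vectors; only the comparison step is sketched here.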

    As at March 2021, approximately 1.6 million survey responses had been completed. Angelene Falk, Australia’s Information Commissioner and Privacy Commissioner, determined that this large-scale collection of sensitive biometric information breached Australia’s privacy laws and was not reasonably necessary for the purpose of understanding and improving customers’ in-store experience. In Australia, an organisation is prohibited from collecting sensitive information about an individual unless consent is provided.

    Falk said facial images that show an individual’s face are sensitive information, adding that any algorithmic representation of a facial image is also sensitive information. As for 7-Eleven’s claim that consent was provided, Falk said 7-Eleven did not provide any information about how customers’ facial images would be used or stored, which meant 7-Eleven did not receive any form of consent when it collected the images.

    “For an individual to be ‘identifiable’, they do not necessarily need to be identified from the specific information being handled. An individual can be ‘identifiable’ where it is possible to identify the individual from available information, including, but not limited to, the information in issue,” Falk said. “While I accept that implementing systems to understand and improve customers’ experience is a legitimate function for 7-Eleven’s business, any benefits to the business in collecting this biometric information were not proportional to the impact on privacy.”

    As part of the determination, Falk has ordered 7-Eleven to cease collecting facial images and faceprints as part of the customer feedback mechanism. 7-Eleven has also been ordered to destroy all the faceprints it collected.

  • Singapore must take caution with AI use, review approach to public trust

    In its quest to drive the adoption of artificial intelligence (AI) across the country, multi-ethnic Singapore needs to take special care navigating its use in some areas, specifically law enforcement and crime prevention. It should further foster its belief that trust is crucial for citizens to be comfortable with AI, along with the recognition that doing so will require nurturing public trust across different aspects of its society.

    It must have been at least two decades ago now that I attended a media briefing during which an executive was demonstrating the company’s latest speech recognition software. As most demos went, no matter how much you prepared for it, things would go desperately wrong. Her voice-directed commands often were wrongly executed and several spoken words in every sentence were inaccurately translated into text. The harder she tried, the more things went wrong, and by the end of the demo, she looked clearly flustered.

    She had a relatively strong accent and I’d assumed that was likely the main issue, but she had spent hours training the software. This company was known at that time specifically for its speech recognition products, so it wouldn’t be wrong to assume its technology then was the most advanced in the market.

    I walked away from that demo thinking it would be near impossible, with the vast difference in accents within Asia alone and even amongst those who spoke the same language, for speech recognition technology to be sufficiently accurate.


    Some 20 years later, speech-to-text and translation tools clearly have come a long way, but they’re still not always perfect. An individual’s accent and speech patterns remain key variants that determine how well spoken words are translated. However, wrongly converted words are unlikely to cause much damage, save for a potentially embarrassing moment on the speaker’s part. The same is far from true where facial recognition technology is concerned.

    In January, police in Detroit in the US admitted their facial recognition software falsely identified a shoplifter, leading to his wrongful arrest. Vendors such as IBM, Microsoft, and Amazon have maintained a ban on the sale of facial recognition technology to police and law enforcement, citing human rights concerns and racial discrimination. Most have urged governments to establish stronger regulations to govern and ensure the ethical use of facial recognition tools.

    Amazon had said its ban would remain until regulators addressed issues around the use of its Rekognition technology to identify potential criminal suspects, while Microsoft said it would not sell facial recognition software to police until federal laws were in place to regulate the technology. IBM chose to exit the market completely over concerns facial recognition technology could instigate racial discrimination and injustice. Its CEO Arvind Krishna wrote in a June 2020 letter to the US Congress: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and principles of trust and transparency.

    “AI is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported,” Krishna penned.

    I recently spoke with Ieva Martinkenaite, who chairs the AI task force at the GSMA-European Telecommunications Network Operators’ Association, which drafts AI regulation for the industry in Europe. Martinkenaite’s day job sees her as head of analytics and AI for Telenor Research. In our discussion on how Singapore could best approach the issue of AI ethics and use of the technology, Martinkenaite said every country would have to decide what it felt was acceptable, especially when AI was used in high-risk areas such as detecting criminals. Here, she noted, challenges remained amidst evidence of discriminatory results, including against certain ethnic groups and gender.

    In deciding what was acceptable, she urged governments to have an active dialogue with citizens. She added that until veracity issues related to the analysis of varying skin colours and facial features were properly resolved, such AI technology should not be deployed without human intervention, proper governance, or quality assurance in place.

    Training AI for multi-ethnic Singapore

    Facial recognition software has come under fire for its inaccuracy, in particular in identifying people with darker skin tones. An MIT 2017 study, which found that darker-skinned females were 32 times more likely to be misclassified than lighter-skinned males, pointed to the need for more phenotypically diverse datasets to improve the accuracy of facial recognition systems. Presumably, AI and machine learning models trained with less data on one ethnic group would exhibit a lower degree of accuracy in identifying individuals in that group.

    Singapore’s population comprises 74.3% Chinese, 13.5% Malays, and 9% Indians, with the remaining 3.2% made up of other ethnic groups such as Eurasians.
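    The MIT result above is what a per-group audit is designed to surface: a single overall accuracy figure can look healthy while one group’s error rate is many times higher. A toy sketch of such an audit, with all data invented:

```python
# Toy per-group accuracy audit for a face recognition system. An
# overall accuracy figure can hide large gaps between groups, the
# failure mode the MIT study documented. All records are invented.
from collections import defaultdict

# (group, correctly_identified) pairs from an evaluation set
results = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
    ("group_c", True), ("group_c", False),
]

totals = defaultdict(int)
correct = defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok  # bool counts as 0 or 1

overall = sum(correct.values()) / len(results)
print(f"overall accuracy: {overall:.0%}")
for group in totals:
    print(f"  {group}: {correct[group] / totals[group]:.0%} (n={totals[group]})")
```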
    Should the country decide to tap facial recognition systems to identify individuals, must the data used to train the AI model consist of more Chinese faces, since that ethnic group forms the population’s majority? If so, will that lead to a lower accuracy rate when the system is used to identify a Malay or Indian, since fewer data samples of these ethnic groups were used to train the AI model? Will using an equal proportion of data for each ethnic group then necessarily lead to a more accurate score across the board? Since there are more Chinese residents in the country, should the facial recognition technology be better trained to more accurately identify this ethnic group, because the system will likely be used more often to recognise these individuals?

    These questions touch only on the “right” volume of data that should be used to train facial recognition systems. There still are many others concerning data alone, such as where training data should be sourced, how the data should be categorised, and how much training data is deemed sufficient before the system is considered “operationally ready”.

    Singapore will have to navigate these carefully should it decide to tap AI in law enforcement and crime prevention, especially as it regards racial and ethnic relations as important, but sensitive, to manage. Beyond data, discussions and decisions will need to be made on, amongst others, when AI-powered facial recognition systems should be used, how autonomously they should be allowed to operate, and when human intervention would be required.

    The European Parliament just last week voted in support of a resolution banning law enforcement from using facial recognition systems, citing various risks including discrimination, opaque decision-making, privacy intrusion, and challenges in protecting personal data.

    “These potential risks are aggravated in the sector of law enforcement and criminal justice, as they may affect the presumption of innocence, the fundamental rights to liberty and security of the individual and to an effective remedy and fair trial,” the European Parliament said. Specifically, it pointed to facial recognition services such as Clearview AI, which had built a database of more than three billion pictures that were illegally collected from social networks and other online platforms.

    The European Parliament further called for a ban on law enforcement using automated analysis of other human features, such as fingerprint, voice, gait, and other biometric and behavioural traits. The resolution passed, though, isn’t legally binding.

    Because data plays an integral role in feeding and training AI models, what constitutes such data inevitably has been the crux of key challenges and concerns behind the technology. The World Health Organisation (WHO) in June issued guidance cautioning that AI-powered healthcare systems trained primarily on data of individuals in high-income countries might not perform well for individuals in low- and middle-income environments. It also cited other risks such as unethical collection and use of healthcare data, cybersecurity, and bias being encoded in algorithms.

    “AI systems must be carefully designed to reflect the diversity of socioeconomic and healthcare settings and be accompanied by training in digital skills, community engagement, and awareness-raising,” it noted. “Country investments in AI and the supporting infrastructure should help to build effective healthcare systems by avoiding AI that encodes biases that are detrimental to equitable provision of and access to healthcare services.”

    Fostering trust goes beyond AI

    Singapore’s former Minister for Communications and Information and Minister-in-charge of Trade Relations, S. Iswaran, previously acknowledged the tensions about AI and the use of data, and noted the need for tools and safeguards to better assure people with privacy concerns. In particular, Iswaran stressed the importance of establishing trust, which he said underpinned everything, whether it was data or AI. “Ultimately, citizens must feel these initiatives are focused on delivering welfare benefits for them and ensured their data will be protected and afforded due confidentiality,” he said.

    Singapore has been a strong advocate for the adoption of AI, introducing in 2019 a national strategy to leverage the technology to create economic value, enhance citizen lives, and arm its workforce with the necessary skillsets. The government believes AI is integral to its smart nation efforts and that a nationwide roadmap was necessary to allocate resources to key focus areas. The strategy also outlines how government agencies, organisations, and researchers can collaborate to ensure a positive impact from AI, as well as directing attention to areas where change or potential new risks must be addressed as AI becomes more pervasive.

    The key goal here is to pave the way for Singapore, by 2030, to be a leader in developing and deploying “scalable, impactful AI solutions” in key verticals. Singaporeans also will trust the use of AI in their lives, which should be nurtured through a clear awareness of the benefits and implications of the technology. Building trust, however, will need to go beyond simply demonstrating the benefits of AI.
    People need to fully trust the authorities across various aspects of their lives, and trust that any use of technology will safeguard their welfare and data. The lack of trust in one aspect can spill over and impact trust in other aspects, including the use of AI-powered technologies.

    Singapore in February urgently pushed through new legislation detailing the scope of local law enforcement’s access to COVID-19 contact tracing data. The move came weeks after it was revealed the police could access the country’s TraceTogether contact tracing data for criminal investigations, contradicting previous assertions that this information would only be used when the individual tested positive for the coronavirus. It sparked a public outcry and prompted the government to announce plans for the new bill limiting police access to seven categories of “serious offences”, including terrorism and kidnapping.

    Early this month, Singapore also passed the Foreign Interference (Countermeasures) Bill amidst a heated debate, less than a month after it was first proposed in parliament. Pitched as necessary to combat threats from foreign interference in local politics, the Bill has been criticised for being overly broad in scope and for restricting judicial review. Opposition party the Workers’ Party also pointed to the lack of public involvement and the speed at which the Bill was passed.

    Will citizens trust their government’s use of AI-powered technologies in “delivering welfare benefits”, especially in law enforcement, when they have doubts, correctly perceived or otherwise, that their personal data in other areas is properly policed? Doubt in one policy can metastasise and drive further doubt in other policies. With trust, as Iswaran rightly pointed out, an integral part of driving the adoption of AI in Singapore, the government may need to review its approach to fostering this trust amongst its population.

    According to Deloitte, cities looking to use technology for surveillance and policing should look to balance security interests with the protection of civil liberties, including privacy and freedom. “Any experimentation with surveillance and AI technologies needs to be accompanied by proper regulation to protect privacy and civil liberties. Policymakers and security forces need to introduce regulations and accountability mechanisms that create a trustful environment for experimentation of the new applications,” the consulting firm noted. “Trust is a key requirement for the application of AI for security and policing. To get the most out of technology, there must be community engagement.”

    Singapore must assess whether it has indeed nurtured a trustful environment, with the right legislation and accountability, in which citizens are properly engaged in dialogue, so they can collectively decide what is the country’s acceptable use of AI in high-risk areas.

  • Google analysed 80 million ransomware samples: Here's what it found

    Google has published a new ransomware report, revealing Israel was far and away the largest submitter of samples during the period analysed. The tech giant commissioned cybersecurity firm VirusTotal to conduct the analysis, which entailed reviewing 80 million ransomware samples from 140 countries.

    According to the report [PDF], Israel, South Korea, Vietnam, China, Singapore, India, Kazakhstan, the Philippines, Iran, and the UK were the 10 most affected territories based on the number of submissions reviewed by VirusTotal. Israel had the highest number of submissions, an amount that was a near-600% increase on its baseline; the report did not state what Israel’s baseline number of submissions was during that period.

    Ransomware activity peaked during the first two quarters of 2020, which VirusTotal attributed to the ransomware-as-a-service group GandCrab. “GandCrab had an extraordinary peak in Q1 2020 which dramatically decreased afterwards. It is still active but at a different order of magnitude in terms of the number of fresh samples,” VirusTotal said.

    There was another sizeable peak in July 2021, driven by the Babuk ransomware gang, an operation launched at the beginning of 2021. Babuk’s ransomware attack generally features three distinct phases: initial access, network propagation, and action on objectives.

    GandCrab was the most active ransomware gang since the start of 2020, accounting for 78.5% of samples. GandCrab was followed by Babuk and Cerber, which accounted for 7.6% and 3.1% of samples, respectively.
    According to the report, 95% of ransomware files detected were Windows-based executables or dynamic link libraries (DLLs), and 2% were Android-based. The report also found that exploits made up only a small portion of the samples: 5%. “We believe this makes sense given that ransomware samples are usually deployed using social engineering and/or by droppers (small programs designed to install malware),” VirusTotal said. “In terms of ransomware distribution attackers don’t appear to need exploits other than for privilege escalation and for malware spreading within internal networks.”

    After reviewing the samples, VirusTotal also said there was a baseline of between 1,000 and 2,000 first-seen ransomware clusters at all times throughout the analysed period. “While big campaigns come and go, there is a constant baseline of ransomware activity that never stops,” it said.
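    Percentages like the ones above fall out of a simple aggregation over sample metadata: tally samples per family and divide by the total. A toy version, seeded with counts that mirror the report’s family shares (the records themselves are invented):

```python
# Toy aggregation in the style of the VirusTotal report: tally
# ransomware samples by family and express each as a share of the
# total. Counts below are invented to mirror the reported percentages.
from collections import Counter

samples = (
    ["GandCrab"] * 785 + ["Babuk"] * 76 + ["Cerber"] * 31 + ["Other"] * 108
)

counts = Counter(samples)
total = sum(counts.values())
for family, n in counts.most_common():
    print(f"{family:9s} {n:4d}  {n / total:6.1%}")
# GandCrab   785   78.5%
# Other      108   10.8%
# Babuk       76    7.6%
# Cerber      31    3.1%
```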

  • Brazilian e-commerce firm Hariexpress leaks 1.75 billion sensitive files

    Around 1.75 billion sensitive files were leaked by a Brazilian e-commerce integrator that provides services to some of the country’s largest online shopping websites.

    Hariexpress is headquartered in São Paulo and integrates multiple processes into a single platform to improve the efficiency and operational capability of retailers with more than one e-commerce store. Some of the company’s clients include Magazine Luiza, Mercado Livre, Amazon, and B2W Digital. The national postal service, Correios, is also among the company’s partners and was likewise impacted by the incident.

    According to security researcher Anurag Sen at Safety Detectives, who discovered the leak in July 2021, the incident is attributed to a misconfigured and unprotected Elasticsearch server, and involves more than 610GB of exposed data. The researchers noted they were unsuccessful in their attempts to resume communication with the company after an initial contact.

    Banking information relating to customers was not compromised, according to the experts; on the other hand, the leak exposed a vast set of sensitive information including customers’ full names, email addresses, business and residential addresses, and company registration and social security numbers. In addition, all manner of details relating to purchases, including dates, times, and prices of products sold, as well as copies of invoices and login credentials to the Hariexpress service, were also exposed, according to Safety Detectives.

    The researchers could not estimate the exact number of impacted users, due to the number of duplicate email addresses found in the exposed data, but it is estimated that several thousand users were potentially affected by the leak. Moreover, it is not possible to tell whether other parties had access to the data, according to the researchers.

    The experts warned that the data set, which contains information that directly identifies users of marketplaces integrated by the company, could be used in phishing and social engineering attacks. The report also warned about the potential for other types of crime, such as burglary, as the exposed data includes residential and business addresses, and extortion, since the information also includes purchases of intimate products.

    Contacted by ZDNet, the company did not respond to requests for comment. Brazil’s National Data Protection Agency was also contacted for comment on the case and had not responded at the time of publication.
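    Misconfigured Elasticsearch servers of this kind are usually exposed the same way: the REST API answers unauthenticated requests on its default port, 9200. A minimal sketch of checking one of your own hosts for that condition (the host name is a placeholder, and you should only probe infrastructure you are authorised to test):

```python
# Minimal check for an openly exposed Elasticsearch instance: the REST
# API answering unauthenticated requests on the default port 9200.
# Replace the placeholder host; only test systems you are authorised to.
import json
from urllib.request import urlopen
from urllib.error import URLError

HOST = "your-es-host.example.com"  # placeholder

def is_exposed(host: str, port: int = 9200, timeout: int = 5) -> bool:
    try:
        with urlopen(f"http://{host}:{port}/", timeout=timeout) as resp:
            info = json.load(resp)
            # An unauthenticated cluster banner means anyone can query it.
            return "cluster_name" in info
    except (URLError, ValueError, OSError):
        return False  # unreachable, auth required, or not Elasticsearch

if is_exposed(HOST):
    print("WARNING: cluster responds without authentication")
else:
    print("No unauthenticated response on :9200")
```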