
Businesses’ cloud security fails are ‘concerning’ – as AI threats accelerate

Image: blackdovfx/Getty Images

Just under 45% of organizations conduct regular audits and assessments to ensure their cloud environments are secure, a figure that is “concerning” as more applications and workloads move to multi-cloud platforms. 

Asked how they were monitoring risk across their cloud infrastructure, 47.7% of businesses pointed to automated security tools while 46.5% relied on native security offerings from their providers. Another 44.7% said they conducted regular audits and assessments, according to a report from security vendor Bitdefender. 

Also: AI is changing cybersecurity and businesses must wake up to the threat

Some 42.1% worked with third-party experts, revealed the study, which surveyed more than 1,200 IT and security professionals, including chief information security officers, across six markets: Singapore, the UK, France, Germany, Italy, and the US. 

It is “definitely concerning” that only 45% of companies regularly run audits of their cloud environments, said Paul Hadjy, Bitdefender’s vice president of Asia-Pacific and cyber security services, in response to questions from ZDNET. 

Hadjy noted that an over-reliance on cloud providers’ ability to protect hosted services or data persists even as businesses continue moving applications and workloads to multi-cloud environments. 


“Most times, [cloud providers] are not as responsible as you would think and, in most cases, the data being stored in the cloud is large and often sensitive,” Hadjy said.

“The responsibility of cloud security, including how data is protected at rest or in motion, identities [of] people, servers, and endpoints granted access to resources, and compliance is predominantly up to the customer. It’s important to first establish a baseline to determine current risk and vulnerability in your cloud environments based on things such as geography, industry, and supply chain partners.”
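To make the idea of a baseline audit concrete, here is a minimal sketch of one recurring check an organization might script: flagging AWS S3 buckets that have no public-access block configured. The check itself is an illustrative assumption, not a recommendation from the report, and a real baseline would also cover identities, compliance, and supply chain exposure.

```python
# Minimal sketch of one recurring cloud-audit check: list S3 buckets
# that have no public-access block configured. Illustrative only; a
# real audit baseline spans identities, compliance, and far more.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def buckets_missing_public_access_block():
    """Return names of buckets with no PublicAccessBlock configuration."""
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        name = bucket["Name"]
        try:
            s3.get_public_access_block(Bucket=name)
        except ClientError as err:
            if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
                flagged.append(name)  # no block configured: review this bucket
            else:
                raise
    return flagged

if __name__ == "__main__":
    for name in buckets_missing_public_access_block():
        print(f"audit: bucket {name} has no public-access block")
```

Run on a schedule, even a single check like this turns the vague goal of “regular audits” into a measurable, repeatable signal.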

Among the top security concerns respondents had in managing their company’s cloud environments, 38.7% cited identity and access management while 38% pointed to the need to maintain cloud compliance. Another 35.9% named shadow IT as a concern and 32% were worried about human error, the study found. 

When it comes to generative AI-related threats, however, respondents seemed confident in their teammates’ ability to identify potential attacks. A majority, 74.1%, believed colleagues from their department would be able to spot a deepfake video or audio attack, with US respondents showing the highest level of confidence at 85.5%. 

Also: Code faster with generative AI, but beware the risks when you do

In comparison, just 48.5% of their counterparts in Singapore were confident their teammates could spot a deepfake, the lowest among the six markets. In fact, 35% in Singapore said colleagues from their department would not be able to identify a deepfake, the highest proportion across the markets surveyed. 

Was that confidence, shared by 74.1% of respondents globally, misplaced or well-placed? 

Hadjy noted that this confidence was expressed even though 96.6% of respondents viewed GenAI as a minor to very significant threat. One basic explanation, he said, is that IT and security professionals do not necessarily trust the ability of users outside their own teams, those not in IT or security, to spot deepfakes. 

“This is why we believe technology and processes [implemented] together are the best way to mitigate this risk,” he added. 

Asked how effective or accurate existing tools are in detecting AI-generated content such as deepfakes, he said this would depend on several factors. If delivered via a phishing email or embedded in a text message with a malicious link, deepfakes should be quickly identified by endpoint protection tools, such as extended detection and response (XDR) platforms, he explained. 
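As a toy illustration of the link-scanning step such tools automate (the blocklist and helper below are assumptions for demonstration, not how any particular XDR product works), an agent might extract URLs from a message and compare their hosts against known-bad domains.

```python
# Toy sketch of one detection step: extract URLs from message text and
# flag any whose host appears on a known-bad list. Real XDR products
# use far richer signals (reputation feeds, sandboxing, behavior).
import re
from urllib.parse import urlparse

KNOWN_BAD_DOMAINS = {"malicious.example", "phish.example"}  # assumed list

URL_PATTERN = re.compile(r"https?://[^\s\"'<>]+")

def flag_suspicious_links(message: str) -> list:
    """Return URLs in the message whose host is on the blocklist."""
    flagged = []
    for url in URL_PATTERN.findall(message):
        host = urlparse(url).hostname or ""
        if host in KNOWN_BAD_DOMAINS:
            flagged.append(url)
    return flagged

print(flag_suspicious_links("Watch this: https://phish.example/video"))
```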

However, he noted that threat actors depend on a human’s natural tendencies to believe what they see and what is endorsed by people they trust, such as celebrities and high-profile personalities – whose images often are manipulated to deliver messages. 

Also: 3 ways to accelerate generative AI implementation and optimization

And as deepfake technologies continue to evolve, he said it would be “nearly impossible” to detect such content via sight or sound alone. He underscored the need for technology and processes that can detect deepfakes to also evolve. 

Although Singapore respondents were the most skeptical of their teammates’ ability to spot deepfakes, he noted that 48.5% expressing confidence is still a significant number. 

Urging again the importance of having both technology and processes in place, Hadjy said: “Deepfakes will continue to get better, and effectively spotting them will take continuous efforts that combine people, technology, and processes all working together. In cybersecurity, there is no ‘silver bullet’ – it’s always a multi-layer strategy that starts with strong prevention to close the door before a threat gets in.”

Training is also increasingly critical as more employees work in hybrid environments and more risks originate from home. “Businesses need to have clear steps in place to validate deepfakes and protect against highly targeted spearphishing campaigns,” he said. “Processes are key for organizations to help ensure measures for double checking are in place, especially in instances where the transfer of large sums of money is involved.”
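As a hypothetical sketch of the kind of double-checking process Hadjy describes (the threshold and function names are assumptions, not from the article), a payment workflow might refuse to release a large transfer until a second person, other than the requester, has approved it.

```python
# Hypothetical sketch of a dual-approval rule for large transfers.
from dataclasses import dataclass, field

LARGE_TRANSFER_THRESHOLD = 10_000  # assumed cutoff, in dollars

@dataclass
class TransferRequest:
    amount: float
    requester: str
    approvals: set = field(default_factory=set)

def approve(request: TransferRequest, approver: str) -> None:
    """Record an approval; the requester may not approve their own transfer."""
    if approver == request.requester:
        raise ValueError("requester cannot approve their own transfer")
    request.approvals.add(approver)

def may_release(request: TransferRequest) -> bool:
    """Small transfers need one approval; large ones need two distinct people."""
    required = 2 if request.amount >= LARGE_TRANSFER_THRESHOLD else 1
    return len(request.approvals) >= required

# Example: a $50,000 transfer stays blocked until a second approver signs off.
req = TransferRequest(amount=50_000, requester="alice")
approve(req, "bob")
assert not may_release(req)
approve(req, "carol")
assert may_release(req)
```

Encoding the rule in the workflow, rather than relying on individual vigilance, is what makes such a process resistant to a convincing deepfake of a single executive.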

According to the Bitdefender study, 36.1% view GenAI technology as a very significant threat with regard to the manipulation or creation of deceptive content, such as deepfakes. Another 45.1% described it as a moderate threat, while 15.4% said it was a minor threat. 

Also: Nearly 50% of people want an AI clone to do this for them

A large majority, at 94.3%, were confident in their organization’s ability to respond to current security threats, such as ransomware, phishing, and zero-day attacks.

However, 57% admitted having experienced a data breach or leak in the past year, up 6% from the previous year, the study revealed. This number was lowest in Singapore, at 33%, and highest in the UK, at 73.5%. 

Phishing and social engineering were the top concern, at 38.5%, followed by ransomware, insider threats, and software vulnerabilities, at 33.5% each.



Source: ZDNET
