
Is your business ready for a deepfake attack? 4 steps to take before it’s too late




ZDNET’s key takeaways

  • Deepfakes can cause serious reputational and financial damage.
  • Deepfake incidents are rising, and current defenses may fall short.
  • Take steps now to reduce your business’s deepfake scam risk.

Deepfake technologies are rapidly advancing, and with them, the risks to the enterprise increase.

The emergence of ChatGPT and its almost immediate impact on businesses took many by surprise. The generative AI chatbot proved popular with everyone from students and SMBs to the enterprise, prompting many companies to explore the benefits of AI in earnest.

Also: Is that an AI video? 6 telltale signs it’s a fake

Generative AI can be transformative for businesses; however, as with any new technology, it can be abused, an issue highlighted by the risks posed by deepfakes.

What are deepfakes?

Deepfakes first emerged on social media as individuals experimented with satirical content — realistic images and videos of everything from high-profile figures to cats driving cars. Entertainment aside, threat actors can employ the same methods for criminal and fraudulent purposes.

Deepfakes are generated with AI tools and large language models (LLMs), which can produce photos and videos of a target appearing to say or do things they never did. Source material fed into these models or found online, including existing photos and voice clips scraped from interviews and podcasts, can add enough realism that deepfakes become very difficult to detect.

Also: Google spots malware in the wild that morphs mid-attack, thanks to AI

Generative Adversarial Networks (GANs) can also use existing datasets to create completely new, but believable, people, and these synthetic personas can be found doing everything from touting scam products online to spreading fake news.

Because AI tools are now freely available online and easy to learn, the barrier to entry has dropped for cybercriminals interested in creating sophisticated phishing campaigns, scamming individuals, spreading misinformation, or running malicious ads.

The risks deepfakes pose to businesses

Deepfake technologies are rapidly advancing, and we have yet to understand every angle of attack using generative AI — but we’ve already observed a number of potential attack vectors.

Also: Are AI browsers worth the security risk? Why experts are worried

According to Ironscales’ Fall 2025 Threat Report, deepfake attacks rose 10% year-over-year, and 85% of organizations surveyed said they had dealt with at least one deepfake-related incident in 2025.

Some of the major risks deepfakes pose to businesses include:

  • Misinformation, propaganda: Deepfakes, whether images or video, can be used to spread misinformation, fake news, and propaganda. This could include supposed employees bashing their company online, fake executives making derogatory comments, or fake news segments that implicate an organization in criminal activity.
  • Reputational harm: Deepfakes that spread misinformation can also cause severe reputational and financial damage, such as plummeting share prices or incidents that erode consumer trust in a brand. Examples include fake videos of a CEO admitting to embezzlement, or a brand fraudulently linked via fake news reports to child labor. Companies can also bear the consequences of deepfake content circulating on social media that announces fake structural changes, such as acquisitions and mergers, which could severely impact stock prices. These deepfakes can have a lasting impact on genuine news, too, as audiences may no longer know what to believe.
  • Identity theft, social engineering: Deepfake videos, images, and voice calls are among the most dangerous deepfake applications today. By generating convincing videos and synthetic voice recordings, attackers can impersonate business leaders, such as CEOs or VPs, and lure employees into handing over sensitive data or credentials for company systems, or into approving fraudulent invoices. One example is UK professional services provider Arup, which lost millions of dollars to a scam in which cybercriminals created a deepfake version of an executive to request fraudulent transfers during a video call.
  • Vishing: Voice-based attacks of this kind are known as vishing (voice phishing). Voice cloning in voicemails, fake audio notes, and deepfake content embedded in emails or spread across social media may all be used to entice victims into revealing sensitive information or approving fraudulent payments.

Unfortunately, deepfake technologies are a growing market. A recent research report published by Google’s Threat Intelligence Group (GTIG) revealed AI tools being sold on underground forums for creating lure content useful in phishing operations, and even generative AI tools for circumventing know-your-customer (KYC) banking security requirements.


How to defend your business from deepfakes

1. Staff training

Providing employees with knowledge, guidance, and support on understanding what deepfakes are and how to detect them should be the first step you take.

Training needs to be consistent, frequent, and engaging; research has already shown that annual cybersecurity training is almost pointless. Tips on spotting deepfakes are also important: while deepfakes are becoming increasingly difficult to detect, making staff aware of the small details that can give one away, such as strange shadows, a distorted voice, a lack of familiar phrases or terms, or blurred features, still pays off.

Also: 6 essential rules for unleashing AI on your software development process – and the No. 1 risk

Audian Paxson, principal technical strategist at Ironscales, told ZDNET that video deepfakes are typically the hardest for employees to spot, and so training for this should take priority – even though employers should expect initial pass rates to be quite low.

“88% of organizations in our 2025 research offer deepfake awareness training, but when we asked about first-attempt pass rates on phishing simulations, most fell in the 20-60% range,” Paxson commented. “That’s not a training failure – it’s a reflection of how good these attacks have gotten. You need realistic simulations that mirror actual attack patterns (audio clips of executives, fake video meeting requests) so employees can practice verification behaviors under pressure. And you need to keep running them!”
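For teams that want to measure this over time, computing first-attempt pass rates per simulation campaign is straightforward. The Python sketch below uses invented field names and data rather than any training platform’s real export format:

from collections import defaultdict

# Hypothetical simulation export: one record per employee per campaign.
# Field names are illustrative, not any vendor's actual schema.
results = [
    {"campaign": "exec-voicemail", "employee": "a.smith", "passed": True},
    {"campaign": "exec-voicemail", "employee": "b.jones", "passed": False},
    {"campaign": "fake-video-meeting", "employee": "a.smith", "passed": False},
    {"campaign": "fake-video-meeting", "employee": "b.jones", "passed": False},
]

def first_attempt_pass_rates(records):
    """Share of employees who passed each simulation campaign on the first try."""
    passed, total = defaultdict(int), defaultdict(int)
    for r in records:
        total[r["campaign"]] += 1
        passed[r["campaign"]] += int(r["passed"])
    return {c: passed[c] / total[c] for c in total}

print(first_attempt_pass_rates(results))
# {'exec-voicemail': 0.5, 'fake-video-meeting': 0.0}

Tracking these numbers per campaign type, rather than as one blended score, makes it obvious where follow-up training is needed, such as the video deepfakes Paxson flags as hardest to spot.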

2. Multi-factor authentication, layered authentication controls

One of the best defenses against deepfake-enabled fraud is to implement layered, distinct authentication and payment verification controls.

No single employee should be able to authorize high-value payments or the transfer of sensitive information, such as financial or payroll records. Adding a second level of approval means a convincing deepfake attack has to fool more than one victim, which gives staff a chance to step back, think, rationalize, and potentially detect a deepfake scheme more readily.

This can be simple to implement, too: the second check can happen over a trusted phone number, a Slack message, or internal mail.
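As an illustration, the logic behind a dual-approval gate fits in a few lines. The Python sketch below is hypothetical; the class, threshold, and account names are invented for this example, not drawn from any real payment system:

from dataclasses import dataclass, field

APPROVAL_THRESHOLD = 10_000  # illustrative cutoff: larger transfers need two approvers

@dataclass
class PaymentRequest:
    amount: float
    payee: str
    approvers: set = field(default_factory=set)

    def approve(self, employee_id):
        self.approvers.add(employee_id)

    def can_execute(self):
        # High-value payments require two distinct sign-offs.
        required = 2 if self.amount > APPROVAL_THRESHOLD else 1
        return len(self.approvers) >= required

req = PaymentRequest(amount=250_000, payee="Example Vendor Ltd")
req.approve("finance.clerk")        # first sign-off, possibly socially engineered
print(req.can_execute())            # False: a second, independent approver is needed
req.approve("finance.controller")   # confirmed out-of-band, e.g. via a call-back
print(req.can_execute())            # True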

Also: How to prep your company for a passwordless future – in 5 steps

Another option is to use code words, changed frequently, that have to be said when a payment is requested. Without this internal knowledge, a deepfake voice or video attempt will fail. And if a deepfake attacker has somehow lured an employee into handing over credentials, the use of multi-factor authentication on end devices and systems will create an important barrier to entry.
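The rotating code-word idea can even be formalized as a time-based one-time password. Here is a minimal sketch using the open-source pyotp library (installed with pip install pyotp); the scenario around it is an assumption for illustration:

import pyotp

# Shared secret provisioned out-of-band to authorized approvers (illustrative).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)  # codes rotate every 30 seconds by default

# The requester reads out the current code during the payment call...
spoken_code = totp.now()

# ...and the approver checks it before acting on the request.
# valid_window=1 tolerates one 30-second step of clock drift.
if totp.verify(spoken_code, valid_window=1):
    print("Code checks out; continue with standard verification.")
else:
    print("Stale or wrong code; treat the request as suspect.")

A cloned voice or face buys an attacker nothing here, because the rotating code never appears in the spoofed video or audio they trained on.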

“A clear ‘call-back’ policy is one of the simplest defenses,” Nick Knupffer, CEO of VerifyLabs.AI, told ZDNET. “If a request seems unusual or urgent, staff should always phone the executive back on a verified number, not the one provided in the message. Multi-factor authentication is another must, ensuring sensitive accounts and payments can’t be approved on a single instruction alone.”

3. Develop an incident response plan

Businesses concerned about the rise in deepfakes must conduct a thorough audit of their networks, identify their weaknesses, determine training requirements, and review their security and authentication measures, including whether a convincing deepfake could fool existing automated verification systems.

Also: Are Sora 2 and other AI video tools risky to use? Here’s what a legal scholar says

Organizations can then develop realistic incident response plans for deepfake-related security incidents, including how to ensure mission-critical systems remain online, how to address fraud, available legal remedies, insurance considerations, and how to handle public relations.

4. Trust nothing

Businesses should now consider implementing zero-trust architectures and controls, especially as trust in the human factor, and in our ability to detect scams, erodes. That challenge is only likely to grow as deepfake technologies evolve.

According to Gartner, attacks using AI-generated deepfakes on face biometrics will lead 30% of enterprises to lose confidence in standalone identity verification solutions by next year. Multiple points of verification will become necessary, including tools that can distinguish a live person from a deepfake.

Also: Anxious about AI job cuts? How white-collar workers can protect themselves – starting now

Investing in zero-trust access and control systems, combined with MFA and behavioral analytics software, may help reduce the risk of deepfakes and associated technologies compromising your network.
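Behavioral analytics can be as elaborate as vendors make it, but the underlying idea is simple: flag requests that deviate sharply from historical patterns. This Python sketch is purely illustrative, with made-up data and a made-up threshold:

from statistics import mean, stdev

def is_anomalous(history, amount, z_cutoff=3.0):
    """Flag an amount far above an employee's historical payment pattern."""
    if len(history) < 5:
        return True  # too little history: escalate for manual review
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return (amount - mu) / sigma > z_cutoff

past_transfers = [1200.0, 950.0, 1100.0, 1300.0, 1050.0]
print(is_anomalous(past_transfers, 1150.0))    # False: in line with history
print(is_anomalous(past_transfers, 250000.0))  # True: escalate before paying

In practice, a flag like this wouldn’t block a payment outright; it would route the request into the multi-person, out-of-band verification process described above, which is exactly where deepfake scams tend to fall apart.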
