Generative AI has opened the floodgates to a new wave of phishing, one that goes far beyond emails and fake login pages. Attackers can now create hyper-realistic audio and video impersonations, allowing them to convincingly pose as executives, colleagues, or trusted partners in real time.
Throughout 2025, deepfake-related scams caused approximately $1.1 billion in damages, nearly triple the amount recorded the year before. What was once a niche tactic is quickly becoming one of the most effective forms of social engineering.
To prevent deepfake attacks, organizations must look beyond adding more tools and instead strengthen human judgment. These attacks exploit trust during real-time interactions, where technical controls alone cannot reliably detect or stop them.
How Attackers Use Deepfakes for Social Engineering
Unlike common email phishing attacks, which attackers distribute en masse, deepfake attacks are rarely random. They are typically well-planned social engineering campaigns designed to build trust over time, then exploit it at the right moment.
A common technique is executive voice cloning. Attackers might use publicly available audio from interviews, earnings calls, or social media clips to replicate the voice of a CEO or senior leader. They then place a call to an employee, often in finance or operations, requesting an urgent payment or sensitive information. Because the voice sounds familiar and authoritative, victims are far more likely to comply without hesitation.
Deepfake video is also gaining ground, as AI-generated video has improved drastically in recent years. In some cases, attackers use it during live calls to pose as executives or business partners.
In one such case, a finance director in Singapore joined a Zoom call where deepfake versions of company executives convincingly authorized a payment, resulting in a $500,000 fraudulent transfer.
In other cases, deepfake technology is used in fraudulent job interviews, where candidates appear legitimate on camera but are actually masking their identities. North Korean IT workers are known to favor this technique, using it to infiltrate Western companies for high salaries and access to sensitive data.
In most cases, attackers don’t jump straight to the deepfake portion of the scam. They usually build trust first through traditional phishing channels, such as email or messaging platforms, then move to voice or video interactions to seal the deal.
Key Warning Signs of Deepfake-Based Attacks
While AI has made deepfakes very realistic, they are not impossible to detect. Ultimately, deepfake attacks still rely on the same psychological tactics used in traditional social engineering. Requests often involve urgency, such as a last-minute payment, a confidential transfer, or a sensitive document that must be shared immediately.
Another common red flag is unusual communication channels. For example, a “CEO” reaching out directly to a junior employee via a private messaging app, SMS, or an unexpected call is often a sign that something is off.
When it comes to identifying deepfake audio or video as it’s happening, there are also subtle indicators to watch for:
- Slight delays or unnatural timing in responses during a call
- Inconsistent facial movements, especially around the mouth and eyes
- Poor lip-syncing or audio that doesn’t perfectly match speech patterns
- Unusual lighting, blurring, or visual artifacts around the face
- A reluctance to deviate from the script, such as avoiding follow-up questions or insisting on keeping the camera at a fixed angle
No single sign confirms a deepfake on its own. However, when these indicators appear alongside suspicious context, such as an urgent or unusual request, they can help employees recognize that something isn’t right.
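To illustrate that “weak signals plus context” logic, here is a minimal sketch of how a security team might encode it in an internal triage tool. The indicator names, weights, and threshold are illustrative assumptions, not measured values.

```python
# Illustrative triage sketch: no single indicator is decisive, but
# several weak signals combined with a risky request justify escalation.
# Indicator names, weights, and the threshold are assumptions chosen
# for demonstration, not measured values.

INDICATOR_WEIGHTS = {
    "response_delay": 1,
    "inconsistent_facial_movement": 2,
    "poor_lip_sync": 2,
    "visual_artifacts": 1,
    "avoids_follow_up_questions": 2,
}

def should_escalate(observed: set[str], risky_request: bool) -> bool:
    """Flag a call for verification when weak deepfake indicators
    co-occur with an urgent or unusual request."""
    score = sum(INDICATOR_WEIGHTS.get(i, 0) for i in observed)
    return risky_request and score >= 3

# Example: slight lag plus bad lip-sync during an urgent payment call.
print(should_escalate({"response_delay", "poor_lip_sync"}, True))  # True
```

The point is not the exact numbers but the shape of the logic: individual signals stay inconclusive, while their combination with a risky request triggers a pause-and-verify step.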
The Right Way to Train Employees for AI Deepfakes
It’s time for phishing training to move beyond email. Employees must learn that attacks can also arrive through phone calls and video meetings.
One of the most effective ways to prepare employees is through deepfake attack simulations. These exercises place employees in realistic scenarios where they must respond to impersonation attempts, such as a fake executive requesting a payment over a call or a suspicious video meeting.
By experiencing these situations firsthand, employees build the instinct to pause, question, and verify before taking action.
The goal of this approach is to build a culture of verification, shared responsibility, and lasting behavioral change. Employees should feel encouraged to question and verify any request that involves sensitive information or a large sum of money. Even if the request turns out to be legitimate, it must still pass through established verification procedures.
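As a concrete illustration, the sketch below shows one way such a procedure could be encoded in an internal approval tool. It is a minimal sketch under assumed policy values; the channel names, threshold, and `confirm_via_known_contact` helper are hypothetical, not a real system’s API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a multi-channel verification gate for payment
# requests. Channel names, the threshold, and the callback helper are
# illustrative assumptions, not a real system's API.

APPROVED_CHANNELS = {"ticketing_system", "signed_email"}
CALLBACK_THRESHOLD = 10_000  # amounts at or above this need a second channel

@dataclass
class PaymentRequest:
    requester: str
    amount: float
    channel: str  # channel the request arrived on

def confirm_via_known_contact(requester: str) -> bool:
    # Stub: in practice, call the requester back on a number from the
    # company directory, never one supplied in the request itself.
    # Fails closed until wired to a real callback flow.
    return False

def may_process(req: PaymentRequest) -> bool:
    # Requests arriving over unapproved channels (private messaging
    # apps, unexpected video calls) are rejected outright.
    if req.channel not in APPROVED_CHANNELS:
        return False
    # Large amounts always require out-of-band confirmation, no matter
    # how convincing the original voice or video seemed.
    if req.amount >= CALLBACK_THRESHOLD:
        return confirm_via_known_contact(req.requester)
    return True
```

The key design choice is failing closed: if the out-of-band check cannot be completed, the payment does not move, no matter how convincing the call was.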
The training should also be continuous and adaptive. Attacker tactics evolve constantly, so training content needs regular updates. The days of the once-a-year phishing training presentation are over.
Why Humans Are the Best Defense Against Deepfakes
As with other forms of social engineering, technical controls can reduce risk by filtering and blocking many malicious attempts before they reach employees. A deepfake attack that reaches a live call, however, relies purely on social engineering.
There are no malicious payloads or other technical indicators to trigger an alert from security systems, and no spoofed domain to double-check with verification tools.
At that point, the attack becomes a human problem: only the employee can recognize that something is off and stop the interaction before any damage is done.
That is the value of phishing training. It turns the human factor from what is often called the weakest link in cybersecurity into a proactive, reliable human firewall against social engineering attacks.
Conclusion
Deepfake attacks are a growing threat that many security leaders still overlook, though not for much longer. Losses from deepfake scams are rising sharply year over year, signaling that this is quickly becoming a mainstream attack vector rather than a niche risk.
By investing in realistic, continuous training and fostering a culture of multi-channel request verification, organizations can significantly reduce the risk of deepfake-driven fraud.