From Phishing to Deepfakes: The New Face of Social Engineering


For decades, phishing attacks have been the bread and butter of cybercrime. Most of us have encountered the familiar signs—an email with a suspicious link or a message from a “bank” urgently asking for your account details.

But cybercrime tactics evolve with technology, and phishing is now evolving into deepfakes. These hyper-realistic audio and video forgeries are crafted to convince victims they're interacting with someone they know, exploiting trust and a sense of urgency, and they're reshaping the threat landscape.

Phishing: The Start of the Social Engineering Game

Social engineering—manipulating people into sharing confidential information—has always relied on deception. Phishing emails were one of the most common methods, targeting large groups of unsuspecting victims with generic emails pretending to be legitimate contacts.

Over time, these tactics evolved. Spear-phishing, which uses personal details to tailor messages to individuals, became more common. Smishing—SMS-based phishing—quickly followed. And then there’s vishing, where attackers use voice calls to impersonate legitimate sources, such as bank representatives or government officials.

Deepfakes: The Next Evolution of Social Engineering

While these traditional social engineering tactics are still effective, they have limitations. Savvy users can often spot grammatical errors, suspicious links, or unexpected requests that raise red flags.

“Phishing is not going away. It is evolving with deepfakes,” emphasized Morgan Wright, Chief Security Advisor at SentinelOne. “The use of AI to increase the authenticity and credibility of messages and images is designed to overcome any skepticism of the person consuming the content. The cognitive capabilities of humans react stronger to images rather than text, so deepfakes tap into this natural response.”

By adding convincing visuals or audio that mimic familiar faces and voices, deepfakes take manipulation to another level. Their realism makes them much harder to identify than traditional social engineering lures, and far more persuasive as a result.

How AI Is Powering These Attacks

Deepfakes use advanced AI to analyze video or audio of the person being impersonated, then mimic that person's expressions and speech with striking precision.

At the core of this process is deep learning, a subset of machine learning that trains a neural network to generate realistic, synthetic content. By analyzing datasets of real footage, the AI learns to produce new video or audio that convincingly mimics the person.

The most advanced deepfakes are often produced using Generative Adversarial Networks (GANs). These networks pit two AI models against each other: one generates fake content, and the other tries to detect it. Each round of this competition refines the fake until it becomes nearly indistinguishable from genuine footage.
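
To make that dynamic concrete, here is a minimal sketch of the adversarial training loop in PyTorch. It trains on a toy two-dimensional distribution rather than faces or voices, and every layer size and name is illustrative; real deepfake systems are vastly larger and more specialized.

```python
# Minimal GAN training loop: a sketch of the adversarial setup described
# above, trained on a toy 2-D distribution instead of real faces or voices.
import torch
import torch.nn as nn

LATENT_DIM, DATA_DIM, BATCH = 8, 2, 64  # illustrative toy sizes

# Generator: maps random noise to synthetic samples.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 32), nn.ReLU(), nn.Linear(32, DATA_DIM)
)
# Discriminator: outputs the probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(DATA_DIM, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    # "Real" data stands in for genuine footage; here, a shifted Gaussian.
    real = torch.randn(BATCH, DATA_DIM) * 0.5 + 2.0
    fake = generator(torch.randn(BATCH, LATENT_DIM))

    # 1) Train the discriminator to tell real samples from fakes.
    d_opt.zero_grad()
    d_loss = (loss_fn(discriminator(real), torch.ones(BATCH, 1))
              + loss_fn(discriminator(fake.detach()), torch.zeros(BATCH, 1)))
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator to fool the (just-updated) discriminator.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(BATCH, 1))
    g_loss.backward()
    g_opt.step()
```

Every improvement in the discriminator forces the generator to produce more convincing output, and that arms race is precisely what makes mature deepfakes so hard to spot.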

The Real Threat: Case Studies of Deepfake Attacks

Nicole Carignan, VP of Strategic Cyber AI at Darktrace, explained, “The ability for attackers to use generative AI to produce deepfake audio, imagery, and video is a growing concern, as attackers are increasingly using deepfakes to start sophisticated social engineering attacks. While the use of AI for deepfake generation is now very real, the risk of image and media manipulation is not new. The challenge now is that AI can be used to lower the skill barrier to entry and speed up production to a higher quality.”

Deepfake use in cyberattacks has real consequences, particularly for businesses and political organizations. One of the most notable recent cases occurred in early 2024, when a deepfake impersonating the CFO and other employees of British engineering firm Arup was used to scam the company out of $25 million. The phony video call convinced a staff member to transfer the funds to accounts in Hong Kong.

In another instance in early 2024, robocalls used a deepfake of President Joe Biden’s voice to discourage New Hampshire voters from casting their ballots in the primary election. This political deepfake attack showed how malicious actors could manipulate public opinion by imitating well-known figures, potentially undermining the democratic process.

The frequency and scale of deepfake-based cybercrime have skyrocketed. According to Sumsub’s 2023 report, deepfake fraud incidents increased tenfold from 2022 to 2023, as more cybercriminals used this technology to bypass security protocols and manipulate identity verification systems.

Defending Against Deepfakes: Tools and Strategies

Detecting deepfakes is not easy. Telltale signs, such as irregular facial movements or mismatched lip-syncing, are becoming less obvious as deepfake technology matures. Fortunately, tools are available to help detect deepfakes.

“AI-powered social engineering attacks are indeed on the rise, with deepfakes adding a new layer of complexity to threat detection,” declared Stephen Kowski, Field CTO at SlashNext. “Organizations need robust, multi-layered security solutions that can analyze content across various channels in real-time. Advanced AI and machine learning technologies are crucial for identifying subtle signs of manipulation and fraudulent activities.”

One approach is to leverage machine learning to identify inconsistencies in deepfakes. Companies like Sensity and Microsoft are developing AI-based tools to detect manipulated video and audio. These tools analyze visual and auditory data for subtle irregularities, such as pixel-level distortions or mismatched acoustic patterns. However, while AI can help catch some fakes, attackers are constantly improving their techniques to stay ahead of detection systems.
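
Vendors' detection tools are proprietary, but many follow a common pattern: sample frames from a clip, score each for manipulation artifacts, and flag the clip when the average score crosses a threshold. The sketch below illustrates that skeleton in Python; `load_artifact_scorer` is a hypothetical stand-in for a trained model and does not reflect Sensity's or Microsoft's actual APIs.

```python
# Frame-sampling detector skeleton. `load_artifact_scorer` is a hypothetical
# stand-in for a trained manipulation-detection model, not a real vendor API.
import cv2
import numpy as np

def load_artifact_scorer():
    # Placeholder: a real system would load a trained CNN that returns a
    # per-frame manipulation probability. This dummy returns random scores.
    def score(frame: np.ndarray) -> float:
        return float(np.random.rand())
    return score

def looks_manipulated(path: str, threshold: float = 0.7,
                      sample_every: int = 30) -> bool:
    """Sample frames, score each, and flag the clip if the mean score
    crosses the threshold."""
    scorer = load_artifact_scorer()
    capture = cv2.VideoCapture(path)
    scores, index = [], 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % sample_every == 0:  # roughly one frame per second at 30 fps
            scores.append(scorer(frame))
        index += 1
    capture.release()
    return bool(scores) and float(np.mean(scores)) >= threshold
```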

Another emerging solution involves blockchain technology. Blockchain’s decentralized ledger system could be used to authenticate video and audio files, ensuring their integrity from the moment they’re created. Linking digital content to a verified blockchain entry makes it far more difficult for attackers to pass off manipulated files as genuine. This solution is still in its infancy, but it holds promise for industries like media and law enforcement, where content verification is vital.
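
The underlying mechanism is simpler than it sounds: compute a cryptographic hash of the file when it's created, anchor that hash in an append-only ledger, and recompute the hash whenever the file's authenticity is questioned. The sketch below shows the verification half; the in-memory LEDGER and its lookup are toy stand-ins for a real blockchain client.

```python
# Content-integrity check: recompute a file's hash and compare it with the
# value anchored at creation time. LEDGER and its lookup are toy stand-ins
# for a real blockchain or transparency-log client.
import hashlib

# Toy in-memory "ledger": content ID -> SHA-256 recorded at creation.
LEDGER = {"press-briefing-001": "<hash recorded when the file was created>"}

def file_sha256(path: str) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MB chunks
            digest.update(chunk)
    return digest.hexdigest()

def verify_media(path: str, content_id: str) -> bool:
    """True only if the file is bit-for-bit identical to what was registered."""
    return file_sha256(path) == LEDGER.get(content_id)
```

Because any edit to the file changes its hash, a manipulated copy can never match the ledger entry, no matter how visually convincing it is.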

Training Employees: The First Line of Defense

But the best technology in the world won’t be enough if your employees fall for a well-crafted deepfake. Humans tend to be the weakest link in most cyber defenses. This is why many organizations are prioritizing deepfake awareness training as part of their cybersecurity programs.

To combat deepfake-related social engineering, some companies are incorporating simulated and real-world scenarios into their training programs. These simulations let employees practice spotting and responding to potential deepfake attacks in a controlled environment, better preparing them for the real thing.

Additionally, companies are updating their verification protocols. For example, if an employee receives a high-stakes video call or an audio message requesting sensitive information, they’re encouraged to verify the identity of the sender through a secondary channel, such as an in-person meeting, a phone call to a known number, or even a secure messaging app.
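
That rule is straightforward enough to codify. The sketch below expresses a toy version of such a policy in Python; the action and channel names are illustrative assumptions, not an established standard.

```python
# Toy out-of-band verification policy. Action and channel names are
# illustrative assumptions, not an established standard.
from typing import Optional

HIGH_RISK_ACTIONS = {"wire_transfer", "credential_reset", "data_export"}
TRUSTED_CHANNELS = {"known_phone_number", "in_person", "secure_messenger"}

def request_allowed(action: str, request_channel: str,
                    confirmed_via: Optional[str]) -> bool:
    """A high-risk request must be confirmed on a trusted channel that is
    different from the channel the request arrived on."""
    if action not in HIGH_RISK_ACTIONS:
        return True
    return (confirmed_via in TRUSTED_CHANNELS
            and confirmed_via != request_channel)

# A convincing video call alone should never authorize a transfer:
assert not request_allowed("wire_transfer", "video_call", None)
assert request_allowed("wire_transfer", "video_call", "known_phone_number")
```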

Comprehensive education and clear reporting protocols are essential for minimizing deepfake threats. By creating a culture of vigilance, organizations can significantly reduce the risk of falling victim to deepfakes.

Staying Ahead of the Deepfake Threat

The rise of AI-powered deepfakes represents a new frontier in social engineering. As these attacks become more prevalent, it’s essential for organizations to stay ahead of the curve. By understanding the evolving landscape of social engineering and implementing both technological and human-centric defenses, businesses can mitigate the growing risks posed by these AI-driven deceptions.

Author

Michael Ansaldo is a veteran technology and business journalist and a Contributing Writer for Security Buzz, with experience covering cybersecurity and a range of IT topics. His work has appeared in numerous publications, including Wired, Enterprise.nxt, PCWorld, Computerworld, TechHive, GreenBiz, Mac|Life, and Executive Travel.