
● ● ●
● Deepfake incidents have increased by 3,000% since 2022, with financial losses exceeding $200 million in Q1 2025 alone.
● Deepfakes have evolved to real-time interactive impersonation capabilities, with voice cloning technology now requiring as little as 3-5 seconds of sample audio to create an 85% voice match.
● The human ability to detect sophisticated deepfakes is extremely poor, at only 24.5% accuracy—worse than random chance.
● 80% of companies report having no established protocols or response plans for handling deepfake-based attacks.
● Global losses from deepfake fraud are projected to reach $40 billion by 2027 if current trends continue.
● ● ●
A new breed of hybrid information attack has emerged, blending sophisticated artificial intelligence with disinformation and social engineering to devastating effect. Below, we examine three significant hybrid information attacks that occurred between 2024 and 2025, each resulting in substantial financial losses or operational disruption.
● Case Study 1: Arup Engineering Deepfake Conference Call Fraud
The Arup case represents a watershed moment in corporate fraud, demonstrating how deepfake technology can create convincing multi-person video conferences that bypass traditional verification methods. The $25 million loss highlights the urgent need for out-of-band verification protocols and the growing sophistication of AI-enabled deception tactics.
Attack Overview
In January 2024, British multinational engineering firm Arup fell victim to an elaborate deepfake scam that resulted in the loss of approximately $25 million. A finance employee based in the company’s Hong Kong office was manipulated into transferring funds after participating in what appeared to be a legitimate video conference call with the company’s chief financial officer and other colleagues.
The sophisticated nature of this attack marked a significant evolution in deepfake fraud. Rather than simply impersonating a single executive, the attackers created an entire video conference with multiple deepfake participants who “looked and sounded exactly like real colleagues.” This multi-person deepfake approach represented a new level of complexity in social engineering attacks.
As Rob Greig, Arup’s global chief information officer, later acknowledged: “Like many other businesses around the globe, our operations are subject to regular attacks, including invoice fraud, phishing scams, WhatsApp voice spoofing, and deepfakes. What we have seen is that the number and sophistication of these attacks has been rising sharply in recent months.”
Attack Methodology and Tactics
The Arup attack followed a multi-stage approach that combined several sophisticated techniques:
Initial Contact: The attack began with an email to the finance worker discussing a confidential transaction. The employee initially suspected it might be a phishing attempt.
Video Conference Deception: To overcome the employee’s skepticism, the attackers organized a video conference call. During this call, the finance worker interacted with what appeared to be multiple colleagues, including the company’s CFO. In reality, every participant except the victim was an AI-generated deepfake.
Trust Exploitation: The deepfake participants were convincing enough that the employee’s initial suspicions were overcome. The familiar faces and voices created a false sense of security and legitimacy.
Fund Transfer Execution: Following the convincing video call, the employee authorized a series of 15 transactions totaling approximately $25.6 million (HK$200 million) to five local bank accounts.
Vulnerabilities Exploited
The Arup case exposed several critical vulnerabilities that exist in many corporate environments:
Human Trust Mechanisms: The most fundamental vulnerability exploited was the natural human tendency to trust what we see and hear. As one cybersecurity expert noted, “The emotional realism of a cloned voice removes the mental barrier to skepticism. If it sounds like your loved one, your rational defenses tend to shut down.” This psychological vulnerability is particularly difficult to address through technical means alone.
Verification Protocol Gaps: The incident revealed inadequate verification procedures for high-value financial transfers. Despite the substantial sum involved, the company apparently lacked robust out-of-band authentication requirements that could have prevented the fraud; a sketch of such a control follows this list.
Deepfake Detection Limitations: Human ability to detect sophisticated deepfakes is extremely poor. Research shows that people can only correctly identify high-quality deepfake videos about 24.5% of the time—worse than random chance. This makes relying on human perception for verification increasingly dangerous.
Remote Work Normalization: The post-pandemic normalization of video conferences as a primary communication channel created an environment where such attacks could succeed. Employees have become accustomed to important business being conducted via video calls, reducing the inherent suspicion that might have existed previously.
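To make out-of-band verification concrete, the following is a minimal sketch of a callback-based approval gate for high-value transfers. It is illustrative only: the threshold, the helper stubs, and names such as TransferRequest and verify_out_of_band are assumptions for this example, not a description of Arup's actual controls. The core idea is that confirmation must travel over a channel the requester did not supply, so a fabricated video conference cannot complete the loop.

```python
import secrets
from dataclasses import dataclass

# Illustrative threshold; real limits are set by treasury and risk teams.
HIGH_VALUE_THRESHOLD_USD = 100_000

@dataclass
class TransferRequest:
    requester: str           # employee keying in the transfer
    claimed_approver: str    # executive who allegedly authorized it
    beneficiary_account: str
    amount_usd: float

def lookup_registered_phone(employee: str) -> str:
    """Stub: fetch the approver's number from the HR directory, never
    from contact details or meeting links supplied with the request."""
    return "+000-0000-0000"  # placeholder

def confirm_by_callback(phone: str, challenge: str) -> bool:
    """Stub: dial the directory number and have the approver read the
    challenge back. A deepfake video call cannot answer a phone line
    the attacker does not control."""
    return False  # fail closed until a real integration exists

def verify_out_of_band(req: TransferRequest) -> bool:
    """Release a high-value transfer only after confirmation on a
    channel independent of the one the instruction arrived on."""
    if req.amount_usd < HIGH_VALUE_THRESHOLD_USD:
        return True  # below threshold: normal controls apply
    challenge = secrets.token_hex(4)  # one-time code tied to this transfer
    phone = lookup_registered_phone(req.claimed_approver)
    return confirm_by_callback(phone, challenge)

# An Arup-style request is held until the approver's known number confirms it.
req = TransferRequest("finance_clerk", "cfo", "ACCT-1", 4_000_000.0)
print(verify_out_of_band(req))  # False -> transfer held for review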
Financial Impact and Timeline
Total Loss: Approximately $25.6 million (HK$200 million)
Transaction Pattern: The funds were sent across 15 separate transactions to five different local bank accounts
Timeline: The incident occurred in January 2024, though it was not publicly disclosed until February 2024 when Hong Kong police reported the case
Discovery: The fraud was only discovered when the employee later discussed the transactions with Arup’s head office
The case represents one of the largest publicly reported financial losses directly attributable to a deepfake attack. The multi-transaction approach suggests the attackers were sophisticated enough to understand how to structure the transfers to potentially avoid triggering immediate fraud detection systems.
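The structuring pattern described above, roughly $25.6 million split into 15 transfers across five accounts, is precisely what per-transaction alerts miss and aggregate velocity rules catch. The sketch below is a hypothetical illustration of that gap; the limits, field names, and amounts are assumptions chosen to mirror the reported pattern, not the parameters of any real fraud system.

```python
from collections import defaultdict

# Illustrative limits: flag any single transfer over $5M, or any requester
# whose transfers total more than $5M within the review window.
PER_TXN_LIMIT_USD = 5_000_000
AGGREGATE_LIMIT_USD = 5_000_000

# Simplified model of the reported pattern: ~$25.5M split into 15 transfers
# to five accounts, each comfortably under the per-transaction limit.
transfers = [
    {"requester": "finance_clerk", "account": f"ACCT-{i % 5}", "usd": 1_700_000}
    for i in range(15)
]

# Rule 1: per-transaction threshold -- structuring slips straight past it.
per_txn_alerts = [t for t in transfers if t["usd"] > PER_TXN_LIMIT_USD]
print(len(per_txn_alerts))  # 0

# Rule 2: aggregate by requester over the window -- the pattern surfaces.
totals = defaultdict(int)
for t in transfers:
    totals[t["requester"]] += t["usd"]
velocity_alerts = {who: usd for who, usd in totals.items() if usd > AGGREGATE_LIMIT_USD}
print(velocity_alerts)  # {'finance_clerk': 25500000}
```

Grouping by beneficiary or by time window works the same way; the point is that no single transfer has to look anomalous for the batch to be caught.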
● Case Study 2: European Energy Conglomerate Voice Cloning Wire Transfer
This case demonstrates the evolution of voice cloning attacks from simple recorded messages to interactive, real-time fraud capable of bypassing traditional verification methods. The $25 million loss highlights how AI-powered voice synthesis has become sophisticated enough to replicate not just words but emotional nuances, cadence, and speech patterns that previously served as verification signals.
Attack Overview
In early 2025, a European energy conglomerate fell victim to a sophisticated deepfake audio attack that resulted in the fraudulent transfer of $25 million. The incident involved attackers using an AI-generated clone of the company’s Chief Financial Officer’s voice to issue urgent wire transfer instructions to a financial employee.
This case represents a significant evolution in voice cloning attacks: the replica convincingly carried the qualities that once served as informal verification signals. As reported by Right-Hand AI, "The voice sounded exactly right—pauses, tone, cadence—and the funds were gone within hours."
The attack is particularly notable for occurring in real-time, suggesting the attackers had developed or obtained technology capable of generating responsive voice deepfakes during live conversations rather than simply playing pre-recorded messages.
Attack Methodology and Tactics
The attack on the European energy conglomerate employed several sophisticated techniques:
Voice Sample Acquisition: The attackers likely gathered voice samples of the CFO from publicly available sources such as earnings calls, interviews, or company presentations. Modern voice cloning technology requires remarkably little source material—as little as 3-5 seconds of audio can be sufficient to create a convincing voice match.
AI Voice Synthesis: Using advanced voice cloning technology, the attackers created a highly realistic simulation of the CFO's voice, reproducing not just its basic sound but its emotional qualities and speech patterns.
Urgent Transfer Pretext: The attackers created a scenario of urgency, a common tactic in social engineering that pressures victims to act quickly without thorough verification. The fraudulent CFO voice issued “live instructions for an urgent wire transfer.”
Psychological Manipulation: The attack exploited the psychological impact of hearing a familiar voice in a position of authority. As the expert quoted in the Arup case observed, the emotional realism of a cloned voice removes the mental barrier to skepticism.
The technical sophistication of this attack reflects the rapid advancement of voice cloning technology. By 2025, platforms like “Xanthorox AI” had emerged that could “automate both voice cloning and live call delivery—removing the need for manual preparation.” This represented a significant evolution from earlier voice cloning attacks that relied on pre-recorded messages.
Vulnerabilities Exploited
The attack on the European energy conglomerate exploited several key vulnerabilities:
Voice-Based Authentication: The company likely relied on voice recognition—formal or informal—as a means of verification. By 2025, this had become an increasingly vulnerable authentication method, with 91% of U.S. banks reconsidering voice biometric authentication due to AI cloning risks.
Urgency and Authority: The attack exploited the psychological impact of perceived authority combined with urgency. Financial employees are often conditioned to respond quickly to executive requests, especially when framed as time-sensitive.
Insufficient Multi-Factor Verification: The company apparently lacked robust out-of-band verification protocols for large financial transfers. The success of the attack suggests the absence of secondary or tertiary verification requirements independent of voice communication; a sketch of one such policy follows this list.
Public Information Exposure: The attack leveraged publicly available voice samples of the CFO, highlighting how normal business communications can inadvertently provide training data for AI voice cloning.
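One structural answer to a convincing cloned voice is to give voice instructions no authentication weight at all. The sketch below models a hypothetical dual-approval policy in which large transfers release only on two independent approvals recorded inside the authenticated payment system; the threshold, class names, and roles are assumptions for illustration, not any company's actual policy.

```python
from dataclasses import dataclass, field

# Illustrative threshold above which two in-system approvals are required.
DUAL_APPROVAL_THRESHOLD_USD = 1_000_000

@dataclass
class WireTransfer:
    amount_usd: float
    # Approvals logged inside the payment system (SSO plus hardware MFA),
    # never inferred from phone calls, emails, or meetings.
    system_approvals: set = field(default_factory=set)

def may_execute(wire: WireTransfer, voice_instruction: bool = False) -> bool:
    """voice_instruction is deliberately ignored: however convincing the
    caller sounds, only authenticated in-system approvals count."""
    required = 2 if wire.amount_usd >= DUAL_APPROVAL_THRESHOLD_USD else 1
    return len(wire.system_approvals) >= required

# The cloned-CFO call alone cannot move money...
urgent = WireTransfer(amount_usd=25_000_000)
print(may_execute(urgent, voice_instruction=True))  # False

# ...only two authenticated approvals in the payment system can.
urgent.system_approvals.update({"cfo", "treasurer"})
print(may_execute(urgent))  # True
```

The design point is that the human judgment the attacker manipulates ("that sounded like the CFO") is removed from the control path entirely.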
Financial Impact and Timeline
Total Loss: $25 million
Timeline: Early 2025, with the funds “gone within hours” of the fraudulent transfer
Attack Vector: Real-time voice cloning impersonation of the company’s CFO
The rapid timeline of the attack—from initial contact to completed transfer—highlights the efficiency with which such frauds can now be executed. The substantial sum involved also demonstrates that attackers are becoming increasingly ambitious in their targets, moving beyond smaller test amounts to major financial fraud.
This case is part of a broader trend of escalating financial losses from deepfake fraud. By 2025, deepfake-enabled fraud had caused over $200 million in financial losses in just the first quarter of the year. The energy sector had become a particular target, with deepfake vishing attacks surging by 1,600% in Q1 2025 compared to Q4 2024.
● Case Study 3: Scattered Spider Airline Industry Attacks
The Scattered Spider airline industry attacks represent a sophisticated evolution of social engineering that combines traditional techniques with AI-powered impersonation. These attacks demonstrate how threat actors are adapting to target critical infrastructure sectors with potentially catastrophic operational impacts beyond mere financial losses. The case highlights the vulnerability of help desk systems and identity management infrastructure to social engineering attacks enhanced by deepfake technology.
Attack Overview
In June 2025, the cybercriminal group known as Scattered Spider (also tracked as 0ktapus, Octo Tempest, Scatter Swine, Muddled Libra, and UNC3944) expanded their targeting to include major U.S.-based airlines, representing a significant escalation in their attack scope beyond their previous focus on retail, insurance, and gaming sectors.
These attacks were particularly notable for their sophisticated combination of traditional social engineering techniques with advanced AI-powered impersonation capabilities, including deepfake audio and video technology. The group specifically targeted airlines’ IT helpdesk systems, using AI-generated voice and video to impersonate legitimate employees and trick helpdesk agents into resetting multi-factor authentication (MFA) credentials.
The aviation industry attacks represented a concerning evolution in Scattered Spider’s tactics, as disruption to airline operations could potentially impact critical transportation infrastructure and passenger safety, elevating the stakes beyond mere financial fraud.
Attack Methodology and Tactics
Scattered Spider’s airline industry attacks employed a sophisticated multi-stage methodology:
Initial Reconnaissance: The group conducted detailed research on target organizations, gathering information about employees, organizational structure, and internal processes. This allowed them to create convincing impersonations and understand which employees to target.
AI-Powered Impersonation: Security teams from Unit 42 and Mandiant reported that Scattered Spider had increased their usage of “AI-generated voice and video to impersonate legitimate employees in real time.” This represented an evolution from their earlier tactics that relied primarily on voice-based social engineering without AI enhancement.
Help Desk Targeting: The group specifically focused on IT help desks, exploiting the trust relationship between support staff and employees. As CrowdStrike reported, “SCATTERED SPIDER operators routinely accurately respond to help desk verification questions when impersonating legitimate employees in calls made to request password and/or multifactor authentication (MFA) resets.”
Identity Compromise: Once they had successfully convinced help desk personnel to reset credentials, the attackers gained access to Microsoft Entra ID, single sign-on (SSO), and virtual desktop infrastructure (VDI) accounts.
Lateral Movement: After establishing initial access, Scattered Spider moved laterally through the network, focusing particularly on cloud environments and VMware infrastructure.
Data Exfiltration and Ransomware Deployment: The group’s ultimate goal was typically a combination of data theft and ransomware deployment. As CrowdStrike noted, “SCATTERED SPIDER’s primary goal is deploying ransomware to a victim’s VMware ESXi infrastructure. If an incident is contained prior to ransomware deployment, the adversary often threatens to publicly leak stolen data and demands a ransom.”
Vulnerabilities Exploited
Help Desk Authentication Weaknesses: The primary vulnerability exploited was inadequate authentication protocols at IT help desks. Traditional verification methods like security questions and callback verification had become unreliable against sophisticated AI-generated impersonation; a hardened reset workflow is sketched at the end of this subsection.
Identity Management Systems: The attacks specifically targeted identity infrastructure, including Microsoft Entra ID and single sign-on systems, exploiting the centralized nature of these systems—once compromised, they provided broad access to multiple resources.
Human Trust Factors: The attacks fundamentally exploited human psychology and the natural tendency to trust what appears to be a familiar face or voice. By 2025, deepfake technology had become sophisticated enough that “only 61% of participants could tell the difference between AI-generated people and real ones.”
Cloud and Virtualization Infrastructure: Scattered Spider specifically targeted VMware infrastructure, exploiting vulnerabilities in these environments to deploy ransomware and exfiltrate data.
Organizational Pressure: Help desk employees often face pressure to resolve issues quickly and provide good customer service, creating a tension between security and efficiency that attackers could exploit.
The aviation industry might have been particularly vulnerable to these attacks due to its complex IT infrastructure, large workforce, and the critical nature of its operations. As the FBI noted, “These actors rely on social engineering techniques, often impersonating employees or contractors to deceive IT help desks into granting access. These techniques frequently involve methods to bypass multi-factor authentication (MFA), such as convincing help desk services to add unauthorized MFA devices to compromised accounts.”
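Because real-time deepfakes can defeat both voice and video recognition by help-desk staff, the defensive pattern is to verify possession of something the caller cannot clone. The following is a minimal sketch of such a reset workflow: MFA resets release only after a push approval on an already-enrolled device plus a manager acknowledgment logged in the ticketing system. The function names, stubs, and the policy itself are illustrative assumptions, not any airline's actual controls.

```python
from dataclasses import dataclass

@dataclass
class ResetRequest:
    employee_id: str
    caller_sounds_legitimate: bool  # the agent's impression of the call

def push_to_enrolled_device(employee_id: str) -> bool:
    """Stub: send an approval prompt to a device already enrolled for this
    employee. A deepfake caller cannot approve a push on hardware they do
    not physically hold."""
    return False  # fail closed until the identity-provider integration exists

def manager_confirms_in_ticket(employee_id: str) -> bool:
    """Stub: the employee's manager acknowledges the reset inside the
    authenticated ticketing system, not over the phone."""
    return False  # fail closed

def approve_mfa_reset(req: ResetRequest) -> bool:
    # Deliberately ignore how convincing the caller looks or sounds:
    # voice and video are no longer trustworthy verification signals.
    return (push_to_enrolled_device(req.employee_id)
            and manager_confirms_in_ticket(req.employee_id))

# A flawless deepfake call still cannot clear either possession check.
print(approve_mfa_reset(ResetRequest("emp-1024", caller_sounds_legitimate=True)))  # False
```

Legitimate lost-device cases would route to a slower, higher-assurance recovery process rather than back to the agent's discretion, which is exactly the pressure point these attacks exploit.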
Financial and Operational Impact
While specific financial losses from the Scattered Spider airline industry attacks have not been publicly quantified, the operational and potential financial impact can be inferred from similar attacks by the group:
Operational Disruption: Previous Scattered Spider attacks, such as the MGM Resorts breach, resulted in a “36-hour outage” of critical systems. Similar disruption to airline operations could cause flight delays, cancellations, and significant customer impact.
Financial Consequences: The MGM Resorts attack resulted in a “$100M hit to its Q3 results, one-time cyber consulting fees in the region of $10M, and a class-action lawsuit later settled for $45M.” Airline industry attacks could potentially cause similar or greater financial damage.
Ransomware Threats: Scattered Spider’s typical pattern involved threatening to leak stolen data if ransomware deployment was prevented, creating additional financial pressure through extortion.
Reputational Damage: Beyond immediate financial losses, such attacks could cause significant reputational damage to affected airlines, potentially impacting customer trust and future bookings.
The timing of these attacks in June 2025 was particularly concerning as it coincided with peak summer travel season, potentially maximizing the operational impact and leverage for extortion demands.
● ● ●
Contact expert@counterdis.info for more details. This service is provided by Inventive Insights LLC.