As we make our final foray into Unit 42's findings on social engineering, we encounter the most worrisome aspect of working as a cybersecurity professional today: understanding the adversary, including how they turn breakthroughs and innovations against organisations. By its nature, this task is a matter of keeping up with the Joneses (if the Joneses were constantly trying to undermine our defences and gain access to our assets), which means the defensive side is always on the back foot. It is not all in vain, though: stepping off the back foot starts with understanding how to get on level terms with the adversary. So, in this last review of Unit 42's work, we look at how the adversary gains an advantage through AI, how that advantage shows up today, and how we can build defences that make it difficult for the adversary to get comfortable.

Assessing the Aggressor

Like all good assessments, this one has to come in stages. We need to dig into the findings before we can make a solid plan for acting on them. First on the agenda, then, is unpacking the core elements of the Unit 42 report.

Adversarial Innovation

By "adversarial innovation," Unit 42 means that threat actors are not just sticking to old tricks; they are evolving their playbook. They combine traditional social-engineering techniques (phishing, help-desk pretexts) with increasingly sophisticated tooling. Rather than sending generic phishing emails, attackers now automate parts of their campaigns, use AI for real-time adaptation, and chain together more advanced steps. This innovation helps them move faster, scale up, and avoid detection.

AI-Crafted Lures and Voice Clones

Generative AI (GenAI) is central here. According to Unit 42, attackers use it to generate highly personalised lures, for example emails tailored with publicly available data about a target, or follow-ups that mimic a human tone. Beyond text, GenAI is being used to clone voices: threat actors recreate executive voices in callback scams, making the impersonation far more convincing. In more advanced cases, they maintain "live engagement" during impersonation campaigns, with AI helping to craft responses in real time so attackers do not have to rely solely on a human operator.

Impersonation Campaigns

Impersonation campaigns are social-engineering attacks in which the attacker pretends to be someone trusted (a colleague, executive, or support agent) in order to manipulate the victim. Unit 42 distinguishes between two models:
Situating the Findings

Putting Unit 42's claims in context: how well do they match other research, and where do they differ? To build a richer picture, we need to look beyond a single report.

Unit 42's observation that AI is amplifying social engineering is well supported. Independent academic research confirms that generative AI greatly enhances the realism and scale of phishing attacks: a systematic review published in Artificial Intelligence Review identifies "realistic content creation," "advanced targeting and personalisation," and "automated attack infrastructure" as key dimensions. Research also documents voice-cloning attacks outside Unit 42's work. For instance, IBM researchers demonstrated "audio-jacking," where AI-generated voice clones hijack live phone calls and respond dynamically based on trigger phrases. In consumer fraud, there are reports that only a few seconds of someone's audio, say from social media, can be enough to clone their voice convincingly, leading to emotionally manipulative scams. Academic work also shows proactive threats: a paper on the malware SpearBot describes how attackers can use large language models in an adversarial generate-criticise-refine loop to create spear-phishing emails that evade detection, making phishing more deceptive and harder to block. Another study, AbuseGPT, shows how generative AI chatbots can be misused to produce smishing (SMS-based phishing) campaigns. More broadly, international organisations are raising the alarm: the UN Office on Drugs and Crime (UNODC) reports (PDF) that generative AI is being used to clone CEO voices to authorise fraudulent wire transfers and to adapt social-engineering messages in real time.

So, Unit 42's findings are not isolated: they reflect a broader trend seen across independent research, security vendors, and international bodies. Their data adds real-world incident-response cases, which strengthens the argument that these are not just theoretical risks; it is really here and it is really happening.

Getting Even and Getting Ahead

Given the scale and sophistication of the problems Unit 42 describes, and the corroborating evidence from other researchers, here are actionable measures to mitigate the risks.

First, organisations need to treat social engineering not as a "user problem" but as a systemic risk. Traditional awareness training is not enough. There should be strong identity governance: ensure users have only the permissions they need, remove or limit over-permissioned accounts, and enforce strict controls on identity-recovery workflows. Unit 42 highlights that many attacks exploit "over-permissioned access" and "unverified trust in human processes."

Second, improve detection and visibility. Use behavioural analytics and identity-threat-detection platforms to monitor for anomalous identity usage, especially in high-risk contexts such as privileged accounts, and layer this with Identity Threat Detection and Response (ITDR). By building visibility into human workflows, not just technical alerts, defenders can spot when something feels "off" (for example, repeated verification requests or odd use of help-desk channels).

Third, defend against voice-based deepfakes. Challenge-response systems are promising: academic work such as PITCH proposes validating incoming voice calls via unpredictable "challenges", for instance asking the caller to repeat or reformulate a phrase, or embedding audio puzzles. Organisations should consider integrating these into high-risk communication paths such as finance approvals and executive callback lines; the sketch below illustrates the basic shape of such a check.
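To make that concrete, here is a minimal Python sketch of the workflow a high-risk callback line might follow, assuming hypothetical helper names (issue_challenge, verify_response) and a speech-to-text transcript as input. It is not the PITCH implementation; real systems would use challenges designed to be hard for real-time voice synthesis to satisfy, rather than a simple repeated phrase.

```python
# Minimal challenge-response sketch for high-risk voice calls (illustrative only).
# All names here are hypothetical; this is not the PITCH implementation.
import secrets
import time

CHALLENGE_WORDS = ["amber", "falcon", "ledger", "quartz", "willow", "copper"]
CHALLENGE_TTL_SECONDS = 60  # short lifetime so a challenge cannot be reused later

def issue_challenge() -> dict:
    """Generate an unpredictable phrase the caller must repeat back."""
    phrase = " ".join(secrets.choice(CHALLENGE_WORDS) for _ in range(3))
    return {"phrase": phrase, "issued_at": time.time()}

def verify_response(challenge: dict, caller_transcript: str) -> bool:
    """Accept the caller only if the phrase comes back before the challenge expires."""
    if time.time() - challenge["issued_at"] > CHALLENGE_TTL_SECONDS:
        return False  # stale challenge: treat as a failed verification
    return challenge["phrase"] in caller_transcript.lower()

# Example: a finance callback line issues the challenge, reads it out, and only
# proceeds with the request if the caller's transcribed reply passes the check.
challenge = issue_challenge()
print("Ask the caller to repeat:", challenge["phrase"])
print(verify_response(challenge, "sure, that's " + challenge["phrase"]))  # True
```

The value lies less in the code than in the process: the challenge is unpredictable, short-lived, and verified before any sensitive action proceeds.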
Fourth, adopt zero-trust principles for human processes. Just as network traffic is subject to zero trust, human interactions within identity workflows should be treated with scepticism. For example, require multi-factor or out-of-band verification for any unusual request, especially one involving financial transactions, access escalations, or help-desk interventions; a sketch of such a policy check appears after this list of measures.

Fifth, prepare for agentic AI risk. Since Unit 42 observed early use of "agentic AI" (AI systems that autonomously execute multi-step tasks), security teams must assess AI usage policies and build guardrails. This includes threat modelling around AI agents, securing prompt design, and monitoring any automated tasks that could be subverted.

Finally, build a proactive simulation and training programme. Run red-team exercises that include AI-enabled phishing, vishing, callback scams, and impersonation, and use them to test both technical controls and human reactions. Awareness is still valuable, but it is only effective when backed by infrastructure and active practice.
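As a companion to the fourth measure above, here is a minimal, hypothetical sketch of a policy check that holds unusual human-process requests until they are verified out of band. The HumanRequest fields, risk thresholds, and action names are illustrative assumptions, not Unit 42 guidance or any vendor's API.

```python
# Hypothetical "zero trust for human processes" policy check (illustrative only).
from dataclasses import dataclass

HIGH_RISK_ACTIONS = {"wire_transfer", "access_escalation", "mfa_reset", "password_reset"}

@dataclass
class HumanRequest:
    action: str               # what is being asked for
    channel: str              # e.g. "email", "phone", "helpdesk_ticket"
    requester_verified: bool  # has identity already been proven out of band?
    amount: float = 0.0       # monetary value, if any

def requires_out_of_band_verification(req: HumanRequest) -> bool:
    """Treat unusual or high-impact requests as unverified until proven otherwise."""
    if req.action in HIGH_RISK_ACTIONS and not req.requester_verified:
        return True
    if req.amount > 10_000:   # threshold is illustrative, not a recommendation
        return True
    if req.channel == "phone" and not req.requester_verified:
        return True           # a familiar voice alone is no longer sufficient proof
    return False

# Example: a phoned-in "urgent" transfer request is held until verified.
req = HumanRequest(action="wire_transfer", channel="phone",
                   requester_verified=False, amount=25_000)
print(requires_out_of_band_verification(req))  # True -> call back on a known number
```

In practice, a rule like this would live inside ticketing, help-desk, or payment-approval workflows, with a positive result routing the request to a verification step such as a callback on a known number.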
Overall, the Unit 42 research paints a worrying but credible picture of how AI is intensifying social engineering. The risks it describes are backed by independent research and real incident reports. Defending against these threats will require a shift from blaming "users" to addressing systemic identity and process vulnerabilities, combined with technical controls targeted at AI-driven deception.

Bibliography