AI in Cyber Threats: How Adversaries Weaponize Generative Models

In the rapidly evolving landscape of cybersecurity, artificial intelligence has become a double-edged sword. While defenders harness AI for detection and response, threat actors are increasingly embedding these technologies into their attack chains. This Q&A explores the latest findings from threat intelligence research, covering how adversaries use AI for vulnerability discovery, malware development, autonomous operations, disinformation campaigns, and supply chain breaches. Each answer examines the ongoing transition from experimental AI use to industrial-scale exploitation.

1. How are threat actors using AI to discover vulnerabilities and generate exploits?

For the first time, security researchers have identified a threat actor that developed a zero-day exploit with the assistance of AI. This criminal group planned to deploy the exploit in a mass exploitation event, but proactive counter-discovery by intelligence teams may have prevented its use. Additionally, state-sponsored actors linked to the People's Republic of China (PRC) and the Democratic People's Republic of Korea (DPRK) are actively exploring how AI can accelerate vulnerability discovery. By training models on vast datasets of code and past exploits, adversaries can automate the identification of software weaknesses and generate tailored exploit code, significantly reducing the time and expertise traditionally required for such tasks.

Source: www.mandiant.com

2. In what ways does AI augment development for defense evasion?

AI-driven coding tools enable adversaries to rapidly build complex infrastructure suites and polymorphic malware that constantly change their code signatures to evade detection. For example, Russia-nexus threat actors have integrated AI-generated decoy logic into malware, creating obfuscation networks that mislead security systems. These AI-enabled development cycles allow attackers to automate the generation of new variants, making it harder for signature-based defenses to keep pace. By using large language models to rewrite segments of malicious code or to generate plausible-looking benign functions, adversaries can bypass static analysis tools and prolong the lifespan of their campaigns.
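Why trivially rewritten variants defeat hash-based signatures can be illustrated with a toy sketch (benign placeholder bytes, not real malware): changing even a single byte of a file yields an entirely different cryptographic hash, so a signature derived from one variant says nothing about the next.

```python
import hashlib

# Two "variants" of the same hypothetical payload, differing by one byte.
variant_a = b"example payload: do_something()"
variant_b = b"example payload: do_something();"  # one trailing byte added

sig_a = hashlib.sha256(variant_a).hexdigest()
sig_b = hashlib.sha256(variant_b).hexdigest()

# A signature database keyed on hashes of known-bad samples
# matches variant A but is blind to the trivially modified variant B.
known_bad = {sig_a}
print(sig_a in known_bad)  # True  -> known variant detected
print(sig_b in known_bad)  # False -> rewritten variant slips past
```

This is why automated variant generation, whether AI-assisted or not, pushes defenders toward behavioral and heuristic detection rather than static signatures alone.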

3. What is the significance of autonomous malware like PROMPTSPY?

PROMPTSPY represents a paradigm shift in attack orchestration. This AI-enabled malware interprets system states and uses language models to dynamically generate commands, manipulate victim environments, and offload operational decisions to AI. Our analysis reveals previously unreported capabilities, including the ability to adapt to network defenses in real time without human intervention. By automating reconnaissance, lateral movement, and exfiltration, PROMPTSPY allows threat actors to scale attacks massively while reducing the risk of detection. The malware's autonomous nature means it can continue operations even if the attacker's command-and-control infrastructure is disrupted.

4. How is AI being used as a research assistant and in information operations?

Adversaries treat AI as a high-speed research assistant for the entire attack lifecycle—from selecting targets to crafting phishing lures. More concerning is the shift toward agentic workflows, where AI frameworks autonomously plan and execute multi-step attacks. In information operations, AI enables the fabrication of digital consensus at scale. For example, the pro-Russia campaign "Operation Overload" used generative models to produce synthetic media and deepfake content, creating fake grassroots support. These tools can generate thousands of realistic social media posts, comments, and articles to manipulate public opinion, overwhelming fact-checkers and amplifying divisive narratives.


5. How do adversaries obtain obfuscated access to premium AI models?

Threat actors have developed professionalized middleware and automated registration pipelines to gain anonymized, premium-tier access to large language models while bypassing usage limits. This infrastructure often involves cycling through multiple trial accounts or using stolen payment credentials to subscribe to paid tiers. By obscuring their identity through proxies and VPNs, attackers can query models for malicious purposes—such as generating phishing emails or offensive code—without triggering rate limits. The illicit access also subsidizes their operations by exploiting free trial offers, creating a scalable abuse economy that undermines AI service providers' safeguards.

6. Why are supply chain attacks targeting AI environments a growing concern?

Adversaries like the group known as "TeamPCP" (UNC6780) now specifically target AI development environments and software dependencies as an initial access vector. By compromising AI libraries, model registries, or deployment pipelines, attackers can inject malicious code into widely used AI tools—affecting all downstream users. This supply chain approach amplifies the impact of a single breach, potentially compromising thousands of organizations that rely on compromised components. Early indications suggest these attacks can lead to data exfiltration, model poisoning, or backdoor insertion, making them a high-priority threat for the AI community.
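A standard countermeasure to this class of tampering is integrity verification: checking a downloaded dependency or model artifact against a published digest before it enters the pipeline. The sketch below uses hypothetical artifact names and digests purely for illustration.

```python
import hashlib

def verify_artifact(data: bytes, expected_sha256: str) -> bool:
    """Return True only if the artifact matches its published digest."""
    return hashlib.sha256(data).hexdigest() == expected_sha256

# Hypothetical trusted release of an AI library component, with the
# digest its maintainers would publish alongside it.
original = b"trusted model-loader release v1.2.3"
published_digest = hashlib.sha256(original).hexdigest()

# An attacker-modified copy of the same artifact fails verification,
# so the tampering is caught before it reaches downstream users.
tampered = original + b" + injected stub"

print(verify_artifact(original, published_digest))  # True
print(verify_artifact(tampered, published_digest))  # False
```

Pinning dependencies to exact digests in this way limits the blast radius of a compromised registry or build pipeline, since a modified component no longer matches its expected hash.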
