A North Korean threat actor known as Kimsuky leveraged OpenAI-powered image generation to reinforce a targeted phishing attack, according to new analysis from security firm Genians.
The hackers concocted a realistic image of a South Korean military ID to legitimize their lure, but they left behind telltale metadata referencing “GPT-4o OpenAI API” and “ChatGPT,” investigators said. A deepfake-detection algorithm rated the ID photo 98 percent likely to be AI-generated.

How the Lure Worked
The campaign impersonated a South Korean defense-related institution that issues identification to military-affiliated individuals, Genians says. The emails originated from a domain made to closely resemble that of the real organization and carried a ZIP attachment whose file name included the recipient’s name, a personalizing touch added for credibility.
A Windows shortcut disguised as documentation lurked inside the archive. If executed, it launched a PowerShell command that connected the victim’s machine to a remote server, installed backdoor malware and then quietly retrieved an image of what appears to be a fake government ID as cover. The decoy sold the story line (“this is routine ID processing”) while the intrusion played out in the background.
AI Fingerprints on the Fake ID
Genians said the fake ID contained metadata showing it had been generated by OpenAI’s GPT-4o model via API access.
While OpenAI’s systems are designed to refuse requests for images of actual government IDs, researchers theorize that the attackers skirted the restrictions by framing their request as a harmless mock-up or template, an approach commonly referred to as a jailbreak.
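Metadata like this is easy to check, and just as easy for an attacker to strip, so it should be treated as one weak signal rather than proof. The following is a minimal triage sketch in Python, assuming the third-party Pillow package; the marker strings are illustrative examples, not an authoritative list of what OpenAI embeds.

```python
# Triage sketch: dump an image's embedded metadata and flag strings
# associated with AI generators. Requires the third-party Pillow
# package (pip install Pillow); the marker list below is illustrative.
from PIL import Image
from PIL.ExifTags import TAGS

AI_MARKERS = ("openai", "gpt-4o", "chatgpt", "dall-e")  # hypothetical watch list

def scan_image_metadata(path: str) -> list[str]:
    hits = []
    with Image.open(path) as img:
        # Format-specific info (e.g., PNG text chunks) lands in img.info
        fields = {str(k): str(v) for k, v in img.info.items()}
        # EXIF tags (common in JPEGs), decoded to readable names
        for tag_id, value in img.getexif().items():
            fields[TAGS.get(tag_id, str(tag_id))] = str(value)
    for name, value in fields.items():
        if any(marker in value.lower() for marker in AI_MARKERS):
            hits.append(f"{name}: {value[:80]}")
    return hits

if __name__ == "__main__":
    import sys
    for finding in scan_image_metadata(sys.argv[1]):
        print("possible AI-generation marker ->", finding)
```

The absence of markers proves nothing; robust provenance checking relies on cryptographic standards such as C2PA rather than string matching.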
This is not the first time state-backed hackers have crossed paths with the AI industry. OpenAI and Microsoft Threat Intelligence have previously observed and disrupted activity from multiple government-aligned groups, including North Korea’s Kimsuky (which Microsoft tracks as Emerald Sleet). The newest findings indicate that such actors are still hunting for openings, especially in image tools that can create believable visual cover for social engineering.
Why This Matters
Phishing already trades heavily in trust cues: logos, signatures, familiar language. Factor in photo-realistic AI-generated images, and the bar for skepticism rises considerably. For many recipients, a quick glance at an official-looking badge or form is “close enough,” particularly when the sender appears to know their name. It was that mixture of personalization and polished appearance that Kimsuky exploited.

The trend dovetails with other data on the effectiveness of social engineering. The latest Verizon Data Breach Investigations Report finds that most breaches are a “result of the human element,” citing phishing and pretexting as two of the most common forms of initial entry. Kimsuky’s track record, charted by South Korea’s National Intelligence Service, Mandiant and other researchers, centers on credential theft, espionage and recruitment fraud targeting defense, policy and research communities.
Defensive Takeaways
Harden the fundamentals where this campaign operates: aggressively filter or quarantine incoming ZIP and shortcut files from outside sources; insist that users access attachments through approved, scanned portals; and discourage the manual “copy and paste this command” pattern, blocking it outright in stricter environments. Multifactor authentication, while not perfect, goes a long way toward blunting the credential theft that can follow a successful phish.
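As a rough illustration of attachment screening, here is a sketch in Python of the kind of check a mail pipeline could run. The extension lists are hypothetical examples, and file-name checks alone are a weak filter that production tooling would supplement with content inspection.

```python
# Minimal sketch of an attachment screen: flag ZIP archives that
# contain Windows shortcut (.lnk) files or double-extension names
# masquerading as documents.
import zipfile

SUSPICIOUS_EXTS = (".lnk", ".js", ".vbs", ".hta")  # illustrative block list
DECOY_DOC_EXTS = (".pdf", ".docx", ".hwp", ".jpg")

def screen_zip(path: str) -> list[str]:
    findings = []
    with zipfile.ZipFile(path) as archive:
        for name in archive.namelist():
            lower = name.lower()
            if lower.endswith(SUSPICIOUS_EXTS):
                findings.append(f"executable-style member: {name}")
            # e.g. "ID_Request.pdf.lnk": a document extension buried
            # in front of a live one
            parts = lower.rsplit(".", 2)
            if len(parts) == 3 and f".{parts[1]}" in DECOY_DOC_EXTS:
                findings.append(f"double extension: {name}")
    return findings
```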
On endpoints, constrain and log PowerShell (e.g., constrained language mode and script block logging), watch for unusual child processes spawned by shortcut files, and apply strong egress controls to choke off command-and-control traffic.
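Auditing that such logging is actually in place can be scripted. Below is a Windows-only sketch in Python that reads the standard Group Policy registry key for PowerShell script block logging; a fleet-wide audit would run an equivalent check through endpoint management tooling rather than ad hoc scripts.

```python
# Windows-only sketch: check whether PowerShell script block logging
# is enabled via its Group Policy registry key. Returns False if the
# key is absent (i.e., the policy is unconfigured).
import winreg

POLICY_KEY = r"SOFTWARE\Policies\Microsoft\Windows\PowerShell\ScriptBlockLogging"

def script_block_logging_enabled() -> bool:
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, POLICY_KEY) as key:
            value, _ = winreg.QueryValueEx(key, "EnableScriptBlockLogging")
            return value == 1
    except FileNotFoundError:
        return False

if __name__ == "__main__":
    print("Script block logging enabled:", script_block_logging_enabled())
```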
Deploy SPF, DKIM and DMARC on the mail domain to reduce brand spoofing. And update awareness training: a professional-looking ID photo is no proof of legitimacy in the age of generative AI. Route verifications through official channels, not email attachments.
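Whether those records exist is easy to verify. Here is a quick sketch in Python using the third-party dnspython package; example.com is a placeholder, and a real deployment would also validate record syntax and DMARC policy strength.

```python
# Quick posture check for a sending domain: look up the SPF record on
# the domain itself and the DMARC record at the _dmarc subdomain.
# Requires the third-party dnspython package (pip install dnspython).
import dns.resolver

def get_txt(name: str) -> list[str]:
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(r.strings).decode() for r in answers]

def check_email_auth(domain: str) -> None:
    spf = [t for t in get_txt(domain) if t.startswith("v=spf1")]
    dmarc = [t for t in get_txt(f"_dmarc.{domain}") if t.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'present' if spf else 'MISSING'}, "
          f"DMARC {'present' if dmarc else 'MISSING'}")

check_email_auth("example.com")  # placeholder domain
```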
A Wider Pattern of AI Abuse
Across the industry, researchers from Google’s Threat Analysis Group, Recorded Future and Mandiant have followed North Korean operators as they experimented with deepfakes, fake recruiter personas and AI-written outreach to build trust or land remote roles within Western companies. The Kimsuky case adds something new: AI-generated images used not as the payload, but as cover that keeps the intrusion from drawing scrutiny.
As generative models improve, defenders should expect more polished decoys: forged licenses, invoices or utility notices that pass casual scrutiny. The pragmatic approach is not to demonize AI, but to assume adversaries will adopt it and to raise verification standards accordingly. If a message pressures you to open an archive, run a command or take an ID at face value, stop: trust has to be earned, not assumed on the strength of a picture.