
OpenAI warns of AI-generated cybercrime at scale

By Bill Thompson
Last updated: October 9, 2025

Artificial intelligence is quietly recoding the criminal economy. Threat actors aren’t using large language models to invent attacks no one has imagined; they are using AI to automate what they already did, only faster, more cheaply, and at greater scale.

OpenAI says it has disrupted more than 40 networks that violated its usage policies, offering a look at how attackers have wedded AI to their existing playbooks.

Table of Contents
  • AI inside the criminal playbook and attack pipelines
  • State actors add AI to surveillance and population tracking
  • Faster, not novel, but still risky for cyber defenders
  • What defenders should expect now from AI-enabled threats
[Image: Digital padlock and binary code illustrating OpenAI’s warning on AI-generated cybercrime at scale]

The bottom line is practical uplift for attackers: fewer telltale errors, more believable lures, faster iteration, and wider reach.

AI inside the criminal playbook and attack pipelines

The report describes the steady incorporation of AI into everyday cybercriminal operations. “Researchers observed a number of attempts to leverage models for generating components common to malware operations (specifically: remote access tooling, credential theft utilities, obfuscation layers, crypters, and payload packagers), but also for debugging and refactoring code more effectively.”

Attackers are also building multi-model pipelines. OpenAI points to a probable Russia-linked actor that used various AI tools to create video prompts, social posts, and newsy short clips engineered for virality: a pipeline for fraud, disinformation, and amplification across platforms.

In other cases, Chinese-language accounts asked for help writing persuasive phishing text and troubleshooting delivery problems. OpenAI notes that the activity dovetails with tradecraft previously linked by independent researchers to groups such as UTA0388, a threat actor known for its historical focus on technology supply chains and academia.

Adaptation and evasion are part of the pattern too. Some networks, including operations based in Cambodia, Myanmar, and Nigeria, asked models to strip stylistic tics such as em dashes and other conspicuous punctuation, hoping to lower the odds that their content would be flagged as AI-generated. This “style laundering” underscores how rapidly attackers internalize public debates over detection signals and respond with countermeasures.
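To see why punctuation-based tells make such a weak detection signal, consider a toy scorer in Python. Everything here is illustrative: the marker list and the per-100-words scoring are assumptions for the sketch, not signals taken from OpenAI’s report, and the hypothetical launder function shows how a single string-replacement pass strips the signal.

```python
import re

# Hypothetical surface markers sometimes attributed to AI-generated text.
# The marker list and scoring are assumptions for illustration, not signals
# taken from OpenAI's report.
STYLE_MARKERS = {
    "em_dash": re.compile("\u2014"),
    "curly_quotes": re.compile("[\u201c\u201d]"),
    "bulleted_lines": re.compile(r"^\s*[\u2022*-] ", re.MULTILINE),
}

def style_signal_score(text: str) -> float:
    """Return the density of stylistic markers per 100 words."""
    words = max(len(text.split()), 1)
    hits = sum(len(pattern.findall(text)) for pattern in STYLE_MARKERS.values())
    return 100.0 * hits / words

def launder(text: str) -> str:
    """A single replacement pass strips the punctuation-based signals."""
    return (text.replace("\u2014", ", ")
                .replace("\u201c", '"')
                .replace("\u201d", '"'))
```

The asymmetry is the point: building and tuning a detector like this takes real effort, while laundering the signal out takes one line of string replacement.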

State actors add AI to surveillance and population tracking

OpenAI describes disrupting accounts tied to entities in the People’s Republic of China that used models to draft proposals for monitoring social media at scale. Other requests sought help designing systems to cross-reference transportation bookings with police records, which would let authorities learn, for instance, when and where a given person had taken a train or plane: a crucial capability for tracking targeted populations, such as Uyghurs.

[Image: OpenAI warns of AI-generated cybercrime at scale, from phishing scams to automated malware]

One network asked for help identifying the funding behind an account critical of the Chinese government. None of these requests involved novel offensive methods, but they show how AI can expedite research, automate analysis, and sharpen surveillance.

Faster, not novel, but still risky for cyber defenders

OpenAI notes that, so far, its models have generally refused requests that would enable “novel” attacks unknown to the security community. The measurable risk is throughput. When models generate, translate, and localize text in real time, convincing phishing campaigns can be produced en masse and the telltale localization errors disappear. Development cycles shorten and operator skill barriers fall when models handle the debugging and churn out fixes on demand.

This acceleration arrives in an already costly threat environment. The FBI’s Internet Crime Complaint Center, for example, recorded losses of at least $10.7 billion last year from more than 300,000 domestic and international complaints about cyber-enabled schemes, a figure that has grown steadily over the past five years. Europol has likewise cautioned that generative models are lowering the cost of entry for fraud, social engineering, and information operations while increasing the number of adversaries who can operate at a given level of sophistication.

What defenders should expect now from AI-enabled threats

Anticipate more of these “human-in-the-loop” attack pipelines: operators use AI to ideate, prototype, and triage, then apply human judgment where it counts most, namely targets, timing, and monetization.

Expect, too, an uptick in fake personas, localized disinformation, and short-form video assets tuned by feedback from platform analytics.

On the defensive side, OpenAI’s takedowns demonstrate that platform-level controls can blunt abuse, especially when combined with behavior monitoring that looks for suspicious patterns of use. Resources outside the platforms, such as MITRE ATLAS and advisories from national cybersecurity agencies, help teams map AI-assisted tactics and calibrate controls accordingly.
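What behavior monitoring for suspicious patterns of use might look like can be sketched in a few lines of Python. This is a minimal illustration under assumed inputs: it presumes the platform already tags each request with a coarse category (the tag names and the 30% threshold below are hypothetical), and it scores the mix of activity over a window rather than any single prompt, since one request proves little on its own.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical coarse categories a platform might assign to requests;
# these names are assumptions for the sketch, not OpenAI's taxonomy.
SUSPICIOUS_TAGS = {
    "obfuscation_help",
    "crypter_request",
    "credential_tooling",
    "phishing_copy",
    "style_laundering",
}

@dataclass
class AccountWindow:
    account_id: str
    request_tags: list[str]  # one coarse tag per request in the window

def flag_for_review(window: AccountWindow, ratio_threshold: float = 0.30) -> bool:
    """Flag accounts whose request mix skews toward abuse-adjacent categories."""
    counts = Counter(window.request_tags)
    total = sum(counts.values())
    if total == 0:
        return False
    suspicious = sum(counts[tag] for tag in SUSPICIOUS_TAGS)
    return suspicious / total >= ratio_threshold
```

Real platform controls are far richer, but the shape (aggregate over time, score the pattern rather than the prompt) matches what the report describes.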

  • Ensure that AI access is aligned with least privilege.
  • Apply data loss prevention to prompts and outputs.
  • Strengthen identity controls with phishing-resistant multifactor authentication.
  • Expand security awareness training with an AI chapter: teach employees that lures no longer carry the clumsy typos and dated formatting of older scams, and prepare them for hyper-polished, multilingual phishing and convincing synthetic media.
  • Monitor telemetry for unusual spikes in content generation, translation volume, or code-refactoring activity from a cutout account as a potential indicator of compromise (a minimal sketch follows this list).
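Here is a minimal sketch of the telemetry check in the last bullet, assuming you already collect per-account daily event counts; the eight-day minimum window and the 3-sigma threshold are illustrative defaults, not recommendations from the report.

```python
import statistics

def spike_alert(daily_counts: list[float], z_threshold: float = 3.0) -> bool:
    """Flag today's volume if it sits far outside the account's own baseline.

    daily_counts holds a trailing window of per-account event counts
    (content generations, translations, or code-refactor calls per day),
    oldest first, with today's count last.
    """
    if len(daily_counts) < 8:  # require a baseline before alerting
        return False
    *history, today = daily_counts
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history) or 1.0  # guard against zero variance
    return (today - mean) / stdev >= z_threshold
```

Paired with identity controls, a spike like this from a single cutout account is exactly the kind of anomaly worth routing to a human analyst.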

The takeaway is clear: AI isn’t handing cybercriminals unknown superweapons, but it is serving as a force multiplier for the tools and tactics they already use. That speed differential matters, and defenders will have to match it with AI-driven detection, response, and resilience of their own.

Bill Thompson is a veteran technology columnist and digital culture analyst with decades of experience reporting on the intersection of media, society, and the internet. His commentary has been featured across major publications and global broadcasters. Known for exploring the social impact of digital transformation, Bill writes with a focus on ethics, innovation, and the future of information.