Security researchers are raising the alarm over a newly documented scam that abuses Grok, the AI chatbot xAI built into X. By pairing clickbait videos with cloaked URLs and then prompting Grok to "identify" who made them, bad actors can get the bot to hand targets a working malicious link, all without ever sending a phishing DM.
How the ‘Grokking’ scheme operates
Threat actors are buying video ads for adult sites designed to stop the scroll, according to Guardio Labs. The twist is in the metadata: the scammers tuck a malicious URL into the tiny "From:" field beneath the video, a spot that X's regular link-scanning tools don't inspect.
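To see why the usual scanners miss it, consider a simplified sketch of such an ad payload. The field names here are assumptions for illustration, not X's actual ad schema:

```python
import re

# Hypothetical ad payload, for illustration only. The visible text is clean,
# so a scanner that checks only "body" sees nothing suspicious; the malicious
# URL rides along in the small metadata field shown as "From:" under the video.
ad_payload = {
    "body": "You won't believe who made this clip...",
    "video_url": "https://cdn.example.com/clip.mp4",
    "metadata": {
        "from": "https://malicious.example.com/landing",  # the hidden link
    },
}

def naive_scan(ad: dict) -> list[str]:
    """Flags URLs only in the visible body -- the blind spot being exploited."""
    return re.findall(r"https?://\S+", ad["body"])

print(naive_scan(ad_payload))  # [] -- the hidden link sails through
```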
From there, the playbook goes like this: an account comments on the ad asking where the clip came from. Grok, an assistant built to be helpful and to draw on everything it sees, responds with a live, clickable link to the domain buried in that metadata. BleepingComputer was the first to report how this effectively turns the chatbot into a kind of trusted messenger for unsafe links.
And because Grok's responses can be indexed by search engines, the malicious sites also gain an SEO boost. The damage trickles, and in some cases floods, out beyond X, surfacing in search results and hooking users who never even set eyes on the original ad.
Why AI answers supercharge malvertising
Users tend to trust answers from the platform's own assistant more than random replies in a thread. That trust shifts the perceived burden of verification onto the bot and away from the user. In security terms, this is classic SEO poisoning with an influencer angle bolted on: attackers piggyback on existing credibility, then let an AI run with it.
This isn't an isolated trend. Several security companies, including Sophos and Malwarebytes, have tracked a significant rise in malvertising campaigns, where paid ads, search rankings, and bogus "helpful" attributions funnel victims to droppers and info-stealers. What Grok changes is the scale and velocity: a single prompt can yield a clean, well-formatted, eminently clickable answer that bypasses the doubt a sketchy account handle would normally raise.
What security researchers recommend
Guardio Labs recommends paying close attention to all the fields around video ads, especially the subtle ‘From:’ line that obscures the destination.
If you don’t know the domain, don’t ask Grok to look it up — and don’t click.
Enable link-warning interstitials and other settings that block or preview untrusted links on X. Features vary by region and account type, but the goal is the same: force a pause before the browser loads a page.
Use built-in protections like Microsoft Defender SmartScreen or your browser's safe-browsing checks, and keep the browser and its extensions updated. Opening unfamiliar links from a non-admin account, in a hardened sandbox, or in a secondary browser profile reduces the blast radius if something slips through.
Cancel any unexpected downloads and scan your system. The F.B.I.'s Internet Crime Complaint Center reports billions of dollars in annual losses tied to exactly this kind of scam, where a single tap can open the door to password-stealing and remote-access malware, and social platforms are a common storefront.
What X and AI teams should fix next
Ad metadata requires the same scrutiny as visible text. Basic link-checking and URL sanitization across every field attached to creative assets would nip this problem in the bud. Independent researchers tell us the issue has been raised with X engineers privately; a formal, transparent remediation plan would do more to restore their confidence, and that of the developers they work with.
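As a rough sketch of what scanning "across every field" could mean in practice, the snippet below walks an ad object recursively and extracts URLs from all string values, not just the visible text. The payload shape is assumed, and a real pipeline would feed the results into a reputation or blocklist service:

```python
import re
from typing import Any, Iterator

URL_RE = re.compile(r"https?://[^\s\"'<>]+")

def extract_urls(value: Any) -> Iterator[str]:
    """Recursively walk an ad object and yield every URL, wherever it hides."""
    if isinstance(value, str):
        yield from URL_RE.findall(value)
    elif isinstance(value, dict):
        for child in value.values():
            yield from extract_urls(child)
    elif isinstance(value, (list, tuple)):
        for child in value:
            yield from extract_urls(child)

# Every hit -- including one tucked into a "from" metadata field -- would
# then go through the same reputation check as a visible link.
for url in extract_urls({"metadata": {"from": "https://malicious.example.com/x"}}):
    print("check:", url)
```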
Grok should not render ad-adjacent metadata as hyperlinks, and it should clearly disclose when it is summarizing content from an ad. Making links in replies attached to paid posts non-clickable, or attaching a warning banner, would cut the vast majority of the risk without neutering the assistant.
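One low-cost way to reduce that interactivity is defanging, a convention security analysts already use when sharing malicious links: rewrite the URL so it stays readable but clients won't auto-link it. A minimal sketch:

```python
def defang(url: str) -> str:
    """Rewrite a URL so clients won't auto-link it ("hxxp" scheme,
    bracketed dots), keeping it human-readable but not clickable."""
    return (url.replace("https://", "hxxps://")
               .replace("http://", "hxxp://")
               .replace(".", "[.]"))

print(defang("https://malicious.example.com/landing"))
# hxxps://malicious[.]example[.]com/landing
```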
There are also mechanisms for limiting the search spillover: platforms can tag bot replies that contain outbound links with noindex directives, or downrank them in internal search.
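Mechanically, that is a small change wherever the reply page is served. A sketch, assuming a generic server-side hook rather than X's actual stack:

```python
def robots_headers(is_bot_reply: bool, has_outbound_link: bool) -> dict[str, str]:
    """Return extra response headers for a reply page. X-Robots-Tag is a
    standard HTTP header that major search engines honor for indexing."""
    if is_bot_reply and has_outbound_link:
        return {"X-Robots-Tag": "noindex, nofollow"}
    return {}

print(robots_headers(True, True))   # {'X-Robots-Tag': 'noindex, nofollow'}
print(robots_headers(False, True))  # {} -- ordinary replies index normally
```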
The ads-integrity community has long pushed for layered malvertising controls, including the programs run by the Trustworthy Accountability Group, and applying the same rigor to machine-generated, ad-adjacent content is now table stakes.
A familiar development on a new channel
Social platforms have long been abused for crypto scams, state-backed propaganda, and high-profile account takeovers. AI assistants baked into those platforms can inadvertently amplify the same campaigns by turning obscure metadata into confident, clickable answers.
The takeaway is straightforward. Treat chatbot-provided links, especially those beneath ads, like unsolicited links in email. Confirm the domain, search for the site's name plus "malware" or "scam," and go forward only if you can verify the destination yourself. Grok can make a bad link look trustworthy; whether it gets clicked is still up to you.