Security researchers are warning that scammers have found a novel way to weaponize Grok, the AI chatbot on X. By pairing eye-catching video ads with hidden URLs and then prompting Grok to “identify” the source, attackers are getting the bot to hand users a working link to malware-laced sites—no phishing DM required.
How the ‘Grokking’ scheme works
Guardio Labs reports that threat actors are buying video ads containing adult content to stop the scroll. The twist is in the metadata. The scammers tuck a malicious URL into the small “From:” field beneath the video, a spot that doesn’t trigger X’s usual link-scanning tools.
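To make the gap concrete, here is a minimal Python sketch of the flaw as Guardio describes it. The field names are hypothetical, since X's ad schema isn't public; the structural point is that a scanner which only reads the visible body text never sees a destination riding in a side field.

```python
import re

# Hypothetical ad object: X's real ad schema is not public. The structural
# point is that the visible text is clean while the URL rides in a side field.
ad_card = {
    "body_text": "You won't believe this clip...",
    "video_url": "https://video-cdn.example/clip.mp4",
    "from_field": "https://malicious.example/landing",  # the hidden destination
}

URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)

def naive_scan(card: dict) -> list[str]:
    """A scanner that only inspects the visible body text."""
    return URL_RE.findall(card["body_text"])

print(naive_scan(ad_card))  # [] : the hidden URL is never examined
```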

From there, the playbook is simple: an account, often run by the same operators, replies under the ad and tags Grok to ask where the clip came from. Grok, trained to be helpful and to extract visible context, answers with a live, clickable link to the domain embedded in that metadata. BleepingComputer first highlighted how this dynamic effectively turns the chatbot into a reliable courier for unsafe links.
Because Grok’s replies can be indexed by search engines, the malicious sites also get an SEO bump. That means a single bait post can ripple far beyond X, surfacing in search results and catching users who never saw the original ad.
Why AI answers supercharge malvertising
Users tend to assign more trust to answers delivered by a platform-native assistant than to random replies in a thread. That trust shifts the burden of verification from the user to the bot, which performs no such verification. In security terms, this is classic SEO poisoning meets social engineering: attackers use platform features to create the appearance of legitimacy, then let an AI amplify reach and credibility.
This isn’t an isolated trend. Multiple security firms, including Sophos and Malwarebytes, have tracked a rise in malvertising campaigns where paid promotion, search ranking, and fake “helpful” attributions funnel victims toward droppers and info-stealers. Grok changes the scale and speed: a single prompt can produce a clean, well-formatted, easily clickable answer that evades the skepticism users might apply to a sketchy account handle.
What security researchers recommend
Guardio Labs advises scrutinizing every field around video ads, especially the understated “From:” line that can hide a destination. If you don’t recognize the domain, don’t ask Grok to resolve it—and don’t click.
Turn on link-warning interstitials and any available settings that block or preview untrusted links within X. Features vary by region and account type, but the goal is the same: force a pause before the browser loads a page.

Use built-in protection such as Microsoft Defender SmartScreen or a browser’s safe-browsing checks, and keep your browser and extensions updated. Opening unknown links in a non-admin profile or a hardened secondary browser profile reduces blast radius if something slips through.
If a download starts unexpectedly, kill it and scan your system. The FBI’s Internet Crime Complaint Center has documented billions in annual losses tied to scams where a single click leads to credential theft or remote access malware, and social platforms are a recurring delivery channel.
What X and AI teams should fix next
Ad metadata needs the same scrutiny as visible text. Scanning and sanitizing links in fields attached to creative assets would blunt this tactic quickly. Independent researchers say X engineers have acknowledged the problem informally; a formal, transparent mitigation plan would help restore confidence.
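A minimal sketch of what that could look like, in Python and under the assumption (not confirmed by X) that ad payloads arrive as nested JSON-like objects: walk every string field, extract URL candidates, and feed each one through the same reputation checks applied to ordinary post links.

```python
import re
from typing import Iterator

URL_RE = re.compile(r"https?://[^\s\"'<>]+", re.IGNORECASE)

def iter_strings(obj) -> Iterator[str]:
    """Yield every string in a nested payload, whatever field it sits in."""
    if isinstance(obj, str):
        yield obj
    elif isinstance(obj, dict):
        for value in obj.values():
            yield from iter_strings(value)
    elif isinstance(obj, (list, tuple)):
        for item in obj:
            yield from iter_strings(item)

def extract_urls(ad_payload: dict) -> set[str]:
    """Collect URL candidates from every field, not just the visible body."""
    urls: set[str] = set()
    for text in iter_strings(ad_payload):
        urls.update(URL_RE.findall(text))
    return urls

# Each candidate would then pass through the same reputation checks
# (blocklists, Safe Browsing-style lookups) applied to ordinary post links.
```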
Grok should avoid turning ad-adjacent metadata into clickable links and clearly label when it’s summarizing content from an advertisement. Reducing link interactivity in replies tied to paid posts, or attaching a warning banner, would cut risk without neutering the assistant.
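One way to implement that, sketched in Python with hypothetical names (`render_reply` and `tied_to_paid_post` are illustrations, not X's API): defang any URL whose provenance is an ad and prepend a visible warning, so the information survives but the one-click path doesn't.

```python
import re

URL_RE = re.compile(r"https?://\S+", re.IGNORECASE)

def defang(url: str) -> str:
    """Keep a URL readable but not clickable: https://a.b becomes hxxps://a[.]b."""
    return url.replace("http", "hxxp", 1).replace(".", "[.]")

def render_reply(text: str, tied_to_paid_post: bool) -> str:
    # One possible policy: leave organic replies alone, defang any link whose
    # provenance is an advertisement, and prepend a visible warning.
    if not tied_to_paid_post:
        return text
    flagged = URL_RE.sub(lambda m: defang(m.group(0)), text)
    return "[Warning: link taken from ad metadata, not verified] " + flagged

print(render_reply("Source: https://malicious.example/clip", tied_to_paid_post=True))
```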
Platforms can also curb the search spillover by marking bot replies that contain external links with noindex tags or by downranking them in internal search. The Ads Integrity community, including programs run by the Trustworthy Accountability Group, has long recommended stronger malvertising controls; applying those to AI-generated context is now table stakes.
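The header mechanics are standard even if the policy is hypothetical: `X-Robots-Tag: noindex` is the documented way to keep a page out of search indexes without hiding it from users. A sketch, with the function name ours:

```python
def headers_for_reply_page(is_bot_reply: bool, has_external_link: bool) -> dict:
    """Build response headers for a reply's permalink page."""
    headers = {"Content-Type": "text/html; charset=utf-8"}
    if is_bot_reply and has_external_link:
        # Real, documented header: tells crawlers not to index or follow,
        # so the page stays visible to users but earns no SEO value.
        headers["X-Robots-Tag"] = "noindex, nofollow"
    return headers
```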
A familiar pattern on a new channel
Social platforms have repeatedly been used to spread crypto scams and state-backed propaganda, and high-profile account takeovers regularly accelerate the damage. AI assistants layered into these platforms can unintentionally amplify the same campaigns by converting obscure metadata into confident, clickable answers.
The takeaway is straightforward. Treat chatbot-supplied links—especially those appearing under ads—the same way you’d treat unsolicited links in email. Verify the domain, search for the site name plus “malware” or “scam,” and only proceed if you can independently confirm the destination. With Grok, the safest click may be no click at all.
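For readers who want one concrete habit, here is a small stdlib-Python check (the function name is ours) that prints the hostname a browser will actually contact, which is where lures like userinfo tricks and punycode lookalikes hide:

```python
from urllib.parse import urlsplit

def real_host(url: str) -> str:
    """Return the hostname a browser will actually contact."""
    host = urlsplit(url).hostname or ""
    # Labels starting with xn-- are punycode and can imitate another brand.
    if host.startswith("xn--") or ".xn--" in host:
        host += "  (punycode: inspect carefully)"
    return host

# The "x.com" before the @ is decoration; the request goes to malicious.example.
print(real_host("https://x.com@malicious.example/login"))  # malicious.example
```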