Snapchat is bringing open-ended generative imaging straight into its camera with a new Imagine Lens that converts plain-language prompts into shareable visuals. Type a description, and the Lens will create, edit, or re-roll a Snap on the spot—then let you post it to your Story, send it to friends, or save it for use beyond the app.
What the new Lens actually does
Unlike Snapchat’s earlier AI effects, which were locked to specific styles or scenes, Imagine Lens accepts open-ended prompts. You can ask for an illustrated pet portrait, a surreal landscape behind your selfie, or a stylized character version of yourself. The Lens also includes curated suggestions for quick inspiration, from multi-panel comic treatments to playful caricatures and action shots that place the subject in unexpected settings.

Crucially, the prompt lives in the caption bar, so you can tweak a single phrase to change tone, art style, or composition without starting over. That iterative loop—prompt, preview, edit, regenerate—mirrors the workflow of popular desktop tools, but condensed into a mobile-first AR experience.
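That loop is easier to see in code. The sketch below is a hypothetical model of the workflow, not Snap's implementation: the `PromptSession` class, its method names, and the stubbed `render` output are all invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class PromptSession:
    """Hypothetical model of the prompt -> preview -> edit -> regenerate loop."""
    prompt: str
    history: list = field(default_factory=list)

    def render(self) -> str:
        # Stand-in for the actual generation call; records each attempt.
        result = f"image<{self.prompt}>"
        self.history.append((self.prompt, result))
        return result

    def edit(self, old: str, new: str) -> str:
        # Tweak a single phrase in place, then regenerate -- no starting over.
        self.prompt = self.prompt.replace(old, new)
        return self.render()

session = PromptSession("watercolor portrait of my dog")
first = session.render()
revised = session.edit("watercolor", "comic-book")
print(revised)  # image<comic-book portrait of my dog>
```

The point of the sketch: because the prompt is mutable state rather than a one-shot input, each regeneration reuses everything the user already typed, which is what makes the caption-bar placement matter.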
How to access it and what it costs
Imagine Lens is featured prominently in the Lens Carousel for Snapchat+ Platinum and Lens+ subscribers and is also listed under the Exclusive category. After selecting the Lens, tap the caption area to enter or refine your prompt.
Snapchat+ Platinum is priced at $15.99 per month, while Lens+ costs $8.99 per month. Gating an open-prompt generator behind premium tiers is a pragmatic move: it helps moderate demand for compute-heavy AI features and provides a clearer path to monetization alongside the broader Snapchat+ bundle.
What’s powering the images
Snap has said its Lenses are built with a mix of in-house and best-in-class industry models. The company previously unveiled a lightweight text-to-image research model optimized for mobile, signaling an intention to reduce latency and bring more creation steps on-device when possible. In practice, expect a hybrid pipeline: rapid camera feedback, server-side diffusion for quality, and post-processing to blend results convincingly into selfies and scenes.
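A hybrid pipeline of that shape might be staged roughly as follows. This is a speculative sketch based only on the description above; every function name and stage boundary here is an assumption, not Snap's actual architecture.

```python
# Speculative sketch of a hybrid on-device/server generation pipeline.
# All names and stages are assumptions for illustration.

def fast_preview(frame: str) -> str:
    # Lightweight on-device pass for immediate camera feedback.
    return f"preview({frame})"

def server_diffusion(prompt: str, frame: str) -> str:
    # Heavier server-side diffusion pass for final image quality.
    return f"diffused({prompt}, {frame})"

def blend(generated: str, frame: str) -> str:
    # Post-processing step compositing the result back into the scene.
    return f"blend({generated}, {frame})"

def hybrid_pipeline(prompt: str, frame: str) -> str:
    _ = fast_preview(frame)  # shown to the user right away
    generated = server_diffusion(prompt, frame)
    return blend(generated, frame)

result = hybrid_pipeline("surreal landscape", "selfie")
```

The design tradeoff the sketch captures: the cheap first pass keeps the camera feeling responsive while the expensive diffusion pass runs remotely, and the blend step is what makes the output sit convincingly in the original frame.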
This aligns with Snap’s longer-term strategy to fuse AR and generative AI. The company introduced video-capable generative Lenses and released Lens Studio tools on iOS and the web to lower the barrier for creators. Tighter integration of AI authoring inside the camera should accelerate that ecosystem—especially for short-form storytelling where speed matters.
Why it matters for Snap and creators
Snapchat’s AR has long been a daily habit for a massive audience, and industry analysts estimate that hundreds of millions of users engage with Lenses each day. Embedding open-prompt generation directly in the capture flow turns the camera into a creative assistant, not just a filter drawer. For creators, that means faster concepting: convert a joke into a four-panel gag, test visual styles for a meme format, or mock up a brand pitch without leaving the app.
For Snap, it strengthens differentiation against social rivals rolling out their own AI art and editing tools. Meta has introduced image-generation utilities across its apps, Google has expanded generative editing in Photos, and short-form video platforms have leaned into AI effects. Snapchat’s edge is immediacy—producing and remixing content at the exact moment of capture, where social intent is highest.
Safety, limits, and responsible use
Open-ended generators raise familiar questions about misuse, likeness manipulation, and intellectual property. Snap has historically enforced safety filters and policy checks across its camera features, and the same expectations apply here: prompt moderation, content restrictions, and transparent cues when AI is in play. Industry groups like the Partnership on AI encourage clear disclosures for synthetic media; Snapchat’s in-Lens experience and sharing flows are well positioned to surface those signals.
Users should also expect occasional mismatches between prompt and output—a known limitation of diffusion-based systems—along with style variance across re-rolls. The upside is creative serendipity; the tradeoff is learning to guide the model with concise, descriptive language.
Early take: practical scenarios
Imagine Lens shines in quick-turn formats: turning friends into comic-book heroes for a group Story, generating a whimsical backdrop when the real scene is dull, or crafting a stylized avatar for a profile pic. Local businesses and creators can storyboard a concept photo in seconds, then iterate toward a final look before committing to a shoot.
Because prompts are editable at any time, the Lens becomes a living mood board inside the camera. That’s a subtle but powerful shift—from finding the “right filter” to articulating the idea and letting the system do the rendering.
The bottom line
By introducing an open-prompt generator directly into its AR camera, Snapchat is turning text into instant visuals at the exact moment users are most inclined to create and share. It’s a natural extension of Snap’s AR playbook, a smart premium differentiator, and—if the underlying models keep improving—a credible way to make everyday Snaps feel authored, not just captured.