Ask Gemini to “make a widescreen landscape” or a vertical poster and, these days, you’ll be served the same menu item every time: a square. Users report that the model no longer adheres to explicit aspect-ratio prompts, delivering 1:1 images even after acknowledging the request. For creatives who depend on AI for graphic templates, thumbnails, or social graphics, that gap between instruction and output is more than a hiccup; it’s workflow-breaking.
The behavior is a departure from previous performance, when Gemini would automatically select share-friendly aspect ratios such as 16:9 or 9:16. According to company reps on its official support forums, the square-only output is a mistake and a patch is in development. In the meantime, users are trading workarounds while continuing to push for a resolution.
Why Aspect Ratio Matters for Composition and Platforms
Aspect ratio isn’t cosmetic. It determines composition, narrative focus, and how an image reads in a split second. A 16:9 frame gives you cinematic context; a 9:16 canvas prioritizes vertical storytelling; and 1.91:1 fits standard social ads without letterboxing. YouTube’s documentation recommends 16:9 for standard playback, Meta’s guidelines emphasize 4:5 and 1:1 for feeds, and the IAB’s creative specs list wide banners as a display-advertising workhorse. If your tool only produces squares, you either crop and risk losing key subjects or redesign around the tool’s constraints.
For deadline-driven teams, that friction compounds quickly. A marketer wants a 1200×628 hero for a landing page; an editor needs a 16:9 banner that leaves room for text-safe zones; a mobile designer is looking for 9:16 backgrounds for app stories. Square-only output means forced compromises or costly post-processing, the kind of limitation that undercuts the speed gains AI was supposed to deliver.
What Probably Broke Under the Hood of Image Generation
Many current text-to-image models are diffusion models trained on “buckets” of typical resolutions. Even when a service supports multiple aspect ratios, its native checkpoints and schedulers are usually tuned for square canvases around 1024×1024, which are memory-efficient and well represented in training data. If a routing, guardrail, or UI parameter mapping regressed after a deploy, it can silently fall back to the safest bucket (square) while still returning a confident “understood your request.”
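To make the failure mode concrete, here is a minimal, hypothetical sketch of how a resolution-bucket lookup can quietly default to square; the bucket table, function name, and key format are illustrative assumptions, not Gemini’s actual code.

```python
# Hypothetical sketch: a ratio-to-resolution lookup that silently falls back
# to the square bucket when the requested key no longer matches.
BUCKETS = {
    "1:1": (1024, 1024),
    "16:9": (1344, 768),
    "9:16": (768, 1344),
}

def resolve_canvas(requested_ratio: str) -> tuple[int, int]:
    # A regressed mapping (e.g., a renamed key after a deploy) makes every
    # lookup miss and quietly return the "safe" square default, no error raised.
    return BUCKETS.get(requested_ratio, BUCKETS["1:1"])

print(resolve_canvas("16:9"))   # (1344, 768) when the mapping works
print(resolve_canvas("16x9"))   # (1024, 1024): silent fallback to square
```

The point of the sketch is that nothing in this path throws an exception, which matches the user-facing symptom: the model “agrees” and then renders a square anyway.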
We’ve seen similar quirks elsewhere. Earlier commercial models imposed fixed sizes and only gradually relaxed ratios within bounds. Midjourney’s aspect controls matured over successive releases, and Stable Diffusion XL improved multi-aspect fidelity through aspect-ratio bucketing and a refiner stage. When those mappings fail, systems commonly fall back to a safe default rather than reject the prompt, which is exactly what users appear to be encountering here.
What Google Says, and What Users Report So Far
Posts have been appearing across the company’s help community and developer forums in recent weeks as complaints build. The routine is consistent: Gemini recognizes requests like “16:9 picture” or “vertical 9:16 poster,” confirms the intent, and outputs a square. Support agents (via 9to5Mac) have replied that this is not meant to happen and that engineers are looking at ways to resolve it.
That acknowledgment matters, but the lack of a status page or an in-product banner adds to the confusion. For production tools, predictability is everything; when a system appears to agree to your constraints and then ignores them, trust erodes. It also hurts reproducibility: teams can’t easily document which parameters will and won’t be respected.
Workarounds to Use Until a Fix Is Released by Google
The easiest hack is to seed Gemini with a blank canvas at the target aspect ratio and instruct it to fill the frame with the scene you want. Upload a mostly white or low-detail image at your target resolution (e.g., 1920×1080 or 1080×1920), then describe the subject and what belongs in frame, as in the sketch below. Many users report that this preserves the intended ratio and can even improve composition.
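A minimal sketch, using Pillow, for generating such a seed canvas; the resolutions and filenames are just examples.

```python
# Generate a near-white seed image at the target aspect ratio to upload
# alongside the prompt, giving the model an explicit frame to fill.
from PIL import Image

def make_seed_canvas(width: int, height: int, path: str) -> None:
    Image.new("RGB", (width, height), color=(250, 250, 250)).save(path)

make_seed_canvas(1920, 1080, "seed_16x9.png")   # widescreen
make_seed_canvas(1080, 1920, "seed_9x16.png")   # vertical
```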
When that’s not an option, generate at the highest square resolution available and crop with intent. Prompt for extra negative space where you expect to make the cut, then upscale to recover detail after cropping (a simple crop sketch follows). It’s not ideal: cropping throws pixels away and can clip focal points, but for thumbnails or backgrounds it beats nothing.
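Here is a minimal center-crop sketch, again using Pillow; the 16:9 target and filenames are illustrative.

```python
# Center-crop a square render to a target aspect ratio.
from PIL import Image

def center_crop_to_ratio(path: str, ratio_w: int, ratio_h: int, out: str) -> None:
    img = Image.open(path)
    w, h = img.size
    target_h = int(w * ratio_h / ratio_w)
    if target_h <= h:
        top = (h - target_h) // 2
        box = (0, top, w, top + target_h)      # trim top and bottom
    else:
        target_w = int(h * ratio_w / ratio_h)
        left = (w - target_w) // 2
        box = (left, 0, left + target_w, h)    # trim left and right
    img.crop(box).save(out)

center_crop_to_ratio("square_render.png", 16, 9, "banner_16x9.png")
```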
If the deadline can’t move and aspect accuracy is non-negotiable, switch to alternative tools until this gets resolved. Rival systems such as Midjourney and Stable Diffusion-based services support explicit aspect parameters and can hand assets off to your main pipeline. Teams that document a fallback path now will save hours if the square-only bug lingers.
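For reference, Midjourney exposes this as the --ar flag (e.g., --ar 16:9), and a Stable Diffusion XL pipeline accepts explicit width and height. Below is a minimal sketch using Hugging Face diffusers; the model ID and dimensions are examples, and a CUDA-capable GPU is assumed.

```python
# Request a non-square canvas directly from an SDXL pipeline.
import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# SDXL was trained with aspect-ratio bucketing, so near-bucket sizes such as
# 1344x768 (roughly 16:9) tend to compose better than arbitrary dimensions.
image = pipe(
    prompt="wide product banner, studio lighting, copy space on the right",
    width=1344,
    height=768,
).images[0]
image.save("banner_16x9.png")
```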
The Bigger Reliability Question for Creative AI Users
This episode is a reminder that creative AI isn’t judged by capability alone; determinism and fidelity to parameters matter just as much. Enterprises standardize on tools that do what they say, every time. A quick fix and clear communication would help, as would a visible status indicator that flags when certain controls are degraded.
Gemini’s image engine has held up well for detail and style. But until it stops snapping everything to a square, people will keep fighting their tools instead of focusing on the work. In creative production, that’s the most frustrating outcome of all.