AI-generated images remain ineligible for copyright protection in the United States after the Supreme Court declined to review a closely watched challenge, effectively preserving the position that only works created by humans can be copyrighted. The move leaves intact lower-court rulings and the U.S. Copyright Office’s policy that authorship must be human, not machine.
What the Supreme Court decision means for AI-made art
The high court’s refusal to hear the case is not a ruling on the merits, but it cements the current legal landscape: autonomous AI output—art produced without meaningful human control—cannot be registered for copyright. For creators and companies deploying generative models, that means no exclusive rights in purely machine-made images, no takedowns based on those rights, and no licensing leverage built on AI-only output.
The Copyright Office has been consistent. In a policy statement and subsequent registration guidance, it requires applicants to disclose AI involvement and limits protection to the portions of a work that reflect a human author’s “creative choices.” If a person uses a model like Midjourney, DALL·E, or Stable Diffusion and then substantially edits, arranges, or selects content in a way that shows human judgment, those human contributions may be protected—even if the underlying pixels came from a model.
Practically, this places a premium on documentation. Artists seeking protection should keep records of prompts, iterations, and post-processing steps to demonstrate where human creativity begins and ends. This is already influencing creator workflows on platforms that embed provenance signals using standards from the Coalition for Content Provenance and Authenticity, as seen in initiatives backed by Adobe and others.
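The record-keeping advice above can be made concrete with a minimal sketch: an append-only workflow log that timestamps each human decision and hashes intermediate image files. This is purely illustrative—the field names are hypothetical and not tied to C2PA manifests or any Copyright Office filing requirement.

```python
import datetime
import hashlib
import json

def log_step(log, step_type, detail, image_bytes=b""):
    """Append one workflow step to an in-memory creation log.

    All field names are illustrative, not part of any official standard.
    Hashing the current image state ties each recorded human choice to a
    verifiable artifact.
    """
    log.append({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "step": step_type,   # e.g. "prompt", "manual_edit", "crop", "export"
        "detail": detail,    # free-text description of the human judgment made
        "sha256": hashlib.sha256(image_bytes).hexdigest() if image_bytes else None,
    })
    return log

# Example session mixing AI generation with human post-processing
session = []
log_step(session, "prompt", "landscape, oil-painting style (model output)")
log_step(session, "manual_edit", "repainted sky, recomposed foreground in editor",
         image_bytes=b"...edited-image-bytes...")

print(json.dumps(session, indent=2))
```

A log like this distinguishes the machine-generated starting point from the human edits layered on top, which is exactly the boundary a registration examiner would ask about.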
The Thaler test case on AI authorship and copyright
The case was brought by Stephen Thaler, a computer scientist known for testing the boundaries of AI and intellectual property with his system known as DABUS. He sought to register an image generated by his “Creativity Machine,” identifying the system as the author and himself as the owner. The Copyright Office rejected the application, a federal district court in Washington, D.C., agreed, and a federal appeals court later upheld that outcome.
Courts relied on a bedrock principle that runs through more than a century of U.S. jurisprudence: authors are human. Decisions ranging from Burrow-Giles Lithographic Co. v. Sarony in the 19th century to the “monkey selfie” case, Naruto v. Slater, underscore that non-human creators do not receive copyrights. The Thaler rulings applied that logic to contemporary AI, concluding that a system operating autonomously cannot be the author under the Copyright Act.
Thaler has pursued similar arguments abroad with little success. Courts in the United Kingdom and the European Union have declined to recognize machine inventorship or authorship, reflecting a broad international consensus. In parallel on the patent side, the U.S. Court of Appeals for the Federal Circuit ruled that AI cannot be named an inventor, and the U.S. Patent and Trademark Office later reinforced that inventorship must be human.
Hybrid works remain protected where humans contribute
The refusal of AI-only copyrights does not shut the door on mixed works. The Copyright Office has already flagged a path in high-profile registrations such as the comic book Zarya of the Dawn, where the human-authored text and arrangement were protected but the Midjourney-generated images were not. The signal to creators is clear: emphasize human selection, sequencing, curation, and post-production to secure rights in the parts that reflect human originality.
The result is a split licensing market. Human-authored components carry enforceable rights; AI-only components function more like public-domain material unless other contractual or platform rules apply. Media companies and stock libraries are already adjusting. Some, including major content marketplaces, limit submissions to images with verifiable human authorship or attach provenance labels to help buyers assess risk.
The next legal fronts: training data and fair use fights
Even as authorship questions settle for now, the most consequential disputes are shifting to training data and fair use. Lawsuits by artists, photo agencies, and news organizations against model developers focus on whether scraping copyrighted works to train generative systems infringes rights or qualifies as fair use. Those cases will shape liability, licensing models, and the cost structure of AI development far more than the authorship issue alone.
Policy makers are also active. The Copyright Office’s AI Initiative continues to solicit public input on disclosure, ownership of outputs, and potential legislative changes. Industry groups are experimenting with content credentials and opt-out registries, while some publishers are cutting licensing deals to feed future model training sets. The contours of that ecosystem will determine who gets paid—and who bears risk—as generative AI becomes standard in creative pipelines.
For now, the rule of the road is straightforward: if a machine made it without meaningful human authorship, it isn’t copyrightable. Creators who want protection should lean into human-driven creativity and keep a clear record of it. Everyone else—from startups to studios—should plan around a world where AI-only art is legally unowned, even as the fight over how these systems learn is just getting started.