Wednesday, April 1, 2026
AI & Publishing

SCOTUS Cox Ruling Could Raise the Bar for Publisher Copyright Claims Against AI

The US Supreme Court's unanimous ruling in Cox Communications v. Sony Music has established a new "intent" test for contributory copyright infringement that could significantly affect publishers' legal strategies against AI firms. The decision, authored by Justice Clarence Thomas, holds that secondary liability requires either active inducement or a service "tailored" for infringement with no substantial non-infringing use; mere knowledge of infringement is no longer sufficient. Publishers Weekly's Edward Nawotka identifies the ruling as a double-edged development: it raises the bar for claims against AI companies, but also opens a new strategic avenue centred on whether AI models are "tailored" to produce creative outputs that substitute for copyrighted human works.

[Image: US Supreme Court building with scales of justice and AI neural network overlay, representing the copyright intent test]

Analysis

The Supreme Court's unanimous ruling in Cox Communications v. Sony Music arrived with the timing of a well-placed plot twist. Just as publishers and authors were building their legal strategies around AI training data, the Court has rewritten the rules of contributory copyright liability in a way that simultaneously complicates and clarifies the path forward.

The core holding is straightforward: secondary copyright liability requires intent. A company that provides a service to the general public is not contributorily liable merely because some users infringe, even if the company knows about that infringement and fails to act on it. Justice Thomas's opinion establishes two pathways to liability — active inducement, where the defendant encourages infringement, or providing a service "tailored" for infringement with no substantial non-infringing use — and closes the door on the knowledge-plus-inaction theory that had previously generated billion-dollar jury verdicts.

For publishers and authors pursuing AI firms, the ruling is a mixed signal. On one hand, it raises the evidentiary bar considerably. A claim that OpenAI or Anthropic is contributorily liable for the infringing acts of users who generate content that reproduces copyrighted text now requires proof of intent, not just knowledge. That is a harder case to make.

On the other hand, the "tailored for infringement" pathway becomes genuinely interesting when applied to generative AI. The question, as Publishers Weekly's Edward Nawotka frames it, is whether training on copyrighted text and optimising for human-quality creative output constitutes a service "tailored for" infringement in the legal sense. A model designed specifically to produce fiction, poetry, or journalism, categories composed overwhelmingly of copyrighted expression, faces a plausible argument under this standard. The "no substantial non-infringing use" prong is harder to satisfy for a general-purpose model like Claude or GPT-4, but a model marketed specifically as a creative writing assistant might face a different analysis.

The ruling will reshape litigation strategy on both sides. AI companies will argue that their models have substantial non-infringing uses and that they do not actively encourage copyright infringement. Publishers and authors will argue that the design of creative AI models — trained on copyrighted works, optimised to produce outputs that compete with those works — constitutes tailoring for infringement in the most direct sense. The outcome of that argument will define the legal landscape for AI and publishing for the next decade.