A recent court decision in the Southern District of New York sets an important marker for AI developers facing trademark claims. In Advance Local Media LLC v. Cohere Inc., the court declined to dismiss allegations that Cohere’s large language model, Command, generates fabricated news articles mimicking the branding and structure of legitimate news outlets. The ruling underscores the legal exposure created by AI-generated content that misrepresents its origin.
The plaintiffs argue that the model’s outputs create consumer confusion by replicating the tone, style, and branding of real journalism. This misattribution, they claim, damages the reputations of the affected publishers and diverts traffic and revenue away from them. The court found these allegations sufficient to proceed under the Lanham Act, which prohibits false designations of origin and misleading representations in commerce.
Judge McMahon highlighted that the Command platform operates as a commercial product, including paid tiers designed to generate revenue. That commercial character, the court held, plausibly satisfies the Lanham Act’s “use in commerce” requirement for trademark claims. The court also ruled that the unauthorized reproduction of publishers’ marks in fabricated content plausibly creates a likelihood of confusion, particularly when the outputs closely resemble real journalism.
The decision extends trademark law beyond traditional applications such as mislabeled goods or spoofed domains, reframing AI-generated content as a potential act of commercial misrepresentation even when the output is technically “hallucinated.” The court rejected Cohere’s argument that the nominative fair use doctrine shields its use of publishers’ trademarks to attribute articles to them. That defense permits using a mark to refer to the markholder’s genuine goods; it does not apply where the use suggests false affiliation or endorsement, precisely the conduct the Lanham Act is meant to prevent.
The ruling also allowed the plaintiffs’ copyright claims to proceed, including a novel “substitutive summary” theory. Under that theory, AI-generated summaries detailed enough to serve as market substitutes for the original articles can infringe even without verbatim copying, challenging conventional assumptions about how much copying infringement requires.
For businesses, this case underscores the importance of proactive trademark monitoring and clear attribution policies for AI systems. Developers must recognize that hallucinations, once dismissed as technical errors, can carry significant legal consequences. As AI blurs the boundaries between original and synthetic content, the law is evolving to hold creators accountable for the outputs of their models.
The decision signals that developers of generative AI cannot evade traditional trademark scrutiny by framing misattribution as an accident of the technology. It reinforces the need for brands to protect their identities in an era when technology can replicate not only text but also trust.
Services like IP Defender provide tools for monitoring national trademark databases, helping businesses identify potential conflicts and infringements early. By continuously scanning for conflicting or confusable registrations, such services let brands act on threats to their marks as they emerge, keeping them in control of their digital presence without depending on sporadic manual searches.