Copyright class actions could financially ruin AI industry, trade groups say.
AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They’ve warned that a single lawsuit brought by three authors over Anthropic’s AI training now threatens to “financially ruin” the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.
Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a “rigorous analysis” of the potential class and instead based his judgment on his “50 years” of experience, Anthropic said.
They are very likely to be civilly liable for uploading the books.
That’s largely irrelevant because the judge already ruled that using copyrighted material to train an LLM was fair use.
The judge did so on a motion for summary judgment, which means the court had to read all of the evidence in the manner most favorable to the plaintiffs, and it still decided that there was no way for the plaintiffs to succeed in their copyright claim about training LLMs because it was so obviously fair use.
Read the Order, which is Exhibit B to Anthropic’s appellate brief.
Anthropic admitted that they pirated millions of books, like Meta did, in order to create a massive central library for training AI that they permanently retained, and now assert that if they are held responsible for this theft of IP it will destroy the entire AI industry. In other words, it appears to be common practice in the AI industry to avoid the prohibitive cost of paying for the works they copy. That Meta, one of the wealthiest companies in the world, did the exact same thing reinforces the understanding that piracy to avoid paying for their libraries is a central component of training AI.
While the lower court did rule that training an LLM on copyrighted material was a fair use, it expressly did not rule that derivative works produced are protected by fair use and preserved the issue for further litigation:
Emphasis added. In other words, Anthropic can still face liability if its trained AI produces knockoff works.
Finally, the Court held
Emphasis in original.
So to summarize, Anthropic apparently used the industry standard of piracy to build a massive book library to train its LLMs. Plaintiffs did not dispute that training an LLM on a copyrighted work is fair use, but did not have sufficient information to assert that knockoff works were produced by the trained LLMs, and the Court preserved that issue for later litigation if the plaintiffs sought to bring such a claim. Finally, the Court noted that Anthropic built its database for training its LLMs through massive straight-up piracy. I think my original comment was a fair assessment.
It looks, to me, like you’re reading the briefing without understanding how the legal system functions. You’re making some incredibly basic mistakes. Copyright violations and theft are two distinct legal concepts, for example. You’re treating the case summary as if it were the legal argument in the brief and you’re misinterpreting some pretty clear legal language written by the judge.
No, that is not their argument.
Their legal argument, in the appeal of the class certification, is that the judge did not apply the required analysis in order to certify the three plaintiffs as being part of a class. He instead relied on his intuition, not any discovered facts or evidence. This isn’t allowed when analyzing a case for class certification.
In addition, Anthropic adds, it is well supported in case law (cited in the motion) that copyright claims are a bad fit for class action.
This is because copyright law focuses on individual works and each work has to be examined as to its eligibility for copyright protection, the standing of the plaintiff and if, and how much, of each individual work was the defendant responsible for violating copyright.
This can be done when 3 people claim a copyright violation, because they have a limited set of work which a court can reasonably examine.
A class action would require a court to consider hundreds or thousands of claimants and millions of individual works, each of which can be challenged individually by the defendant.
Courts typically don’t like to take on cases that can require millions of briefings, hearings and rulings. Because of this, courts almost always deny class action certification for copyright violations.
The court, in its order, did not address this or apply any of the required analysis. The class was certified based on vibes, something that doesn’t follow clearly established case law.
This is because training an LLM results in a language model.
A language model is in no way similar to a book and so training one is a transformative use of copyrighted material and protected under fair use.
No, the judge didn’t make any claim about the model’s output after training. That isn’t an issue that’s being addressed in this case. You’re misunderstanding how judges address issues in writing.
Here, the judge is addressing a very narrow issue, specifically the exact claim made by the plaintiff (training with copyrighted material = copyright violation).
The subject of the paragraph is concerned with training the LLM. The claim by the plaintiff is that using copyrighted works to train LLMs is a violation of copyright. That’s what the judge is addressing.
The judge dismissed this argument because the use was transformative and so protected by fair use.
The judge further noted that the plaintiffs did not show that training the LLM resulted in “any exact copies nor even infringing knockoffs of their works being provided to the public,” and that if they could make such a showing, they could bring a case in the future. This is the judge hinting that they can amend their filings in this case to clarify their argument, if they have any evidence to support their claim.
The judge is telling the plaintiffs that in order to succeed in their claim, which is that training an LLM on their work is a violation of their copyright, they need to show that the training actually resulted in infringing copies or knockoffs.
The training resulted in a model. Creating a model is transformative (a model and a book are two completely different things) and the plaintiffs didn’t show that any infringing works were produced by the training and therefore they have no way of succeeding with their argument that training the model violated their rights.
You’re reading a lot of extra into that statement that isn’t there. The plaintiffs never made a claim about the output of a trained model and so that argument wasn’t examined by the judge.