Copyright class actions could financially ruin AI industry, trade groups say.
AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They’ve warned that a single lawsuit raised by three authors over Anthropic’s AI training now threatens to “financially ruin” the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.
Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a “rigorous analysis” of the potential class and instead based his judgment on his “50 years” of experience, Anthropic said.
You’re forgetting that we humans always add something of our own when we make art, even when we’re trying to reproduce someone else’s piece as a study.
The many artists we might’ve looked at certainly influence our own styles, but they’re not the only thing expressed in our artwork. Our lives lived up to that point, and how we’re feeling in the moment, are also part of what artists communicate, often the whole point.
Most artists also haven’t looked at nearly every single work by almost every artist across a whole century. And we don’t need whole-ass data centers drinking a town’s worth of water just to train ourselves to produce some knock-off, soulless amalgamation of other people’s art.
Look at what they need to mimic a fraction of our power.
You are damn right…
You’re arguing about the quality of what AI produces, which has nothing to do with the legality of it.
What is the law? A joke or a myth for the poor?
My comment is replying to the guy talking about whether or not you can call AI ‘creative’, though.
You’re going to need to expand a little more on that notion that we add something of our own, or, more specifically, explain how that is not the case for AI. They might not draw from personal experiences, since they have none, but not every piece of human art necessarily draws from a person’s experiences. Or at least not in any way that can be articulated or meaningfully differentiated from an AI using another person’s lived experiences as a reference.
Also, look at all the soulless corporate art, i.e. the art that AI is going to replace. Most of it has nothing of the author in it; it simply has the intention of selling. I’ve seen a lot of video game concept art in my life, and about 80% of it looks like it was made by the same person. Is that kind of “creativity” any better than what an AI can do? No, it isn’t. At all.
The kind of artists who are making great, unique art that brings something fresh to the table are at no risk of being replaced anytime soon.
Your argument would only hold if AI were making 1:1 reproductions of existing works, but that is not the case. It simply uses existing works to produce sentences or works that take a little piece of each, like making a collage. I fail to see how that is different from human creativity, honestly. I say this as a creative myself.
Your second argument is not really an argument against AI any more than it is an argument against any tech, really. Most technologies are inefficient at first. As time goes on and we look for ways to improve them, they become more efficient. This is universally true for every technology; in fact, I think technological advancement can pretty much be reduced to progress in energy efficiency.
What I mean by adding something of our own is that art, in Cory Doctorow’s words, contains many acts of communicative intent. There are thousands of micro-decisions a human makes when creating art, whereas an image generated from nothing more than the few words of a prompt to an LLM contains only that much communicative intent.
I feel like that’s why AI art always has that AI look and feel to it. I can only sense a tiny fraction of the person’s intent. Maybe it’s because I know the rest is filled in by the AI, but that’s the part that feels really hollow and soulless to me.
Even in corporate art, I can at least sense what the artist was going for, based on corporate decisions to use clean, inoffensive designs for their branding and image. There’s a lot of communicative intent behind those designs.
I recommend checking the blog post I referenced, because Cory Doctorow expresses these thoughts far more eloquently than I do.
As for the latter argument, I wanted to highlight the fact that AI needs that level of resources and training data in order to produce art, whereas a human doesn’t, which shows you the power of creativity, human creativity. That’s why I think what AI does cannot be called ‘creativity.’ It cannot create. It does what we tell it to, without its own intent.
Cory’s take is excellent; thanks for bringing it up, because it highlights what I try to communicate to a lot of people: it’s a tool. It needs a human behind the wheel to produce anything good, and the more effort the human puts into describing what they want, the better the result, because, as Cory so eloquently puts it, it gets imbued with meaning. So I think my position is now something like: AI is not creative by itself; it’s a tool to facilitate the communication of an idea that a human has in their head and lacks the time or skill to communicate properly.
Now, I don’t think this really answers our question of whether the mechanics of the AI synthesizing information are materially different from how a human synthesizes information. Furthermore, it’s muddied further by the fact that the “creativity” of it is powered by a human.
Maybe it’s a sliding scale? Which actually sort of aligns with what I was saying: if AI is producing 1:1 reproductions, then it is infringing rights. But if the prompt is a paragraph long, giving it many details about the image or paragraph/song/art/video etc., such that the result is unique because of the specificity of the prompt, then it’s clear not only that the result is a product of human creativity but also that the AI is merely using references in the same way a human does.
The concept is easier for me to explain with music. If a user describes a song, its length, its bpm, every note and its pitch, would that not be an act of human creativity? In essence, the song is being written by the human, and the AI is simply “playing” it, like when a composer writes music and a musician performs it. How creative is a human who replays a song 1:1 as it was written?
What if LLMs came untrained and the end user was responsible for giving them the data? Any books you feed it you must have owned, images, etc., so that the AI is even more of an extension of you. Would that be the maximally IP-respecting and ethical AI? Possibly, but it puts too much of a burden on the user for it to be useful to 99% of people. It also shifts the responsibility for IP infringement onto the individual, something I don’t think anyone is too keen on.
It is more of an imitation, and its work has no soul and no pain. Once you understand this, no matter how perfect the art is, if there is no person or story behind it, no why and no purpose for which it was drawn, then it is just factory crap that cannot compare with real soul food.