Copyright class actions could financially ruin AI industry, trade groups say.
AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They’ve warned that a single lawsuit brought by three authors over Anthropic’s AI training now threatens to “financially ruin” the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.
Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a “rigorous analysis” of the potential class and instead based his judgment on his “50 years” of experience, Anthropic said.
I respectfully disagree. Meta was caught downloading books from Libgen, a piracy site, to “train” its models. What AI models do, in effect, is scan information (i.e., copy it), then distill and retain what they view as its essence. They can copy your voice, they can copy your face, and they can copy your distinctive artistic style. The only way they can do that is if the “training” copies and retains a portion of the original works.
Consider Shepard Fairey’s use of the AP’s copyrighted Obama photograph in the production of the iconic “Hope” poster, and the resultant lawsuit. While the suit was ultimately settled, and the issue of “fair use” was a close call given the variation of the artwork from the original source photograph, the suit easily could have gone against Fairey, so it was smart for him to settle.
Also consider the litigation surrounding the use of music sampling in original hip hop works, which has clearly been held to be copyright infringement.
Accordingly, I think it is very fair to say that (1) AI steals copyrighted works; and (2) repackages the essential portions of those works into new works. Might a rewrite of copyright law be in order to embrace this new technology? Sure, but if I’m an actor, or voice actor, author, or other artist and I can no longer earn a living because someone else has taken my work to strip it down to its essence to resell cheaply without compensating me, I’m going to be pretty pissed off.
Lol. The liberal utopia of Star Trek is a fantasy. Far more likely is that AI will be exploited by oligarchs to enrich themselves and further impoverish the masses, as they are fervently working towards right now. See, AI isn’t creative, it gives the appearance of being creative by stealing work created by humans and repackaging it. When artists can no longer create art to survive, there will be less material for the AI models to steal, and we’ll be left with soulless AI slop as our de facto creative culture.
That action itself can and should be punished. Yes. But that has nothing to do with AI.
Is that what people think is happening? You don’t even have a layman’s understanding of this technology. At least watch a few videos on the topic.
Absurd. It’s their entire fucking business model.
Meaning it would be illegal even if they weren’t doing anything with AI…
I think that copying my voice makes this robot a T-1000, and T-1000s are meant to be dunked in lava to save Sarah Connor.
So what an AI does is the same thing as every human ever who has read/seen/listened to a work and then written more words influenced by that book/artwork/piece.
If you’ve ever done anything artistic in your life, you know that the first step is to look at what others have done. Even subconsciously you will pull from what you’ve seen and heard. To say that AI is not creative because it is derivative is to say that no human being in history has been creative.
You’re forgetting the fact that humans always add something of our own when we make art, even when we try to reproduce another’s piece as a study.
The many artists we might’ve looked at certainly influence our own styles, but they’re not the only thing that’s expressed in our artwork. Our life lived to that point, and how we’re feeling in the moment, those are also the things, often the point, that artists communicate when making art.
Most artists haven’t also looked at nearly every single work by almost every artist spanning a whole century of time. We also don’t need whole-ass data centers that need towns’ worth of water supply to just train to produce some knock-off, soulless amalgamation of other people’s art.
Look at what they need to mimic a fraction of our power.
You are damn right…
You’re arguing the quality of what AI produces which has nothing to do with the legality of it.
What is the law? A joke or a myth for the poor?
My comment is replying to the guy talking about whether or not you can call AI ‘creative’ though.
You are going to need to expand a little bit more on that notion that we add something of our own. Or more specifically, explain how that is not the case for AI. They might not draw from personal experiences since they have none, but not every piece of human art necessarily draws from a person’s experiences. Or at least not in any way that can even be articulated or meaningfully differentiated from an AI using the lived experiences of another person as reference.
Also look at all the soulless corporate art, i.e., the art that AI is going to replace. Most of it has nothing of the author in it. It simply has the intention of selling. Like, I’ve seen a lot of videogame concept art in my life, and like 80% of it looks like it was made by the same person. Is that kind of “creativity” any better than what an AI can do? No, it isn’t. At all.
The kind of artists that are making great, unique art that brings something fresh to the table are at no risk of being replaced anytime soon.
Your argument would only be true if AI was making 1 of 1 reproductions of existing works, but that is not the case. It is simply using existing works to produce sentences or works that use a little bit of a piece of each, like making a collage. I fail to see how that is different from human creativity, honestly. I say this as a creative myself.
Your second argument is not really an argument against AI any more than it is an argument against any tech, really. Most technologies are inefficient at first. As time goes on and we look for ways to improve the tech, they become more efficient. This is universally true for every technology; in fact, I think technological advancement can be pretty much reduced to the progress of energy efficiency.
What I mean by adding something of our own is how art, in Cory Doctorow’s words, contains many acts of communicative intent. There are thousands of microdecisions a human makes when creating art, whereas imagery generated from only the few words of a prompt to an LLM contains only that much communicative intent.
I feel like that’s why AI art always has that AI look and feel to it. I can only sense a tiny fraction of the person’s intent, and maybe it’s because I know the rest is filled in by the AI, but that is the part that feels really hollow or soulless to me.
Even in corporate art, I can at least sense what the artist was going for, based on corporate decisions to use clean, inoffensive designs for their branding and image. There’s a lot of communicative intent behind those designs.
I recommend checking the blog post I referenced, because Cory Doctorow expresses these thoughts far more eloquently than I do.
As for the latter argument, I wanted to highlight the fact that AI needs that level of resources and training data in order to produce art, whereas a human doesn’t, which shows you the power of creativity, human creativity. That’s why I think what AI does cannot be called ‘creativity.’ It cannot create. It does what we tell it to, without its own intent.
Cory’s take is excellent, thanks for bringing this up, because it does highlight what I try to communicate to a lot of people: it’s a tool. It needs a human behind the wheel to produce anything good, and the more effort the human puts into describing what they want, the better the result, because as Cory so eloquently puts it, it gets imbued with meaning. So I think my position is now something like: AI is not creative by itself; it’s a tool to facilitate the communication of an idea that a human has in their head and lacks the time or skill to communicate properly.
Now I don’t think this really answers our question of whether the mechanics of the AI synthesizing the information are materially different from how a human synthesizes information. Furthermore, it is muddied further by the fact that the “creativity” of it is powered by a human.
Maybe it is a sliding scale? Which is actually sort of aligned with what I was saying: if AI is producing 1:1 reproductions, then it is infringing rights. But if the prompt is a paragraph long, giving it many details about the image or paragraph/song/art/video etc., in such a way that the result is unique because of the specificity achieved in the prompt, then it is clear that not only is the result a product of human creativity, but also that it is merely using references in the same way a human does.
The concept is easier for me to explain with music. If a user describes a song, its length, its BPM, every note and its pitch, would that not be an act of human creativity? In essence, the song is being written by the human and the AI is simply “playing it,” like when a composer writes music and a musician plays it. How creative is a human who is replaying a song 1:1 as it was written?
What if LLMs came untrained and the end user was responsible for giving them the data? So any books you give it you must have owned, images, etc. That way the AI is even more of an extension of you? Would that be the maximally IP-respecting and ethical AI? Possibly, but it puts too much of the burden on the user for it to be useful for 99% of people. It also shifts the responsibility with respect to IP infringement to the individual, something that I do not think anyone is too keen on doing.
It is more of an imitation, and its work has no soul and no pain. When you understand this, no matter how perfect the art is, if there is no person or story behind it about why and for what purpose this art was made, then it is just factory crap that cannot compare with real soul food.
Nope. This has been thoroughly debunked by both neuroscientists and AI researchers. It’s nothing but hand-waving to claim that corporate exploitation is OK because…reasons.
LLMs and similar models are literally statistical models of the data that they have been fed. They have no thought, consciousness, or creativity. They are fundamentally incapable of synthesizing anything not already existing in their dataset.
These same bunk pro-corpo-AI talking points are getting pretty old and should be allowed to retire at this point.
Can you provide a source?
Sure. Though you really ought to provide a shred of evidence to support your extraordinary claims.
And from this point forward, I will not be accepting the unreasonable shift of the burden of proof that AI cultists insist on. Artificial intelligence is something that is new in the history of humanity. Claims that it does anything more than fool people into believing it possesses consciousness, human-like cognition, etc are the extraordinary ones and must be backed with substantial evidence.
I wasn’t shifting the burden of proof. But I know that we do not understand exactly how humans synthesize new knowledge at a mechanical level. So making the claim that it is different from how humans do it implies that we know how humans do it. And I want a source for that. I will certainly read this tomorrow and see if it changes my mind.
Also, I’m not a cultist, for fuck’s sake. You sound more like a cultist to me because of your absolutely irrational stance. My position is simply that AI is a technology, a tool, and claiming that we should entirely dismiss a tool for reasons that we do not give for other tools is ridiculous. The tool itself can be used for good or ill, and I happen to believe that there is as much potential in it for good as for harm. Like, you know, every other tool created by humanity ever, because tools are tools; we use them to reach goals.
Just a tool? It’s a machine of slavery and total control over the poor. What the hell, what other tool? Are you blind? It’s a goddamn threat to independence!
It’s like praising the weapons that will be used to shoot you tomorrow. “What a useful tool, it’s a pity that it’s not me who’s shooting, but at me”, is this how you’re going to justify yourself? Because your comments say exactly that!
That is true of every tool.
Laws, morals, guns, religion, a pointy stick, a hammer, a knife, a computer. All of them able to liberate or oppress.
The gun doesn’t need to exist for me to be shot at; if they didn’t have guns, they would use the pointy stick. A technology has no intentions of its own; the intention lies in the wielder. Do you not understand how tools work?
So I ask, should we then “freeze” technological progress, so to speak? Because tools can be used for very bad things, therefore we should not develop new tools? Should we raze all of civilization and go back to the caves? How do we stop ourselves from progressing technologically again? We will make tools no matter what; we evolved for that. So is the logical conclusion that we should end the human species so that tools cannot be used for wrong?
No, not to return to the caves, but to erase my shameful existence with no hope of a second chance.
But okay, AI is like a nuclear weapon. I think we should come up with a classification for tools, because without explanation it can get confusing. AI is clearly not a simple tool; it is almost like a nuclear weapon, that is, some kind of combat type or something? Yes, it is difficult, but if we can categorize things instead of calling everything the same, then many things will become clearer and there will be no stupid claims, you understand?
A source for LLMs not being conscious?? If you have evidence to the contrary a lot of people are about to get very excited.
Honestly, I don’t care if it has consciousness or not. If there is a threat, it must be destroyed. Or will you spare a wild beast that will later eat you just because it has consciousness?
I’m just wondering, does it matter whether it suffers or not, if we have a choice: either we kill it or it kills us?
This. The burden of proof is on the extraordinary claim that LLMs are anything remotely like consciousness.
That’s a very interesting point I hadn’t thought about. I don’t know; you would need to define what consciousness is very carefully to make the claim that it isn’t, I think. I’ve actually read a lot about this, in the context of non-human animals mostly, and there’s even growing evidence for insects being conscious, so I don’t even know what to make of this.
Dude, in my opinion, almost every living and possibly non-living particle of the universe has its own consciousness, even if it’s not the kind you can imagine or understand.
I agree with that.
That’s the point: what difference does it make whether an AI has consciousness or not? We don’t care about cattle, we just kill them, so why should we coddle an AI if it is also a threat?
Can you?
No, because I was using my reasoning abilities to reach my conclusions based on my understanding of how people synthesize knowledge. That’s why I asked for sources, because as far as I’m aware, we really do not fully understand the mechanics behind that.