Copyright class actions could financially ruin AI industry, trade groups say.

AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They’ve warned that a single lawsuit brought by three authors over Anthropic’s AI training now threatens to “financially ruin” the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.

Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a “rigorous analysis” of the potential class and instead based his judgment on his “50 years” of experience, Anthropic said.

  • nickwitha_k (he/him)
    117 hours ago

    Your take is illogical, unless you are arguing for some sort of pre-industrial communism, which is never going to happen, because I think any sane person can agree that technology has vastly improved our lives. It has introduced pains, sure, but everything is a process.

    That’s quite a leap. Not all technology is worthwhile or improves the overall human experience. Are you getting there by assuming that the world is black and white: embracing all technology or rejecting all technology? If so, I would recommend re-evaluating such assumptions, because they do not hold up to reality.

    Oh, and speaking of computers, did computers and automated production lines destroy the ability for people to make a living?

    Were they developed and pushed for that explicit reason? No. LLMs are. The only reason that they receive as much funding as they do is that billionaires want to keep everything for themselves, end any democratic rule, and indirectly (and sometimes directly) cause near extinction-level deaths, so that there are fewer people to resist the new feudalism that they want. It sounds insane, but it is literally what a number of tech billionaires have stated.

    Maybe temporarily, and then new jobs popped up.

    Not this time. As many at the Church of Accelerationism fail to see, we’re at a point where there are practically no social safety nets left (at least in the US), which has not been the case in over a century, and people are actively dying because of anthropogenic climate change, which is something that has never happened in recorded history. When people lost jobs before, they could at least get training or find some other path that would allow them to make a living.

    Now, we’re at record levels of homelessness too. This isn’t going to result in people magically gaining class consciousness. People are just going to die miserable, preventable deaths.

    But I want to understand exactly where you are coming from. Like, do you think that we should stop all technological progress and simply maintain our civilization in stasis, or roll it back to some other time, or what?

    Ok. Yes. It does appear that you are operating from a black-and-white worldview where all technology is “progress” and all implements of technology are “tools,” with no further classification or differentiation of their value to the species, and no consideration for how they are implemented. Again, I would recommend reflection, as this view does not mesh well with observable reality.

    Someone else already made the apt comparison between this wave of AI tech and nuclear weapons. Another good comparison would be phosgene gas. When it was first mass produced, it was used only for mass murder (as the current LLMs’ financial supporters desire them to be used). Only the greater part of a century later did the gas get used for something beneficial to humanity, namely doping semiconductors; however, its production and use are still very dangerous to people and the environment.

    In addition to all of this, it really appears that you fail to acknowledge the danger posed by accelerating the loss of the planet’s ability to sustain human life. Again, for emphasis, I’ll state: AI is not going to save us from this. The actions required are already known; it won’t help us find them. The technology is being used, nearly exclusively, to worsen human life, make genocide more efficient, and increase the rate of poverty, while accelerating global climate change. It provides no net value to humanity in the implementations that are funded. The only emancipation it offers is emancipating people from living.

    • @[email protected]
      15 hours ago

      To me it seems you are the one who has a black and white view of the world. Tool is used for bad = tool is bad, in your worldview. That’s never the case. Tools are tools; they are neither good nor bad. The moral agency lies in the wielder of the tool. Hence my argument: because technologies cannot be uninvented, and all technologies have potentially beneficial uses, we need to focus on shaping policy so that AI is used for those beneficial purposes. For example, nukes are deterrents as much as they are destroyers. Would it be better if they had never been invented? Sure, but they were invented, they exist, and once the tech exists you need it in order to remain competitive. Meaning not being invaded willy-nilly by a nuclear power, like Ukraine is right now, which would not have happened if they had been a nuclear power themselves.

      Were they developed and pushed for that explicit reason? No. LLMs are. The only reason that they receive as much funding as they do is that billionaires want to keep everything for themselves, end any democratic rule, and indirectly (and sometimes directly) cause near extinction-level deaths, so that there are fewer people to resist the new feudalism that they want. It sounds insane, but it is literally what a number of tech billionaires have stated.

      They have not stated it in those terms; that’s your interpretation of it. I am aware of Curtis Yarvin, Thiel et al., but they are hardly the only ones in control of the tech. But that’s not even the point. The tech exists, and even if that was the express intention, it doesn’t matter, because China will keep pursuing the tech. Which means that we will keep pursuing it, because otherwise they could gain an advantage that could become an existential threat for us. And even if we did stop pursuing it for whatever reason (which would be illogical), the tech would not stop existing in the world, as with nukes; except now all the billionaires would hire their AI workers from China instead of the US. Hardly an appealing proposition.

      Not this time. As many at the Church of Accelerationism fail to see, we’re at a point where there are practically no social safety nets left (at least in the US), which has not been the case in over a century, and people are actively dying because of anthropogenic climate change, which is something that has never happened in recorded history. When people lost jobs before, they could at least get training or find some other path that would allow them to make a living.

      So your solution is to ban the tech instead of changing policies? Jesus Christ, my guy. Arguments need to be logical, you understand that, right? This entire worldview and rhetoric is so detached from reality that it is downright absurd.

      The problem with the environment, for example, is not that AI exists, but rather that we do not produce enough energy from renewables. Why would the logical solution be to uninvent AI (or ban it entirely, which is essentially the same) instead of changing policy so that energy production comes from renewables? Which, FYI, is happening at a faster rate than ever.

      I understand the moral imperative and the lack of patience, but the way the world works is that one thing leads to another; we cannot reach a goal without going through the necessary process to reach it.