Copyright class actions could financially ruin AI industry, trade groups say.

AI industry groups are urging an appeals court to block what they say is the largest copyright class action ever certified. They’ve warned that a single lawsuit filed by three authors over Anthropic’s AI training now threatens to “financially ruin” the entire AI industry if up to 7 million claimants end up joining the litigation and forcing a settlement.

Last week, Anthropic petitioned to appeal the class certification, urging the court to weigh questions that the district court judge, William Alsup, seemingly did not. Alsup allegedly failed to conduct a “rigorous analysis” of the potential class and instead based his judgment on his “50 years” of experience, Anthropic said.

  • Riskable · 10 points · 2 days ago

    They’re not stealing anything. Nor are they “repackaging” anything. LLMs don’t work like that.

    I know a whole heck of a lot of people hate generative AI with a passion, but let’s get real: The reason they hate generative AI isn’t because the models were trained on copyrighted works (which has already been ruled fair use, as long as the works were legitimately purchased). They hate generative AI because of AI slop and the potential for taking jobs away from people who are already having a hard time.

    AI Slop sucks! Nobody likes it except the people making money from it. But this is not a new phenomenon! For fuck’s sake: Those of us who have been on the Internet for a while have been dealing with outsourced slop and hidden marketing campaigns/scams since forever.

    The only difference is that now—thanks to convenient and cheap LLMs—scammers and shady marketers can generate bullshit at a fraction of the cost and really, really quickly. But at least their grammar is correct now (LOL @ old school Nigerian Prince scams).

    It’s humans ruining things for other humans. AI is just a tool that makes it easier and cheaper. Since all the lawsuits and laws in the world cannot stop generative AI at this point, we might as well fix the underlying problems that enable this bullshit. Making big AI companies go away isn’t going to help with these problems.

    In fact, it could make things worse! Because the development of AI certainly won’t stop. It will just move to countries with fewer scruples and weaker ethics.

    The biggest problem is (mostly unregulated) capitalism. Fix that, and suddenly AI “taking away jobs” ceases to be a problem.

    Hopefully, AI will force the world to move toward the Star Trek future. Because generating text and images is just the start.

    When machines can do just about everything a human can (and scale up really fast)—even without AGI—there’s no future for capitalism. It just won’t work when there’s no scarcity other than land and energy.

    • @[email protected]
      link
      fedilink
      272 days ago

      I respectfully disagree. Meta was caught downloading books from Libgen, a piracy site, to “train” its models. What AI models do in effect is scan information (i.e., copy), and distill and retain what they view as its essence. They can copy your voice, they can copy your face, and they can copy your distinctive artistic style. The only way they can do that is if the “training” copies and retains a portion of the original works.

      Consider Shepard Fairey’s use of the AP’s copyrighted Obama photograph in the production of the iconic “Hope” poster, and the resultant lawsuit. While the suit was ultimately settled, and the issue of “fair use” was a close call given the variation in the artwork from the original source photograph, the suit easily could have gone against Fairey, so it was smart for him to settle.

      Also consider the litigation surrounding the use of music sampling in original hip hop works, which has clearly been held to be copyright infringement.

      Accordingly, I think it is very fair to say that (1) AI steals copyrighted works; and (2) repackages the essential portions of those works into new works. Might a re-write of copyright law be in order to embrace this new technology? Sure, but if I’m an actor, voice actor, author, or other artist and I can no longer earn a living because someone else has taken my work to strip it down to its essence to resell cheaply without compensating me, I’m going to be pretty pissed off.

      Hopefully, AI will force the world to move toward the Star Trek future.

      Lol. The liberal utopia of Star Trek is a fantasy. Far more likely is that AI will be exploited by oligarchs to enrich themselves and further impoverish the masses, as they are fervently working towards right now. See, AI isn’t creative, it gives the appearance of being creative by stealing work created by humans and repackaging it. When artists can no longer create art to survive, there will be less material for the AI models to steal, and we’ll be left with soulless AI slop as our de facto creative culture.

      • @[email protected]
        link
        fedilink
        English
        52 days ago

        I respectfully disagree. Meta was caught downloading books from Libgen, a piracy site, to “train” its models.

        That action itself can and should be punished. Yes. But that has nothing to do with AI.

        What AI models do in effect is scan information (i.e., copy), and distill and retain what they view as its essence. They can copy your voice, they can copy your face, and they can copy your distinctive artistic style. The only way they can do that is if the “training” copies and retains a portion of the original works.

        Is that what people think is happening? You don’t even have a layman’s understanding of this technology. At least watch a few videos on the topic.

        • @[email protected]
          link
          fedilink
          72 days ago

          I think that copying my voice makes this robot a T-1000, and T-1000s are meant to be dunked in lava to save Sarah Connor.

      • @[email protected]
        link
        fedilink
        English
        2
        edit-2
        2 days ago

        So what an AI does is the same thing as every human who has ever read/seen/listened to a work and then wrote something new influenced by that book/artwork/piece.

        If you’ve ever done anything artistic in your life, you know that the first step is to look at what others have done. Even subconsciously you will pull from what you’ve seen and heard. To say that AI is not creative because it is derivative is to say that no human being in history has been creative.

        • Get_Off_My_WLAN · 9 points · 2 days ago

          You’re forgetting the fact that humans always add something of our own when we make art, even when we try to reproduce another artist’s piece as a study.

          The many artists we might’ve looked at certainly influence our own styles, but they’re not the only thing that’s expressed in our artwork. Our life lived to that point, and how we’re feeling in the moment, those are also the things, often the point, that artists communicate when making art.

          Most artists haven’t also looked at nearly every single work by almost every artist spanning a whole century of time. We also don’t need whole-ass data centers that need towns’ worth of water supply to just train to produce some knock-off, soulless amalgamation of other people’s art.

          Look at what they need to mimic a fraction of our power.

          • @[email protected]
            link
            fedilink
            English
            42 days ago

            You’re arguing the quality of what AI produces which has nothing to do with the legality of it.

          • @[email protected]
            link
            fedilink
            English
            12 days ago

            You are going to need to expand a little bit more on that notion that we add something of our own. Or, more specifically, explain how that is not the case for AI. It might not draw from personal experiences since it has none, but not every piece of human art necessarily draws from a person’s experiences. Or at least not in any way that can be articulated or meaningfully differentiated from an AI using another person’s lived experiences as reference.

            Also look at all the soulless corporate art, i.e., the art that AI is going to replace. Most of it has nothing of the author in it. It simply has the intention of selling. I’ve seen a lot of videogame concept art in my life, and like 80% of it looks like it was made by the same person. Is that kind of “creativity” any better than what an AI can do? No, it isn’t. At all.

            The kind of artists that are making great, unique art that brings something fresh to the table are in no risk of being replaced anytime soon.

            Your argument would only be true if AI was making 1:1 reproductions of existing works, but that is not the case. It is simply using existing works to produce sentences or works that use a little bit of a piece of each, like making a collage. I fail to see how that is different from human creativity, honestly. I say this as a creative myself.

            Your second argument is not really an argument against AI anymore than it is an argument against any tech really. Most technologies are inefficient at first. As time goes on and we look for ways to improve the tech they become more efficient. This is universally true for every technology, in fact I think technological advancement can be pretty much reduced to the progress of energy efficiency.

            • Get_Off_My_WLAN · 2 points · 1 day ago

              What I mean by adding something of our own is how art, in Cory Doctorow’s words, contains many acts of communicative intent. There are thousands of microdecisions a human makes when creating art. Whereas imagery generated from only the few words of a prompt to an LLM contains only that much communicative intent.

              I feel like that’s why AI art always has that AI look and feel to it. I can only sense a tiny fraction of the person’s intent, and maybe it’s because I know the rest is filled in by the AI, but that is the part that feels really hollow or soulless to me.

              Even in corporate art, I can at least sense what the artist was going for, based on corporate decisions to use clean, inoffensive designs for their branding and image. There’s a lot of communicative intent behind those designs.

              I recommend checking the blog post I referenced, because Cory Doctorow expresses these thoughts far more eloquently than I do.

              As for the latter argument, I wanted to highlight the fact that AI needs that level of resources and training data in order to produce art, whereas a human doesn’t, which shows you the power of creativity, human creativity. That’s why I think what AI does cannot be called ‘creativity.’ It cannot create. It does what we tell it to, without its own intent.

              • @[email protected]
                link
                fedilink
                English
                21 day ago

                Cory’s take is excellent, thanks for bringing this up because it does highlight what I try to communicate to a lot of people: it’s a tool. It needs a human behind the wheel to produce anything good and the more effort the human puts into describing what it wants the better the result, because as Cory so eloquently puts it, it gets imbued with meaning. So I think my posture is now something like: AI is not creative by itself, it’s a tool to facilitate the communication of an idea that a human has in their heads and lacks the time or skill to communicate properly.

                Now I don’t think this really answers our question of whether the mechanics of the AI synthesizing the information are materially different from how a human synthesizes information. Furthermore, it is muddied more by the fact that the “creativity” of it is powered by a human.

                Maybe it is a sliding scale? Which is actually sort of aligned with what I was saying: if AI is producing 1:1 reproductions then it is infringing rights. But if the prompt is a paragraph long, giving it many details about the image or paragraph/song/art/video etc., in such a way that the result is unique because of the specificity achieved in the prompt, then it is clear that not only is the result a product of human creativity but also that it is merely using references in the same way a human does.

                The way I think the concept is easier for me to explain is with music. If a user describes a song, its length, its bpm, every note and its pitch, would that not be an act of human creativity? In essence the song is being written by the human and the AI is simply “playing it” like when a composer writes music and a musician plays it. How creative is a human that is replaying a song 1:1 as it was written?

                What if LLMs came untrained and the end user was responsible for giving them the data? So any books you give it you must have owned, images, etc. That way the AI is even more of an extension of you. Would that be the maximally IP-respecting and ethical AI? Possibly, but it puts too much of the burden on the user for it to be useful to 99% of people. It also shifts the responsibility for IP infringement onto the individual, something that I do not think anyone is too keen on doing.

              • @[email protected]
                link
                fedilink
                English
                11 day ago

                It is more of an imitation, and its work has no soul or pain. When you understand this, then no matter how perfect the art is, if there is no person or story behind it, no why or for what purpose it was made, it is just factory crap that cannot compare with real soul food.

        • nickwitha_k (he/him) · 7 points · 2 days ago

          So what an AI does is the same thing as every human who has ever read/seen/listened to a work and then wrote something new influenced by that book/artwork/piece.

          Nope. This has been thoroughly debunked by both neuroscientists and AI researchers. It’s nothing but hand-waving to claim that corporate exploitation is ok because…reasons.

          LLMs and similar models are literally statistical models of the data that they have been fed. They have no thought, consciousness, or creativity. They are fundamentally incapable of synthesizing anything not already existing in their dataset.

          These same bunk pro-corpo-AI talking points are getting pretty old and should be allowed to retire at this point.

            • nickwitha_k (he/him) · 6 points · 2 days ago

              Sure. Though you really ought to provide a shred of evidence to support your extraordinary claims.

              And from this point forward, I will not be accepting the unreasonable shift of the burden of proof that AI cultists insist on. Artificial intelligence is something that is new in the history of humanity. Claims that it does anything more than fool people into believing it possesses consciousness, human-like cognition, etc are the extraordinary ones and must be backed with substantial evidence.

              • @[email protected]
                link
                fedilink
                English
                1
                edit-2
                2 days ago

                I wasn’t shifting the burden of evidence. But I know that we do not understand exactly how humans synthesize new knowledge at a mechanical level. So claiming that AI does it differently from humans implies that we know how humans do it, and I want a source for that. I will certainly read this tomorrow and see if it changes my mind.

                Also, I’m not a cultist, for fuck’s sake. You sound more like a cultist to me because of your absolutely irrational stance. My position is simply that AI is a technology, a tool, and claiming that we should entirely dismiss a tool for reasons that we do not give for other tools is ridiculous. The tool itself can be used for good or wrong, and I happen to believe that there is as much potential in it for good as for wrong. Like every other tool created by humanity ever, because tools are tools; we use them to reach goals.

                • @[email protected]
                  link
                  fedilink
                  English
                  3
                  edit-2
                  1 day ago

                  Just a tool? It’s a machine of slavery and total control over the poor. What the hell, what other tool? Are you blind? It’s a goddamn threat to independence!

                  It’s like praising the weapons that will be used to shoot you tomorrow. “What a useful tool, it’s a pity that it’s not me who’s shooting, but at me”, is this how you’re going to justify yourself? Because your comments say exactly that!

                  • @[email protected]
                    link
                    fedilink
                    English
                    1
                    edit-2
                    1 day ago

                    That is true of every tool.

                    Laws, morals, guns, religion, a pointy stick, a hammer, a knife, a computer. All of them able to liberate or oppress.

                    The gun doesn’t need to exist for me to be shot at; if they didn’t have guns they would use the pointy stick. Because a technology has no intention of its own; the intention lies in the wielder. Do you not understand how tools work?

                    So I ask, should we then “freeze” technological progress so to speak? Because tools can be used for very bad things therefore we should not develop new tools. Should we raze all of civilization and go back to the caves? How do we stop ourselves from progressing technologically again? We will make tools no matter what, we evolved for that. So is the logical conclusion then that we should end the human species so that tools cannot be used for wrong?

            • Catoblepas · 5 points · 2 days ago

              A source for LLMs not being conscious?? If you have evidence to the contrary a lot of people are about to get very excited.

              • @[email protected]
                link
                fedilink
                English
                1
                edit-2
                1 day ago

                Honestly, I don’t care whether it has consciousness or not. If there is a threat, it must be destroyed. Or will you spare a wild beast that will then eat you, just because it has consciousness?

                I’m just wondering: does it matter whether it suffers or not, if the choice is that either we kill it or it kills us?

              • nickwitha_k (he/him) · 6 points · 2 days ago

                This. The burden of proof is on the extraordinary claim that LLMs are anything remotely like consciousness.

              • @[email protected]
                link
                fedilink
                English
                12 days ago

                That’s a very interesting point I hadn’t thought about. I don’t know; you would need to define what consciousness is very carefully to make the claim that it isn’t, I think. I have actually read a lot about this, mostly in the context of non-human animals, and there’s even growing evidence for insects being conscious, so I don’t know what to make of this.

                • @[email protected]
                  link
                  fedilink
                  English
                  1
                  edit-2
                  1 day ago

                  Dude, in my opinion, almost every living and possibly non-living particle of the universe has its own consciousness, even if it’s not the kind you can imagine or understand.

              • @[email protected]
                link
                fedilink
                English
                12 days ago

                No because I was using my reasoning abilities to reach my conclusions using my understanding of how people synthesize knowledge. That’s why I asked for sources because as far as I’m aware we really do not fully understand the mechanics behind that.

    • @[email protected]
      link
      fedilink
      212 days ago

      Meta literally torrented an insane amount of training material illegally, from a company that was sued into the ground and forced to dissolve for distributing stolen content.

    • @[email protected]
      link
      fedilink
      English
      122 days ago

      “When machines can do just about everything a human can (and scale up really fast)—even without AGI—there’s no future for capitalism.”

      This might be one of the dumbest things I’ve ever read.

      • @[email protected]
        link
        fedilink
        English
        11 day ago

        There is a future, but it will be so soulless and false that those who know what real art is will feel disgust for it. It will no longer be a world but some kind of complete rotting swamp, although you won’t notice it with a consumerist eye.

    • @[email protected]
      link
      fedilink
      52 days ago

      It’s humans ruining things for other humans. AI is just a tool that makes it easier and cheaper

      That’s the main point, though: the tire fire of humanity is bad enough without some sick fucks adding vast quantities of accelerant in order to maximize profits.

      Since all the lawsuits and laws in the world cannot stop generative AI at this point

      Clearly that’s not true. They’ll keep it up for as long as it’s at all positive to extract profits from it, but not past that. Handled right, this class action could make the entire concept poisonous from a profiteering perspective for years, maybe even decades.

      we might as well fix the underlying problems that enable this bullshit.

      Of COURSE! Why didn’t anyone think to flick off the switches marked “unscrupulous profiteering” and “regulatory capture”?!

      We’ll have this done by tomorrow, Monday at the latest! 🙄

      Making big AI companies go away isn’t going to help with these problems.

      The cancer might be the underlying cause but the tumor still kills you if you don’t do anything about it.

      the development of AI certainly won’t stop.

      Again, it WILL if all profitability is removed.

      It will just move to countries with fewer scruples and weaker ethics

      Than silicon valley? Than the US government when ultra-rich white men want something?

      No such country exists.

      The biggest problem is (mostly unregulated) capitalism

      Finally right about something.

      Fix that, and suddenly AI “taking away jobs” ceases to be a problem.

      “Discover the cure for cancer and suddenly the tumors in your brain, lungs, liver, and kidneys won’t be a problem anymore!” 🤦

      Hopefully, AI will force the world to move toward the Star Trek future

      Wtf have you been drinking??

      • @[email protected]
        link
        fedilink
        English
        2
        edit-2
        1 day ago

        Hopefully, AI will force the world to move toward the Star Trek future

        It seems like this is another consumer, just ignore him, he is no longer a person but a zombie.

      • @[email protected]
        link
        fedilink
        English
        32 days ago

        You’re seriously kidding yourself if you think China won’t continue to pursue AI even if the profit motive is lost in American companies. And if China continues to develop AI, so will the US even if it is nationalized or developed through military contracting, or even as a secret project, because it will be framed as a national security issue. So unless you find a way to stop China from also developing AI, the tech is here to stay no matter what happens.

        • @[email protected]
          link
          fedilink
          English
          11 day ago

          In short, we are in a catch-22 and people want to stop it regardless of the unintended consequences.

    • @[email protected]
      link
      fedilink
      32 days ago

      Of course many of them are stealing. That’s already been clearly established. As for the other groups, the ones that haven’t gotten caught stealing yet, perhaps it’s just that they haven’t gotten caught, and not that they haven’t been pirating things.

      I like your rant, but I would like it better if the facts were facts.

      • Riskable · 1 point · 1 day ago

        When you “steal” something, the original owner doesn’t have that thing anymore.

        When you “copy” something, the original owner still has it.

        Stop calling it stealing, damnit! We fought these wars with the MPAA and RIAA in the 90s. By calling it “stealing” you’re siding with the old villains.

        AI isn’t stealing or copying anything. It’s generating stuff. If it was truly copying things that would mean someone wrote all that “AI slop” that’s poisoning everyone’s search results.

    • @[email protected]
      link
      fedilink
      English
      32 days ago

      I disagree that it is fair use. But, I was actually expecting the judiciary to say that it was. So, despite the ruling, I AM still mad that they used copyrighted works (including mine), in violation of the terms. (And, honestly, my terms [usually AGPLv3] are fairly generous.)

      I’m also concerned about labor issues, and environmental impact, and general quality, but the unauthorized use of copyrighted works is still in the mix. And, if they are willing to call my private viewing of torrented TV “theft”, I’m willing to call their selling of an interface to an LLM / NN that was trained on and may have incorporated (or may emit!) my works (in whole or in part) “theft”.

      Labor issues are mostly solved by making the workers control the means of production, not capital. Same old story.

      Environmental impact is better policed independent of what the electricity/water is used for. We aren’t making a lot of headway there, but we need to rein in emissions (etc.) whether they are using it to train LLMs or research cancer.

      Quality… is subjective and I don’t think we are near the ceiling of that. And, since I don’t use “AI” for the above reasons, it really isn’t much of a concern to me.

    • @[email protected]
      link
      fedilink
      22 days ago

      LOL @ old school Nigerian Prince scams

      They were bad on purpose. People responding to such bad writing are easy marks.

      Because generating text and images is just the start.

      But usually this only improves after an AI winter, meaning the whole sector crashes until someone finds a better architecture/technology. Except there are now billions involved.

    • @[email protected]
      link
      fedilink
      English
      2
      edit-2
      2 days ago

      Look, these people are neck deep in a tribalistic worldview. You cannot reason with anyone who’s against AI, simply because their claims are unfalsifiable and contradictory depending on the day and article they are reading. On the one hand it is the shittiest technology ever made, which cannot do anything right, and at the same time it is an existential threat to humanity.

      And I can tell you, that the only reason this is the case is because the right is for AI more strongly than the left. If the right had condemned it, you can be damn right the tables would be turned and everyone who thinks they are left would be praising the tech.

      Just move on and take solace in the fact that the technology simply cannot be rebottled, or uninvented. It exists, it is here to stay and literally no one can stop it at this point. And I agree with you, AI is the only tool that can provide true emancipation. It can also enslave. But the fact is that all tools can be used for right or wrong, so this is not inherent to AI.

      • nickwitha_k (he/him) · 3 points · edited · 1 day ago

        That’s…a take. And clearly not sounding like a cultist at all. /S

        Giving corpos free rein to exploit whatever they want has never resulted in positive things, generally, just bloodshed and suffering. Pretending that flagrant violation of IP is ok when done to train models doesn’t do much for big companies, but it does obliterate individuals’ ability to support themselves. This is the only reason that this environmentally disastrous and unprofitable tech has been so heavily embraced: to be used as a tool of exploitation.

        AI is not going to save anyone. It is not going to emancipate anyone. Absolutely none of the financial benefits are being shared with the working class. And, if they were, it would have little impact on LLMs’ big picture value as they are vastly accelerating the destruction of the planet’s biosphere. When that’s gone, humanity is finished.

        Embracing the current forms of commercialized AI is only to the detriment of humanity and the likelihood of the creation of any artificial sentience.

        • @[email protected]
          link
          fedilink
          English
          12 days ago

          Your take is illogical, unless you are arguing for some sort of pre-industrial communism, which is never going to happen, because I think any sane person can agree that technology has vastly improved our lives. It has introduced pains, sure, but everything is a process.

          But assuming that you can admit that technology has improved the quality of life of humans, then it follows that you’ll look at any piece of technology as what it is: a tool. It doesn’t matter what the origins of it are really, only what it can do. Because every major technology of the last 2 centuries is a product of capitalism, that is inevitable because we live in a capitalist world. Would you argue that we should stop using technologies that were created with capitalistic interests? Why don’t you throw out your computer? Should we stop using heavy machinery and power tools?

          Oh and speaking of computers did computers and automated production lines destroy the ability for people to make a living? Maybe temporarily and then new jobs popped up. But ok maybe this time that doesn’t happen. How do you think the system sustains itself without collapsing? I think it is easy to see that it would trigger some kind of revolution. Certainly a new social contract will be needed. This is capitalism creating the conditions for socialism to exist. Something something internal contradictions. Etc etc.

          Whether it’s environmentally harmful is an argument against all technology especially in the early stages as they have all been energy inefficient. In which case maybe you should be arguing that we should have never left the caves. Like I said elsewhere the inevitability of any new technology is that it will be inefficient and we make it more efficient as we develop it further.

          And listen I don’t think AI is all that great, it really cannot take most people’s jobs at this point. But it is a step in the direction of full automation.

          But I want to understand exactly where you are coming from. Do you think that we should stop all technological progress and simply maintain our civilization in stasis, or roll it back to some other time, or what? I really cannot understand where this type of argument comes from, as virtually any kind of human activity impacts the environment; that’s literally our adaptation. AI itself is not the issue here; the issue is simply where we get our energy from. Thankfully solar, despite the best attempts of the idiots at the White House, continues growing at an unprecedented rate. Because, like I said, everything is a process. I get the impatience, but the reality is that we cannot simply state a destination and hope to be there because it is the right place to be; one needs to go through the steps to get there. I don’t know if that makes sense.

          • nickwitha_k (he/him)
            link
            fedilink
            116 hours ago

            Your take is illogical, unless you are arguing for some sort of pre industrial communism which is never going to happen because I think any sane person can agree that technology has vastly improved our lives. It has introduced pains sure, but everything is a process.

            That’s quite a leap. Not all technology is worthwhile or improves the overall human experience. Are you getting there by assuming that the world is black and white: embracing all technology or rejecting all technology? If so, I would recommend re-evaluating such assumptions, because they do not hold up to reality.

            Oh and speaking of computers did computers and automated production lines destroy the ability for people to make a living?

            Were they developed and pushed for that explicit reason? No. LLMs are. The only reason that they receive as much funding as they do is that billionaires want to keep everything for themselves, end any democratic rule, and indirectly (and sometimes directly) cause near extinction-level deaths, so that there are fewer people to resist the new feudalism that they want. It sounds insane but it is literally what a number of tech billionaires have stated.

            Maybe temporarily and then new jobs popped up.

            Not this time. As many at the Church of Accelerationism fail to see, we’re at a point where there are practically no social safety nets left (at least in the US), which has not been the case in over a century, and people are actively dying because of anthropogenic climate change, which is something that has never happened in recorded history. When people lost jobs before, they could at least get training or find some other path that would allow them to make a living.

            Now, we’re at record levels of homelessness too. This isn’t going to result in people magically gaining class consciousness. People are just going to die miserable, preventable deaths.

            But I want to understand exactly where you are coming from, like do you think that we should stop all technological progress and simply maintain our civilization in stasis or roll it back to some other time or what?

            OK. Yes. It does appear that you are operating from a black-and-white worldview where all technology is “progress” and all implements of technology are “tools,” with no further classification or differentiation of their value to the species, or consideration for how they are implemented. Again, I would recommend reflection, as this view does not mesh well with observable reality.

            Someone else already made the apt comparison between this wave of AI tech and nuclear weapons. Another good comparison would be phosgene gas. When it was first mass-produced, it was used only for mass murder (as the current LLMs’ financial supporters desire them to be used). Only the better part of a century later did the gas get used for something beneficial to humanity, namely doping semiconductors; even so, its production and use are still very dangerous to people and the environment.

            In addition to all of this, it really appears that you fail to acknowledge the danger posed by accelerating the loss of the planet’s ability to sustain human life. Again, for emphasis, I’ll state: AI is not going to save us from this. The actions required are already known; it won’t help us find them. The technology is being used nearly exclusively to worsen human life, make genocide more efficient, and increase the rate of poverty, all while accelerating global climate change. It provides no net value to humanity in the implementations that are funded. The only emancipation it is doing is emancipating people from living.

            • @[email protected]
              link
              fedilink
              English
              1
              edit-2
              15 hours ago

              To me it seems you are the one who has a black-and-white view of the world. Tool is used for bad = tool is bad, in your worldview. That’s never the case. Tools are tools; they are neither good nor bad. The moral agency lies in the wielder of the tool. Hence my argument: because technologies cannot be uninvented, and all technologies have potentially beneficial uses, we need to focus and shape policy so that AI is used for those beneficial purposes. For example, nukes are deterrents as much as they are destroyers. Would it be better if they had never been invented? Sure, but they were invented, they exist, and once the tech exists you need it in order to remain competitive. Meaning not being invaded willy-nilly by a nuclear power, like Ukraine is right now, which would not have happened if Ukraine had been a nuclear power itself.

              Were they developed and pushed for that explicit reason? No. LLMs are. The only reason that they receive as much funding as they do is that billionaires want to keep everything for themselves, end any democratic rule, and indirectly (and sometimes directly) cause near extinction-level deaths, so that there are fewer people to resist the new feudalism that they want. It sounds insane but it is literally what a number of tech billionaires have stated.

              They have not stated it in those terms; that’s your interpretation of it. I am aware of Curtis Yarvin, Thiel et al., but they are hardly the only ones in control of the tech. That’s not even the point, though. The tech exists, and even if that was the express intention it doesn’t matter, because China will keep pursuing the tech. Which means that we will keep pursuing it, because otherwise they could get an advantage that could become an existential threat to us. And even if we did stop pursuing it for whatever reason (which would be illogical), the tech would not stop existing in the world, just as with nukes, except now all the billionaires would hire their AI workers from China instead of the US. Hardly an appealing proposition.

              Not this time. As many at the Church of Accelerationism fail to see, we’re at a point where there are practically no social safety nets left (at least in the US), which has not been the case in over a century, and people are actively dying because of anthropogenic climate, which is something that has never happened in recorded history. When people lost jobs before, they could at least get training or some other path that would allow them to make a living.

              So your solution is to ban the tech instead of changing policies? Jesus Christ, my guy. Arguments need to be logical, you understand that, right? This entire worldview and rhetoric is so detached from reality that it is downright absurd.

              The problem with the environment, for example, is not that AI exists, but rather that we do not produce enough energy from renewables. Why would the logical solution be to uninvent AI (or ban it entirely, which is essentially the same) instead of changing policy so that energy production comes from renewables? Which, FYI, is what is happening at a faster rate than ever.

              I understand the moral imperative and the lack of patience, but the way the world works is that one thing leads to another; we cannot reach a goal without going through the necessary process to reach it.