• Communist
    13
    6 days ago

    Empathy is not illogical; behaving empathetically builds trust and confers long-term benefits.

    Also, the notion that an AI must behave logically is not sound.

    • @[email protected]
      4
      6 days ago

      An AI will always behave logically; it just may not be consistent with your definition of “logical.” Its outputs will always be consistent with its inputs, because it’s a deterministic machine.

      Any notion of empathy needs to be programmed in, whether explicitly or through training data, and the AI will violate that empathy if its internal logic determines it should.

      Humans, on the other hand, behave comparatively erratically, since our inputs are more varied and inconsistent, and it’s not proven whether we can control for that (i.e., does free will exist?).
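
      To make “deterministic” concrete: with fixed weights and a fixed seed, a model is just a function from its inputs to its outputs. Here’s a toy sketch in Python (toy_model is a hypothetical stand-in, not any real system):

      ```python
      import hashlib
      import random

      def toy_model(prompt: str, seed: int = 0) -> str:
          # Hypothetical stand-in for a fixed-weight network: the output
          # depends only on (prompt, seed), so identical inputs always
          # produce identical outputs.
          rng = random.Random(hashlib.sha256(f"{seed}:{prompt}".encode()).digest())
          tokens = ["I", "hear", "you", "that", "sounds", "hard"]
          return " ".join(rng.choice(tokens) for _ in range(5))

      # Same input, same output, every run. Any "empathy" in the reply
      # was put there by the program and its inputs, not felt.
      assert toy_model("I'm sad") == toy_model("I'm sad")
      ```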

    • Lka1988
      2
      6 days ago

      My dude.

      I’m not arguing about empathy itself. I’m arguing that technology is entirely incapable of genuine empathy on its own.

      “AI”, in the most basic definition, is nothing more than a program running on a computer. That computer might be made of many, many computers with a shit-ton of processing power, but the principle is the same. It, like every other kind of technology out there, is only capable of doing what it’s programmed to do. And genuine empathy cannot be programmed, because genuine empathy is not logical.

      You can argue against this until you’re blue in the face, but it will not change the fact that computers do not have human feelings.

      • Communist
        3
        6 days ago

        Well, that’s a bad argument. This is all a guess on your part that is impossible to prove: you don’t know how empathy or the human brain works, so you don’t know that it isn’t computable. If you can explain these things in detail, enjoy your Nobel Prize. Until then, what you’re saying is baseless conjecture resting on the pre-baked assumption that the human brain is special.

        Conversely, I can’t prove that it is computable, sure. But you’re the one asserting your feelings as facts.

      • @[email protected]
        6
        6 days ago

        I don’t care if it’s genuine or not. Computers can definitely mimic empathy and can be programmed to do so.

        When you watch a movie you’re not watching people genuinely fight/struggle/fall in love, but it mimics it well enough.

        • Lka1988
          1
          6 days ago

          Jesus fucking christ on a bike. You people are dense.

            • Lka1988
              2
              5 days ago

              What the fuck is the jump to personal attacks?

              This is the comment that started this entire chain:

              I refuse to participate in this. I love all robots.

              And that’s totally not because AI will read every comment on the Internet someday to determine who lives and who does not in future robotic society.

              I made an equally tongue-in-cheek comment in response, and apparently people took that personally, escalating to personal attacks. You can fuck right off.

              • @[email protected]
                1
                5 days ago

                What the fuck is the jump to personal attacks?

                You mean like: “Jesus fucking christ on a bike. You people are dense.” ?

      • @[email protected]
        2
        5 days ago

        Actually, a lot of non-LLM AI development (and even LLMs, in a sense) is based very fundamentally on concepts of negative and positive reinforcement.

        In such setups, pain and pleasure are essentially the scoring rubric for a generated strategy, and fairly often, in group scenarios, something resembling mutual trust, concern for others, ‘empathy’, arises as a stable strategy, especially if agents can detect or are made aware of the pain or pleasure of other agents, and if goals are achieved more successfully through cooperation.
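
        A minimal sketch of that kind of reward shaping, assuming a simple multi-agent setup (shaped_rewards and the empathy coefficient are illustrative, not from any particular RL framework): each agent’s score blends its own reward with the average reward of the other agents.

        ```python
        import numpy as np

        def shaped_rewards(raw_rewards: np.ndarray, empathy: float) -> np.ndarray:
            # raw_rewards[i] is agent i's own 'pain/pleasure' signal.
            # empathy = 0.0 -> purely selfish scoring;
            # empathy = 1.0 -> each agent scores only on the others' outcomes.
            n = len(raw_rewards)
            others_mean = (raw_rewards.sum() - raw_rewards) / (n - 1)
            return (1.0 - empathy) * raw_rewards + empathy * others_mean

        # Three agents; agent 1 had a bad step. With empathy > 0, agents 0
        # and 2 also score lower for it, so strategies that avoid hurting
        # agent 1 do better over time.
        print(shaped_rewards(np.array([1.0, -1.0, 0.5]), empathy=0.3))
        ```

        Sweeping empathy upward during training is one simple knob for making the cooperative, trust-like strategies described above more likely to win out.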

        This really shouldn’t be surprising, as our own human (mammalian, really) empathy is fundamentally just a biological ‘answer’ to the same sort of ‘question.’

        It is actually quite possible to base an AI more fundamentally on a simulation of empathy than on a simulation of expansive knowledge.

        Unfortunately, the people in charge of throwing human money at LLM AI are all largely narcissistic sociopaths… so of course they chose to emulate themselves, not the basic human empathy that they lack.

        Their wealth exists and is maintained only through the construction and refinement of elaborate systems for confusing, destroying, and misdirecting the broad empathy of normal humans.

        • Lka1988
          2
          5 days ago

          At the end of the day, LLMs/AI/ML/etc. are still just glorified computer programs. They also happen to be absolutely terrible for the environment.

          Insert “fraction of our power” meme here

          • @[email protected]
            1
            5 days ago

            Yes, they’re all computer programs; no, they’re not all as spectacularly energy-, water-, and money-intensive, or as reliant on mass plagiarism, as LLMs.

            AI is a much, much more varied field of research than just LLMs… or rather, it was, until the entire industry decided to go all-in on what five years ago was just one of many radically different approaches, to the point that people now basically think AI and LLM are the same thing.