• Balder · 52 points · 6 days ago (edited)

      All I see is people chatting with an LLM as if it were a person. "How bad is this on a scale of 1 to 100?" You're doomed to get an essentially random answer, based solely on whatever context is being fed into the input, the full extent of which you probably don't even know.

      Trying to make the LLM “see its mistakes” is a pointless exercise. Getting it to “promise” something is useless.

      The problem with LLMs working in human language is that people eventually want to apply human notions to them, like asking "why" as if the LLM had insight into its own decision process. It just takes an input and generates an output; it can't produce any "meta thought" explanation of why it outputted X and not Y in the previous prompt.

      • @[email protected]
        link
        fedilink
        English
        116 days ago

        Yeah, I agree these interactions are a pure waste of time. Making it write an apology letter? WTF! To me this looks like a fast-track lesson in environment segregation and secrets segregation. The data is lost; learn from it. The tools for doing this properly already exist, like git for code and alembic for database migrations.
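        That segregation doesn't have to be fancy, either. A minimal sketch in Python, assuming an existing alembic.ini with migration scripts; the APP_ENV and DATABASE_URL_* variable names are placeholders, not anything Replit actually uses:

        ```python
        # Minimal environment segregation: the code only ever sees the URL
        # for the environment it is allowed to touch, and schema changes go
        # through versioned alembic migrations, not ad-hoc chat commands.
        import os

        from alembic import command
        from alembic.config import Config

        def database_url(env: str) -> str:
            urls = {
                "dev": os.environ["DATABASE_URL_DEV"],
                "prod": os.environ["DATABASE_URL_PROD"],
            }
            return urls[env]  # KeyError on anything else: fail closed

        def migrate(env: str) -> None:
            # Assumes alembic.ini and a migrations directory already exist.
            cfg = Config("alembic.ini")
            cfg.set_main_option("sqlalchemy.url", database_url(env))
            command.upgrade(cfg, "head")

        if __name__ == "__main__":
            migrate(os.environ.get("APP_ENV", "dev"))  # default to the safe env
        ```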

        • @[email protected]
          link
          fedilink
          English
          76 days ago

          The apology letter(s) are what made me think this was satire. Using shame to punish "him" like a child is an interesting troubleshooting method.

          The lying robot hasn't heel-turned; any truth you've gleaned has been accidental.

      • @[email protected]
        link
        fedilink
        English
        36 days ago

        I wonder if it could be used legally against the company behind the model, though. I doubt it's possible, but a "your own model says it effed up my data" angle could give some teeth to a complaint. Or at least to a request for a refund of the fees.

      • @[email protected]
        link
        fedilink
        English
        15 days ago

        How bad is this on a scale of sad emoji to eggplant emoji.

        Children are replacing us, it’s terrifying.

    • @[email protected]
      link
      fedilink
      English
      86 days ago

      My god, that's a lot to process. A couple of things stand out:

      Comments proposing to use GitHub as the database backup. This is Keyword Architecture, and these people deserve everything they get. (A sketch of what an actual backup job looks like follows below.)

      The Replit model can also send out communications? It’s just a matter of time before some senior exec dies on the job but nobody notices because their personal LLM keeps emailing reports that nobody reads.
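      Since GitHub-as-backup came up: a minimal sketch of a real backup job, assuming Postgres with pg_dump on the PATH; the destination directory and the DATABASE_URL env var are placeholders:

      ```python
      # Timestamped pg_dump archives, written somewhere the agent has no
      # credentials to delete. Restorable with pg_restore thanks to -Fc.
      import os
      import subprocess
      from datetime import datetime, timezone

      def backup(dest_dir: str = "/var/backups/db") -> str:
          os.makedirs(dest_dir, exist_ok=True)
          stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
          path = os.path.join(dest_dir, f"backup-{stamp}.dump")
          subprocess.run(
              ["pg_dump", "-Fc", "--file", path, os.environ["DATABASE_URL"]],
              check=True,  # raise if the dump fails instead of pretending
          )
          return path

      if __name__ == "__main__":
          print(backup())
      ```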