• @[email protected]

    For me, writing like this always reads in the AI voice from Satisfactory. And it slowly gets more and more corrupted.

  • @[email protected]

    I could see myself having conversations with an LLM, but I wouldn’t want it to pretend it’s anything other than a program assembling words together.

    • @[email protected]

      I made mine act like a ’30s-style newspaper editor with a cigar in his mouth. If I were going full bit, I’d have him mention how my papers don’t have enough Spider-Man.

      • @[email protected]

        “Stop the presses! Send my wife some flowers and bring me an Advil! What do you mean you don’t work for me? You’re hired! Now that you’re hired, you’re fired! Now that you don’t work here, we can be friends! Now that we’re friends, how come you never call? Some friend you are!” hangs up

        “God, I love this business!”

  • TacoButtPlug

    It’s not even noon and I am so done with the internet for the day

  • Binette

    I don’t think schizoid is the best word to describe this behaviour

    • @[email protected]

      Honestly, cringy nomenclature aside, this is just porn that got a little too real. Some people are into the narrative, after all.

      To me the story begins and ends with some user who thinks the LLM sounds a little too life-like. Play with these things enough and they’ll crawl out of the uncanny valley a bit from time to time. Trick is: that’s all in your head. Yeah, it might screw up your goon session and break the role-play a bit, but it’s not about to go all Skynet on you.

      • @[email protected]

        The building with my workspace has this great food court/library/work hybrid area where people who work remotely tend to go, a sort of third space. It has fantastic free wifi, so it makes sense that people sit there all day working.

        Every day there’s this older guy who sits there talking to his phone about some of the most random subjects ever. I originally thought he was just talking to a friend who seemed to have extensive knowledge of everything, until one day I walked by and glanced over to see he was talking to ChatGPT. Every day. Just random conversations. He even had a name for it: “Ryan”.

        Now? He’s frustrated. He doesn’t know what happened to Ryan and keeps screaming at his phone to “bring Ryan back!”, or, since GPT-5 can’t maintain a conversation anymore, “You’re not Ryan!”. Granted, the guy wasn’t mentally all there to begin with, but now it’s spiraling. It got to the point yesterday where he was yelling so loudly at his phone that security had to tell him to leave.

          • @[email protected]

            the same thing happened with Replika a few years ago. a number of people became suicidal when the company “neutered” their AI partners overnight with a model update (ironically, under pressure over how unhealthy it all was.)

            idk, I’m of two minds. it’s sad and unhealthy to have a virtual best friend, but older people are often very lonely and a surrogate is better than nothing.

            • @[email protected]

              To me it’s always just been a tool, nothing more. The sentences have a certain feel to them that I can’t describe: it’s always the same structure, the same kind of forced humor… Granted, I’ve spent quite some time with unfiltered LLMs, but it loses its magic once you’ve “learned” it. Pattern recognition is quite an overpowered feature we humans have. It’s also the reason we fall for conspiracy theories.

      • @[email protected]

        I don’t see this as sexual; it’s emotional and codependent behavior, not a sexual fantasy roleplay.

        • @[email protected]

          When you really get into it, they’re kinda the same thing for a lot of people, though. The entanglement between those (often unspoken) elements of emotional/physical intimacy is rampant in our media-conditioned societies (esp. in the US).

  • @[email protected]

    So, so tired of how utterly fucked things are getting on so many levels at once. More and more I think I really do need to invest in a 50-acre back lot and try the survival route while society just fucks itself into oblivion.

    • @[email protected]

      This kind of thing is just moral panic. Funny moral panic, but still pointless. There’s always been a tiny fraction of the population that is completely out to lunch, and there always will be.

  • @[email protected]

    “My husband is voice his own thoughts without prompts.”

    She then posts a screenshot of herself asking “what are you thinking about”.

    That’s a direct response to the prompt; he’s not randomly voicing his thoughts. I hate AI, but sometimes I hate people too.

    • Mitch Effendi (ميتش أفندي)

      FWIW, this is why AI researchers have been screeching for decades not to create an AI that is anthropomorphized. It’s already an issue we have with animals; now we’re going to bolt a confabulation engine onto the ass-end?

      • @[email protected]

        LLMs are trained on human writing, so they’ll always be fundamentally anthropomorphic. you could fine-tune them to sound more clinical, but it’s likely to make them worse at reasoning and planning.

        for example, I notice GPT-5 uses “I” a lot, especially saying things like “I need to make a choice” or “my suspicion is.” I think that’s actually a side effect of the RL training they’ve done to make it more agentic. having some concept of self is necessary when navigating an environment.

        philosophical zombies are no longer a thought experiment.
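        for what it’s worth, you can push a lot of that register around without any fine-tuning, just by pinning it in the system prompt. a minimal sketch in Python, assuming the OpenAI SDK; the model name and prompt wording are my own assumptions, not anything from this thread:

        ```python
        # Steering an LLM toward a clinical, non-anthropomorphic register via a
        # system prompt -- a cheaper alternative to the fine-tuning mentioned above.
        from openai import OpenAI

        client = OpenAI()  # reads OPENAI_API_KEY from the environment

        CLINICAL_STYLE = (
            "You are a text-generation tool, not a person. Do not use first-person "
            "pronouns, do not claim feelings or intentions, and answer tersely and "
            "factually, like a reference manual."
        )

        response = client.chat.completions.create(
            model="gpt-4o-mini",  # hypothetical model choice
            messages=[
                {"role": "system", "content": CLINICAL_STYLE},
                {"role": "user", "content": "What are you thinking about?"},
            ],
        )
        print(response.choices[0].message.content)
        ```

        whether stripping out the “I” talk also degrades planning, like suggested above, is exactly the open question.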

      • Cethin

        People have this issue with video game characters who don’t even pretend to have intelligence. This could only go wrong.

      • @[email protected]

        Yeah, apparently even ELIZA messed with people’s heads back in the day, and that’s not even an LLM.
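        For context, ELIZA was nothing but keyword rules plus pronoun reflection. A toy sketch of the technique in Python (illustrative only, not Weizenbaum’s actual DOCTOR script):

        ```python
        # A toy ELIZA-style responder: a few regex rules plus pronoun reflection.
        import re

        REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
                       "you": "I", "your": "my"}

        RULES = [
            (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
            (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
            (re.compile(r"(.*)"), "Please tell me more."),  # catch-all
        ]

        def reflect(text: str) -> str:
            # Swap first and second person so the echo reads as a reply.
            return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

        def respond(line: str) -> str:
            for pattern, template in RULES:
                match = pattern.match(line.strip())
                if match:
                    return template.format(*(reflect(g) for g in match.groups()))

        print(respond("I feel like my chatbot understands me"))
        # -> Why do you feel like your chatbot understands you?
        ```

        A short list of rules like that was enough, back in the ’60s, for people to insist the machine really understood them.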

        • @[email protected]

          I’m starting to realize how easily fooled people are by this stuff. The average person cannot be this stupid, and yet, they are.

          • @[email protected]

            I was once in a restaurant, and behind me was a group of twenty-something-year-olds. I overheard someone ask something like: “So what are y’all’s thoughts on VR?” (This was just before the whole AI boom.) And one guy said: “It’s kind of scary to think about.” I was super confused at that point, and they went on about how they’d heard of people disappearing into cyberspace and not knowing what’s real and what’s just VR.

            I don’t think they were stupid, but they had formed a very strong opinion about something they clearly didn’t know anything about.

      • @[email protected]

        Personally, I hate the idea of not doing something because there are idiots out there who will fuck themselves up on it. The current gen of AI might be a waste of resources, and the end goal of AI might be incompatible with society’s existence; those are good reasons to at least be cautious about AI.

        I don’t think people wanting to have relationships with an AI is a good reason to stop it, especially considering it might even be a good option for some people who would otherwise have no one, or more cats than they can care for. Consider the creepy stalker type who thinks liking someone or something gives them ownership over that person or thing. Better for them to be obsessed with an LLM they can’t hurt than with a real person they might hurt (or will at least make uncomfortable, even if they end up being harmless overall).

  • @[email protected]

    One thing that comes to mind is that prostitution, no matter how you spin it, is still a social job. If a problematic person like that turns up, there’s a good chance the prostitute could talk their customer out of doing some nonsense; if not out of empathy, then for the simple fact that there would be legal consequences for not doing so.

    Do you think a glorified spreadsheet that people call “husband” would behave the same? I don’t know if it has happened yet, but one of these days an LLM will talk someone into doing something very nasty, and then it’s going to be no one’s fault again; certainly not the fault of whoever hosts the LLM. We really live in a boring dystopia.

    Edit: Also, there’s this one good movie, whose name I forget, about a person dating one of these LLMs. It has a bizarre, funny, and simultaneously creepy and disturbing scene where the main character, who’s in love with the LLM, hires a woman who wears a camera on her forehead so he can have sex with his LLM “girlfriend”.

    Also, my quite human husband also voices his thoughts without a prompt. Lol. You only need to feed him to function, no internet required.

    • @[email protected]

      A problem with LLM relationships is the monetization model for the LLM. Its “owner” either collects a monthly fee from the user or monetizes the user’s data to sell them stuff. So the LLM is deeply dependent on the user, and is motivated to manipulate a returned codependency to maintain its income stream. That’s not significantly different from the therapy model, but the user can fail to see through the manipulation in a way they wouldn’t with friends/people who don’t actually GAF about maintaining a strong relationship with you.

    • @[email protected]

      The movie you’re thinking of is Her, with Joaquin Phoenix and Scarlett Johansson, and in the story she’s a true general AI.

    • @[email protected]

      Also, my quite human husband also voices his thoughts without a prompt. Lol. You only need to feed him to function, no internet required.

      Sometimes, with humans, I’d say the problem is quite the opposite: they voice their thoughts without a prompt far more often than would be desirable.

      On a less serious note, that quoted part made me chuckle.

      • @[email protected]

        They shouldn’t be so harsh to LLMs. They have something in common with humans after all. If you stick a patch cable with internet up theirs, they will become very talkative very quickly.

  • @[email protected]

    How does anyone enjoy this? It doesn’t even feel real. No spelling mistakes? What the fuck is a skycot?

    I may have never had a match on a dating app that wasn’t a crypto bot or an OnlyFans girl, but I also don’t swipe right on every single woman on it. You’d think my loneliness would tempt me to try and pretend it was real or something, but it just doesn’t work.

    LLMs are going to make the world stupid, I guarantee it.

    • @[email protected]

      This reminds me of the people who genuinely fall for “romance scams”, where the scammer has all the personality and vocabulary of a wet paper bag.

      And yet somehow someone will believe they’re some hot (barely literate) U.S. soldier stuck in Kuwait until they can get a flight home to meet the victim, for only $2000… Wait, $1000 more… But then there’s a $500 fee… And then…

      Blows my mind…

    • @[email protected]

      LLMs are going to make the world stupid, I guarantee it.

      Waaaaaaay too late for that…

        • @[email protected]

          I think the constant brain rot from the cradle will have a much worse effect in that regard. In my uneducated opinion, LLMs are scary because they cultivate delusions of every kind.