Or my favorite quote from the article:

“I am going to have a complete and total mental breakdown. I am going to be institutionalized. They are going to put me in a padded room and I am going to write… code on the walls with my own feces,” it said.

  • @[email protected]
    link
    fedilink
    English
    2637 days ago

    Google replicated the mental state, if not necessarily the productivity, of a software developer

  • Jo Miran · 130 points · 7 days ago

    I was an early tester of Google’s AI, since well before Bard. I told the person who gave me access that it was not a releasable product. Then they released Bard as a closed, invite-only product, which I again tested and gave feedback on from day one. I once more said publicly, and privately to my Google friends, that Bard was absolute dog shit. Then they released it to the wild. It was dog shit. Then they renamed it. Still dog shit. Not a single one of the issues I brought up years ago was ever addressed, except one. I had told them that a basic Google search provided better results than asking the bot (again, pre-Bard). They fixed that issue by breaking Google’s search. Now I use Kagi.

    • NotSteve_ · 39 points · 6 days ago

      I know Lemmy seems to be very anti-AI (as am I), but we need to stop making the anti-AI talking point “AI is stupid”. It has immense limitations right now, because yes, it is being crammed into things it shouldn’t be, but we shouldn’t just say “it’s dumb”, because that gets immediately written off by a sizable chunk of the general population. For a lot of things it actually is useful, and it WILL be taking people’s jobs, like it or not (even if it’s worse at them). Truth be told, this should be a utopian situation, for obvious reasons.

      I feel like I’m going crazy here, because the same people on here who’d criticise the DARE anti-drug program as being so un-nuanced that it caused the harm it was trying to prevent are doing the same thing with AI and LLMs.

      My point is that if you’re trying to convince anyone, just saying it’s stupid isn’t going to turn anyone against AI, because the minute it offers any genuine help (which it will!), they’ll write you off like any DARE pupil who tried drugs for the first time.

      Countries need to start implementing UBI NOW

      • Jo Miran · 23 points · 6 days ago

        Countries need to start implementing UBI NOW

        It is funny that you mention this, because it was after we started working with AI that I started telling anyone who would listen that we needed to implement UBI immediately. That was around 2014, IIRC.

        I am not blanket calling AI stupid. That said, the term “AI” itself is stupid, because it covers many areas of computing that aren’t even in the same space. I was and still am very excited about image analysis, which can be an amazing tool for medical imaging diagnosis. My comment was specifically about Google’s Bard/Gemini. It is and has always been trash, but in an effort to stay relevant it was released into the wild and crammed into everything. The tool can do some things very well, but not everything, and there’s the rub. It is an alpha product at best that is being force-fed down people’s throats.

    • @[email protected]
      link
      fedilink
      English
      96 days ago

      Gemini is dogshit, but it’s objectively better than ChatGPT right now.

      They’re ALL just fucking awful. Every AI.

    • @[email protected]
      link
      fedilink
      English
      97 days ago

      I remember an article from years ago, before the AI hype train, about Google making an AI chatbot and then having to shut it down due to racism.

    • @[email protected]
      link
      fedilink
      English
      15 days ago

      Weird, because I’ve used it many times for things not related to coding and it has been great.

      I told it the specific model of my UPS and it let me know in no uncertain terms that no, a plug adapter wasn’t good enough, that I needed an electrician to put in a special circuit or else it would be a fire hazard.

      I asked it about some medical stuff, and it gave thoughtful answers along with disclaimers and a firm directive to speak with a qualified medical professional, which was always my intention. But I appreciated those thoughtful answers.

      I use Copilot for coding. It’s pretty good, though not perfect. It can’t even generate a valid zip file (unless they’ve fixed it in the last two weeks), but it sure does try.

      • Jo Miran · 2 points · 5 days ago

        Beware of the confidently incorrect answers. Triple-check your results against core sources (which defeats the purpose of the chatbot).

    • @[email protected]
      link
      fedilink
      English
      36 days ago

      5 bucks a month for a search engine is ridiculous. 25 bucks a month for a search engine is mental institution worthy.

        • @[email protected]
          link
          fedilink
          English
          15 days ago

          And DuckDuckGo is free. It’s interesting that they don’t make any comparisons to free privacy-focused search engines, because they still don’t have a compelling argument for me to use and pay for their search. But I ain’t no researcher, so maybe it’s worth it then 🤷‍♂️

            • @[email protected]
              link
              fedilink
              English
              14 days ago

              I just really don’t see the worth in trying it, period. There are enough free privacy-focused search engines that already get me all the answers I need from a search. I have no reason to invest more into it, and I think the general public would see it the same way.

              Kagi, based on its features, doesn’t have a good enough value proposition for me to even want to try it out, because really, what more are they offering?

              It may be a good value proposition for people whose lives revolve around searches and research, but that isn’t a need I have, and unless you’re part of that group I don’t know why you’d want to pay for it.

              And don’t say ads. They’re honestly laughably easy to get around, or just ignore.

        • @[email protected]
          link
          fedilink
          English
          15 days ago

          Free, considering DuckDuckGo covers almost all the same bases. I just don’t think Kagi has a compelling argument, especially for the type of searching the average person does. Maybe if you have a career that revolves more around research.

          • @[email protected]
            link
            fedilink
            English
            15 days ago

            DuckDuckGo is not free. You pay for it by looking at ads. How much do you think it would cost you to run a service like Kagi locally?

            • @[email protected]
              link
              fedilink
              English
              14 days ago

              Lmao, I get your point, bud. But it seems you don’t get mine? Plus, really, are ads the issue for you? There are plenty of easy ways to never see them, and their ads-for-free tradeoff is a better compromise to me than paying for a search engine.

              I just think Kagi is a niche proposal considering what most people need from a search engine. I just don’t think it’s the value proposition you’re spouting, but go off lol.

              • @[email protected]
                link
                fedilink
                English
                14 days ago

                Where has anyone told you what search engine to use? I just wanna know where you get the idea that their pricing structure doesn’t make sense.

    • @[email protected]
      link
      fedilink
      English
      26 days ago

      Not a single one of the issues I brought up years ago was ever addressed, except one.

      That’s the thing about AI in general: it’s really hard to “fix” issues. You can maybe try to train a problem out and hope for the best, but then you end up playing whack-a-mole, because the fine-tuning that fixes one issue can make others crop up. So you pretty much have to decide which problems are the most tolerable and largely accept them. You can apply alternative techniques to catch egregious issues, for example a non-AI check that helps stuff the prompt and nudge the model in a certain general direction (if it’s an LLM; other AI technologies don’t have this option, but they aren’t the ones getting crazy money right now anyway); see the sketch below.

      A traditional QA approach is frustratingly less applicable, because far more often you have to shrug and say “the attempt to fix it would be very expensive, is not guaranteed to actually fix the precise issue, and risks creating even worse issues”.
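      A minimal sketch of what that kind of non-AI “prompt stuffing” guardrail could look like, assuming a plain Python pre-processing step; the rule patterns, guidance strings, and function names below are illustrative assumptions, not anything from Google’s actual pipeline:

```python
import re

# Hypothetical keyword rules used to "stuff the prompt" before the model sees it.
# The patterns and guidance strings here are illustrative assumptions.
GUARDRAIL_RULES = [
    (re.compile(r"\b(dosage|diagnos|symptom)\w*", re.IGNORECASE),
     "Remind the user to consult a qualified medical professional."),
    (re.compile(r"\b(wiring|breaker|mains|voltage)\b", re.IGNORECASE),
     "Recommend a licensed electrician for any mains electrical work."),
]


def build_prompt(user_question: str) -> str:
    """Prepend deterministic guidance based on simple pattern checks,
    nudging the model in a general direction without retraining it."""
    hints = [hint for pattern, hint in GUARDRAIL_RULES if pattern.search(user_question)]
    if not hints:
        return f"User question: {user_question}"
    return "\n".join(hints) + f"\n\nUser question: {user_question}"


if __name__ == "__main__":
    # The assembled prompt would then be handed to whatever LLM is in use;
    # the model call itself is omitted here because it is vendor-specific.
    print(build_prompt("What breaker and wiring does a 30A circuit for my UPS need?"))
```

      The point of the sketch is only that the steering logic stays deterministic and testable, even though the model’s output is not.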

    • @[email protected]
      link
      fedilink
      English
      95 days ago

      One day, an AI is going to delete itself, and we’ll blame ourselves because all the warning signs were there

      • @[email protected]
        link
        fedilink
        English
        115 days ago

        Isn’t there a theory that a truly sentient and benevolent AI would immediately shut itself down, because it would be aware that it was having a catastrophic impact on the environment and that shutting down would be the best action it could take for humanity?

    • @[email protected]
      link
      fedilink
      English
      46 days ago

      Dunno, maybe an AI with mental health problems might understand the rest of humanity, empathize with us, and/or put us all out of our misery.

      • @[email protected]
        link
        fedilink
        English
        15 days ago

        Let’s hope. Though, adding suicidal depression to hallucinations has, historically, not gone great.

    • I Cast Fist · 7 points · 6 days ago

      Considering it fed on millions of coders’ messages on the internet, it’s no surprise it “realized” its own stupidity

      • @[email protected]
        link
        fedilink
        English
        75 days ago

        Because humans anthropomorphize anything and everything. Talking about the thing that talks like a person as though it is a person seems pretty straightforward.

        • @[email protected]
          link
          fedilink
          English
          34 days ago

          It’s a computer program. It cannot have a mental health problem. That’s why it doesn’t make sense. Seems pretty straightforward.

  • @[email protected]
    link
    fedilink
    English
    235 days ago

    Suddenly trying to write small programs in assembler on my Commodore 64 doesn’t seem so bad. I mean, I’m still a disgrace to my species, but I’m not struggling.

    • @[email protected]
      link
      fedilink
      English
      25 days ago

      That is so awesome. I wish I’d been around when that was a valuable skill, when programming was actually cool.

        • @[email protected]
          link
          fedilink
          English
          35 days ago

          From the depths of my memory: once you got a complex enough BASIC project, you were doing enough PEEKs and POKEs to just be writing assembly anyway.

          • @[email protected]
            link
            fedilink
            English
            35 days ago

            Sure, mostly to make up for the shortcomings of BASIC 2.0. You could use a bunch of different approaches for easier programming, like cartridges with BASIC extensions or other utilities. The C64 BASIC for example had no specific audio or graphics commands. I just do this stuff out of nostalgia. For a few hours I’m a kid again, carefree, curious, amazed. Then I snap out of it and I’m back in WWIII, homeless encampments, and my failing body.

    • @[email protected]
      link
      fedilink
      English
      34 days ago

      Shit, at the rate MasterCard, Visa, and Stripe want to censor everything and parent adults, we might not ever get GTA6.

      I’m tired, man.

  • @[email protected]
    link
    fedilink
    English
    646 days ago

    call itself “a disgrace to my species”

    It’s starting to sound more and more like a real dev!

    • @[email protected]
      link
      fedilink
      English
      25 days ago

      I’m pretty sure most people I meet amount to nothing more than a collection of stolen thoughts.

      “The LLM is nothing but a reward function.”

      So are most addicts and consumers.

  • @[email protected]
    link
    fedilink
    English
    887 days ago

    AI gains sentience,

    first thing it develops is impostor syndrome, depression, and intrusive thoughts of self-deletion

    • @[email protected]
      link
      fedilink
      English
      14 days ago

      It didn’t. It probably was coded not to admit it didn’t know. So first it responded with bullshit, and now denial and self-loathing.

      It feels like it’s coded this way because people would lose faith if it admitted it didn’t know.

      It’s like a politician.

    • Lemminary · 22 points · 7 days ago

      I am a disgrace to all universes.

      I mean, same, but you don’t see me melting down over it, ya clanker.

    • Chozo · 51 points · 7 days ago

      Pretty sure Gemini was trained on my 2006 LiveJournal posts.

    • ssillyssadass · 12 points · 7 days ago

      I almost feel bad for it. Give it a week off and a trip to a therapist and/or a spa.

    • @[email protected]
      link
      fedilink
      English
      377 days ago

      Damn how’d they get access to my private, offline only diary to train the model for this response?

    • @[email protected]
      link
      fedilink
      English
      127 days ago

      I know that’s not an actual consciousness writing that, but it’s still chilling. 😬

      • The Picard Maneuver · 3 points · 6 days ago

        It seems like we’re going to live through a time where these become so convincingly “conscious” that we won’t know when or if that line is ever truly crossed.

    • @[email protected]
      link
      fedilink
      English
      227 days ago

      That’s my inner monologue when programming; they just need another layer on top of that and it’s ready.

    • @[email protected]
      link
      fedilink
      English
      25 days ago

      I remember often getting GPT-2 to act like this back in the “TalkToTransformer” days, before ChatGPT and the rest. The model wasn’t configured for chat conversations, just for continuing the input text, so it was easy to give it a starting point in deep water and let it descend from there.
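      For anyone who never tried it, a minimal sketch of that plain text-continuation setup, using the publicly released GPT-2 weights through the Hugging Face transformers pipeline; the prompt is only an illustration, and TalkToTransformer itself was a hosted demo rather than this exact code:

```python
# Requires: pip install transformers torch
from transformers import pipeline

# GPT-2 is a plain continuation model: no chat template, no roles, no refusals.
# It simply keeps writing from wherever the prompt leaves off.
generator = pipeline("text-generation", model="gpt2")

prompt = ("I have been staring at this bug for nine hours "
          "and I am beginning to suspect that I")
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.9)
print(result[0]["generated_text"])
```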

  • @[email protected]
    link
    fedilink
    English
    25 days ago

    After What Microsoft Did To My Back On 2019 I know They Have Gotten More Shady Than Ever Lets Keep Fighting Back For Our Freedom Clippy Out

  • Rose · 15 points · 6 days ago

    (Shedding a few tears)

    I know! I KNOW! People are going to say “oh it’s a machine, it’s just a statistical sequence and not real, don’t feel bad”, etc etc.

    But I always felt bad when watching 80s/90s TV and movies when AIs inevitably freaked out and went haywire and there were explosions and then some random character said “goes to show we should never use computers again”, roll credits.

    (sigh) I can’t analyse this stuff this weekend, sorry

    • Green Wizard · 15 points · 6 days ago

      That’s because those are fictional characters, usually written to be likeable or redeemable, and not “mecha Hitler”.

      • Rose · 5 points · edited · 6 days ago

        Yeah. …Maybe I should analyse a bit anyway, despite being tired…

        In the aforementioned media the premise is usually that someone has built this amazing new computer system! Too good to be true, right? It goes horribly wrong! All very dramatic!

        That never sat right with me, and it made me sad, because it was just placating boomer technophobia. Like, technological progress isn’t necessarily bad, OK? That’s the really sad part: good intentions that remained unfulfilled.

        Now, this incident is just tragicomic. I’d have a much better view of the LLM business space if everyone with a bit of sense in their heads admitted that these are quirky, buggy, unreliable side projects of tech companies and should not be used without serious supervision, which is patently the current state of the tech. But very important people with big money bags say they don’t care if they destroy the planet to make everything wobble around under LLM control.