• @[email protected]

    I ran the tests with the thinking model, and it got them right. For this kind of task, choosing the thinking model is key.

    • @[email protected]

      the thinking model

      Ugh… can we all just stop for a moment to acknowledge how obnoxious this branding is? They’ve already corrupted the term “AI” to the point of being completely meaningless; are they going to remove all meaning from the word “thinking” now too?

      • Lemminary

        They’ve already corrupted the term “AI” to the point of being completely meaningless

        Did they? AFAIK, LLMs are an application of AI that falls under natural language processing. It’s like calling a rhombus a geometric shape: that’s simply what it is. And this usage of “AI” goes back decades, to things like A* pathfinding and hard-coded decision trees for NPCs (see the sketch at the end of this comment).

        Edit: Downvotes for what, disagreeing? At least point out why I’m wrong if you think you know better.
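
        For illustration, here’s a minimal Python sketch of the kind of hard-coded NPC decision tree that has shipped in games under the “AI” label for decades. The function, states, and thresholds are hypothetical, made up for this example.

        # A hard-coded decision tree: the classic "game AI" referenced above.
        # All names and thresholds are hypothetical, for illustration only.
        def npc_action(health: int, player_distance: float, has_ammo: bool) -> str:
            """Pick an NPC action from a few fixed if/else branches.

            No learning involved, just fixed rules -- and this sort of
            thing has been called "AI" in games for decades.
            """
            if health < 25:               # low health: run away
                return "flee"
            if player_distance < 10.0:    # player in range: attack
                return "shoot" if has_ammo else "melee"
            return "patrol"               # nothing nearby: default behavior

        print(npc_action(health=80, player_distance=5.0, has_ammo=True))  # -> shoot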

        • DefederateLemmyMl

          I think the problem stems from how LLMs are marketed to, and perceived by, the public. They are not marketed as a specific application of this or that AI or ML technique; they are marketed as “WE HAVE AI NOW!”, and the general public, unfamiliar with AI/ML technologies, equates this with AGI, because that’s what they know from the movies. The promotional imagery some of these companies put out, with humanoid robots that look like they came straight out of Ex Machina, doesn’t help either.

          And sure enough, upon first contact, an LLM looks like a duck and quacks like a duck… so people assume it is a duck, but they don’t realize it’s a cardboard model of a duck with a tape recorder inside playing back quacking sounds.