• jeeva
    12 days ago

    That’s just… not how they work.

    Equally, from your other comment: a “parameter for truthiness” just isn’t something you can tokenise in a language model. One word can drastically change the meaning of a sentence.

    LLMs are very good at one thing: making probable strings of tokens (where tokens are, roughly, words).
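
    As a toy sketch of what “making probable strings of tokens” means (the vocabulary and scores below are made up, and a real model ranks tens of thousands of tokens at every step, not five):

    ```python
    import math

    # Made-up vocabulary and scores; purely illustrative.
    vocab = ["the", "cat", "sat", "mat", "moon"]

    def softmax(logits):
        """Turn raw scores into a probability distribution."""
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        return [e / total for e in exps]

    # Hypothetical scores for the next token after "the cat sat on the".
    logits = [1.0, 0.5, 0.2, 4.0, 1.5]

    for tok, p in sorted(zip(vocab, softmax(logits)), key=lambda t: -t[1]):
        print(f"{tok!r}: {p:.3f}")

    # 'mat' wins because it is the most *probable* continuation, not
    # because anything checked whether the sentence is *true*.
    ```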

    • @[email protected]
      2 days ago

      Yeah, you can. The current architecture doesn’t do this exactly, but what I’m saying is that a new method that includes truthiness is needed. The fact that LLMs predict probable tokens means the architecture already contains a related concept, because probabilities themselves are a measure of “truthiness.”
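
      A rough sketch of what I mean (everything below is hypothetical, not a real model API): take the probabilities the model already assigns to its own output and read them as a crude confidence score.

      ```python
      import math

      # Hypothetical per-token probabilities a model might have
      # assigned to the tokens of two answers it generated.
      confident_answer = [0.92, 0.88, 0.95, 0.90]
      hedged_answer = [0.35, 0.41, 0.28, 0.33]

      def sequence_confidence(token_probs):
          """Geometric mean of per-token probabilities: one crude way
          to collapse a chain of probabilities into a single score."""
          log_sum = sum(math.log(p) for p in token_probs)
          return math.exp(log_sum / len(token_probs))

      print(sequence_confidence(confident_answer))  # ~0.91
      print(sequence_confidence(hedged_answer))     # ~0.34

      # Caveat (arguably the point of the comment above): this scores
      # how *probable* the text is under the model, not whether it is
      # *true*; a fluent falsehood can still score high.
      ```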

      Also, I’m speaking in the abstract. I don’t care what they can and can’t do. They need to have a concept of truthiness. Use your imagination and fill in the gaps as to what that means.