• @[email protected]
    link
    fedilink
    31 month ago

    LLMs have flat out made up functions that don’t exist when I’ve used them for coding help. Was not useful, did not point me in a good direction, and wasted my time.

    • @[email protected]
      link
      fedilink
      English
      21 month ago

      You need to actively have the relevant code in context.

      I use it to describe code from shitty undocumented libraries, and my local models can explain the code well enough in lieu of actual documentation.
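The trick above ("have the relevant code in context") can be sketched roughly like this: paste the undocumented library source directly into the prompt so the model describes what's actually there instead of inventing an API. The function and model names below are made up for illustration; the commented-out call assumes an Ollama-style local endpoint, so adjust to whatever local setup you run.

```python
def build_explain_prompt(source_code: str, question: str) -> str:
    """Embed the library source in the prompt so the model answers
    from the real code rather than hallucinating functions."""
    return (
        "You are reading code from an undocumented library.\n"
        "Answer only from the code between the markers; "
        "if the answer isn't there, say so.\n\n"
        "--- BEGIN CODE ---\n"
        f"{source_code}\n"
        "--- END CODE ---\n\n"
        f"Question: {question}\n"
    )

# 'frobnicate' is a placeholder; paste the real library source here.
snippet = "def frobnicate(x, retries=3): ..."
prompt = build_explain_prompt(snippet, "What does the retries argument do?")

# Sending it to a local model (Ollama-style API; endpoint and model
# name are assumptions, not part of the original comment):
# import requests
# requests.post("http://localhost:11434/api/generate",
#               json={"model": "some-local-coder-model", "prompt": prompt})
```

Nothing fancy, but keeping the actual source in the context window is most of the battle.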

    • @[email protected]
      link
      fedilink
      11 month ago

      Sure, they can certainly hallucinate things. But some models are way better than others at a given task, so it’s important to find a good fit and to learn to use the tool effectively.

      We have three different models at work, and they work a lot differently and are good at different things.