• itsame · edited 2 days ago

    No. You would take a base model (e.g. GPT-4o) as a reliable language model and add a set of rules for the chatbot to follow. Every company has its own rules, and it is already common practice to also feed in data such as company-specific manuals and support documents. Not rocket science at all.
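
    As a concrete illustration of that setup, here is a minimal sketch assuming the OpenAI Python SDK; the rule text, the manual excerpt, and the ask_support_bot helper are hypothetical placeholders, not any specific company's implementation:

    ```python
    # Minimal sketch: base model + company rules + support documents.
    # Assumes the OpenAI Python SDK (pip install openai) with OPENAI_API_KEY set.
    from openai import OpenAI

    client = OpenAI()

    # Hypothetical company-specific rules, passed as the system prompt.
    COMPANY_RULES = """You are Acme Corp's support assistant.
    - Only answer questions about Acme products.
    - If the documents below do not answer the question, say so and offer
      to escalate to a human agent.
    - Never discuss competitors, politics, or internal pricing."""

    def ask_support_bot(question: str, retrieved_docs: list[str]) -> str:
        """Answer a customer question using the rules plus retrieved documents."""
        context = "\n\n".join(retrieved_docs)
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[
                # The rules ride along with every request as a system message.
                {"role": "system", "content": COMPANY_RULES},
                # Company manuals / support docs are injected as extra context.
                {"role": "system", "content": f"Relevant documents:\n{context}"},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    # Hypothetical usage with one retrieved manual excerpt.
    print(ask_support_bot(
        "How do I reset my Acme Widget?",
        ["Acme Widget manual: hold the reset button for five seconds."],
    ))
    ```

    The same pattern scales from a hard-coded system prompt to retrieval over a full document store; the point is only that the rules and documents are layered on top of an unchanged base model.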

    • @[email protected]
      link
      fedilink
      English
      12 days ago

      There are so many examples of this method failing that I don’t even know where to start. The most visible, of course, was how that approach failed to stop Grok from “being woke” for, like, a year or more.

      Frankly, you sound like you’re talking straight out of your ass.

      • itsame · 12 days ago

        Sure, it can go wrong; it is not foolproof. Just as building a new model can cause unwanted surprises.

        BTW, there are many theories about Grok’s unethical behavior, but this one is new to me. The explanations I was familiar with are: unfiltered training data, no ethical output restrictions, programming errors or incorrect system maintenance, strategic errors (Elon!), and publishing before proper testing.