• @[email protected]
    314 days ago

    The thing that gets me is they’ve apparently done multiple rounds of “correcting” Grok for being too “Woke” and it just keeps happening!

    Reality has a well-known liberal bias, headass.

        • Tar_Alcaran
          44 days ago

          The big problem with training LLMs is that you need good data, but there’s so much of it that you can’t realistically separate all the “good” from all the “bad” by hand. You have to use the set of all data, plus a much, much smaller set of tagged and vetted “good” data.
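
          Roughly, a minimal sketch of that mixing strategy. Everything here is made up for illustration (the corpora, the `curated_fraction` knob, the 30% oversampling ratio); it’s just the shape of the idea, not anything a real lab has published:

          ```python
          import random

          # Hypothetical corpora: a huge unfiltered scrape plus a small hand-vetted set.
          raw_corpus = ["random web page 1", "random web page 2", "random web page 3"]
          curated_corpus = ["vetted article A", "vetted article B"]

          def sample_batch(n, curated_fraction=0.3):
              """Draw a training batch that oversamples the small curated set.

              The curated set is tiny next to the raw scrape, so it gets upweighted
              rather than anyone trying to hand-filter the entire scrape.
              """
              batch = []
              for _ in range(n):
                  pool = curated_corpus if random.random() < curated_fraction else raw_corpus
                  batch.append(random.choice(pool))
              return batch

          print(sample_batch(8))
          ```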

      • @[email protected]
        4 days ago

        No, they don’t need to generate data to train on. There is PLENTY of white supremacist hate shit out there already.

        The issue is one of labeling and weighting, which is a pretty well-solved problem. It isn’t 100% solved and there will be isolated failures, but “grok” breaks under even the most cursory poking.
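
        For a concrete picture of what “weighting” means here, a toy sketch of per-example loss weighting during fine-tuning. The 1.0/0.2 quality weights and the tiny batch are invented for illustration, assuming a PyTorch-style setup:

        ```python
        import torch
        import torch.nn.functional as F

        # Toy batch: 4 examples, 10 classes. requires_grad stands in for real model params.
        logits = torch.randn(4, 10, requires_grad=True)
        targets = torch.randint(0, 10, (4,))

        # Hypothetical quality weights: 1.0 = hand-vetted example, 0.2 = raw scrape.
        weights = torch.tensor([1.0, 1.0, 0.2, 0.2])

        # Per-example cross-entropy, scaled so the vetted data dominates the gradient.
        per_example = F.cross_entropy(logits, targets, reduction="none")
        loss = (per_example * weights).mean()
        loss.backward()
        print(loss.item())
        ```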

        Don’t believe me? Go look at the crowd who can convert any image- or text-generating model into porn/smut/liveleak in nothing flat. Or, for a less horrifying version of that, look at how techniques like RAG take generalized models and heavily weight them toward what you actually care about.
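
        A stripped-down sketch of the RAG idea, assuming a toy bag-of-words retriever in place of a real embedding model and vector store (the documents and function names are all made up):

        ```python
        from collections import Counter
        import math

        # Toy document store standing in for a real vector database.
        docs = [
            "Grok is the LLM built by xAI.",
            "RAG prepends retrieved documents to the prompt.",
            "Fine-tuning adjusts model weights on domain data.",
        ]

        def bow(text):
            # Bag-of-words term counts; a real system would use embeddings.
            return Counter(text.lower().split())

        def cosine(a, b):
            common = set(a) & set(b)
            num = sum(a[t] * b[t] for t in common)
            den = math.sqrt(sum(v * v for v in a.values())) * \
                  math.sqrt(sum(v * v for v in b.values()))
            return num / den if den else 0.0

        def retrieve(query, k=2):
            q = bow(query)
            return sorted(docs, key=lambda d: cosine(q, bow(d)), reverse=True)[:k]

        def build_prompt(query):
            # The retrieved context steers a general model toward what you care about.
            context = "\n".join(retrieve(query))
            return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

        print(build_prompt("How does RAG steer a general model?"))
        ```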

        Nah. This, like most things Musk, just highlights how grossly incompetent basically all of his companies are. Even SpaceX mostly coasts on being the only one allowed to work on this stuff (RIP NASA and, to a lesser extent, JPL) and then poaches the talent from everyone else to keep them from showing that.