Well, I hope you don’t have any important, sensitive personal information in the cloud.

  • @[email protected]
    17 • 1 day ago

    These weren’t obscure, edge-case vulnerabilities, either. In fact, one of the most frequent issues was Cross-Site Scripting (CWE-80): AI tools failed to defend against it in 86% of relevant code samples.

    So, I will readily believe that LLM-generated code has additional security issues, but given that the models are trained on human-written code, this does raise the obvious question of what percentage of human-written code properly defends against cross-site scripting attacks, a topic that the article doesn’t address.

    • @[email protected]OP
      9 • 1 day ago

      There are a few things LLMs are simply not capable of, and one of them is understanding and observing implicit invariants.

      (That’s going to get interesting if the tech is used for a while on larger, complex, multi-threaded C++ code bases. Given that C++ already appears less popular with experienced developers than with juniors, I am very doubtful whether C++ will survive that clash.)

    • @[email protected]
      4 • 1 day ago

      If a system was built to show blog posts written by the site’s author, and an LLM repurposes it to show untrusted user content, the same code becomes unsafe.