The question is simple. I wanted to get a general consensus on whether people actually audit the code of the FOSS (free and open source) software and apps they use.

Do you blindly trust the FOSS community? I am trying to get a rough idea here. Do you sometimes audit the code? Only on mission-critical apps? Not at all?

Let’s hear it!

  • @[email protected]
    link
    fedilink
    English
    2
    edit-2
    2 months ago

    If it looks sketchy I’ll look at it and not trust the binaries. I’m not going to catch anything subtle, but if it sets up a reverse shell, I can notice that shit.
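
    That kind of coarse check can even be scripted. A quick sketch in Python (the pattern list is illustrative, and anything written with care won't trip it):

```python
import os
import re

# Crude reverse-shell indicators; purely illustrative. A careful
# attacker won't match a list like this.
SUSPICIOUS = re.compile(r"/dev/tcp/|nc\s+-e|bash\s+-i\s+>&|pty\.spawn")

def scan_tree(root):
    """Yield (path, line_no, line) for suspicious-looking lines under root."""
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            if not name.endswith((".sh", ".py", ".c")):
                continue
            path = os.path.join(dirpath, name)
            with open(path, errors="replace") as fh:
                for no, line in enumerate(fh, 1):
                    if SUSPICIOUS.search(line):
                        yield path, no, line.strip()

for hit in scan_tree("."):
    print(*hit)
```

    It won't catch anything subtle either, but it catches the reverse-shell-in-plain-sight case without reading every file by hand.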

  • @[email protected]
    link
    fedilink
    English
    312 months ago

    I know Lemmy hates AI, but auditing open source code seems like something it could be pretty good at. Maybe that’s something that may start happening more.

    • @[email protected]
      link
      fedilink
      English
      72 months ago

      ‘AI’ as we currently know it is terrible at this sort of task. It’s not capable of understanding the flow of the code in any meaningful way, and tends to raise entirely spurious issues (see the problems the curl author has with being overwhelmed by bogus reports, for example). It also won’t spot actually malicious code that’s been included with any sort of care, nor will it find intentional behaviour that would be harmful or counterproductive in the particular scenario you want to use the program in.

      • Semperverus
        1
        edit-2
        2 months ago

        Having actually worked with AI in this context alongside GitHub/Azure DevOps Advanced Security, I can tell you that this is wrong. As much as we hate AI, and as much as people like to (validly) point out issues with hallucinations, overall it’s been very on-point.

        • @[email protected]
          link
          fedilink
          English
          12 months ago

          Could you let me know what sort of models you’re using? Everything I’ve tried has basically been so bad it was quicker and more reliable to do the job myself. Most of the models can barely write boilerplate code accurately and securely, let alone anything even moderately complex.

          I’ve tried to get them to analyse code too, and that’s hit and miss at best, even with small programs. I’d have no faith at all that they could handle anything larger; the answers they give would be confident and wrong, which is easy to spot with something small, but much harder to catch in a large, multi-process system spread over a network. It’s hard enough for humans, who have actual context, understanding and domain knowledge, to do it well, and I’ve personally not seen any evidence that an LLM (which is what I’m assuming you’re referring to) could do anywhere near as well. I don’t doubt that they flag some issues, but without a comprehensive human review of the system architecture, implementation and code, you can’t be sure what they’ve missed, and if you’re going to do that anyway, you’ve done the job yourself!

          Having said that, I’ve no doubt that things will improve. Programming languages have well-defined syntaxes, so they should be some of the easiest types of text for an LLM to parse and build context from. If that can be combined with enough domain knowledge, a description of the deployment environment, and a model that’s actually trained and tuned for code analysis and security auditing, it might be possible to get results similar to a human’s.

          • Semperverus
            1
            2 months ago

            It’s just whatever is built into Copilot.

            You can do a quick and dirty test by opening copilot chat and asking it something like “outline the vulnerabilities found in the following code, with the vulnerabilities listed underneath it. Outline any other issues you notice that are not listed here.” and then paste the code and the discovered vulns.

    • @[email protected]
      link
      fedilink
      English
      1
      edit-2
      2 months ago

      I’m actually planning to do an evaluation of an AI code review tool to see what it can do. I’m actually somewhat optimistic that it could do this better than it can write code.

      I really want to sic it on this one junior programmer who doesn’t understand that you can’t just commit AI-generated slop and expect it to work. On this last code review, after over 60 pieces of feedback, I gave up on the rest and left it at: he needs to understand when AI-generated slop needs help.

      AI is usually pretty good at unit tests, but this was so bad. It randomly started using a different mocking framework; it mocked entire classes and somehow thought that was a valid way to test them. Tests wasted on non-existent constructors, no negative tests, tests that verified nothing. Worst of all, there were so many compile errors, yet he thought that was fine.

    • @[email protected]
      link
      fedilink
      English
      172 months ago

      Daniel Stenberg claims that the curl bug reporting system is effectively DDoSed by AI wrongly reporting various issues. Doesn’t seem like a good feature in a code auditor.

      • @[email protected]
        link
        fedilink
        English
        92 months ago

        I’ve been on the receiving end of these. It’s such a monumental time waster. All the reports look legit until you get into the details and realize it’s complete bullshit.

        But if you don’t look into it, maybe you ignored a real report…

    • @[email protected]
      link
      fedilink
      English
      42 months ago

      I’m writing a paper on this, actually. Basically, it’s okay-ish at it, but has definite blind spots. The most promising route is to have AI use a traditional static analysis tool, rather than evaluate the code directly.
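
      As a toy illustration of that route: a conventional pass over the AST produces structured findings for the model to triage, instead of handing it raw code. A sketch (the dangerous-call list is illustrative, and a real pipeline would use a proper tool like bandit or semgrep as the static stage):

```python
import ast
import json

# Toy stand-in for a real static analyser: walk the AST and flag calls
# that such tools typically report. Illustrative list only.
DANGEROUS = {"eval", "exec", "system", "popen"}

def static_findings(source, filename="<src>"):
    """Return structured findings a model could triage, instead of raw code."""
    findings = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.Call):
            func = node.func
            name = func.id if isinstance(func, ast.Name) else getattr(func, "attr", None)
            if name in DANGEROUS:
                findings.append({"file": filename, "line": node.lineno, "call": name})
    return findings

sample = "import os\nos.system(cmd)\nvalue = eval(blob)\n"
print(json.dumps(static_findings(sample), indent=2))
```

      The JSON findings, not the raw source, are what gets handed to the model, which keeps it from having to "understand" control flow it can't actually follow.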

      • Semperverus
        3
        2 months ago

        That seems to be the direction the industry is headed in. GHAzDO and competitors all seem to be converging on using AI as a force-multiplier on top of the existing solutions, and it works surprisingly well.

    • @[email protected]
      link
      fedilink
      English
      28
      edit-2
      2 months ago

      This is one of the few things that AI could potentially actually be good at. Aside from the few people on Lemmy who are entirely anti-AI, most people just don’t want AI jammed willy-nilly into places where it doesn’t belong to do things poorly that it’s not equipped to do.

      • @[email protected]
        link
        fedilink
        English
        122 months ago

        Aside from the few people on Lemmy who are entirely anti-AI

        Those are silly folks lmao

        most people just don’t want AI jammed willy-nilly into places where it doesn’t belong to do things poorly that it’s not equipped to do.

        Exactly, fuck corporate greed!

        • @[email protected]
          link
          fedilink
          English
          32 months ago

          I don’t hate AI; I hate how it was created, how it’s foisted on us, the promises that it can do things it really can’t, and the corporate governance of it.

          But I acknowledge these tools exist, and I do use them because they genuinely help and I can’t undo all the stuff I hate about them.

          If I had millions of dollars to spend, sure I would try and improve things, but I don’t.

        • @[email protected]
          link
          fedilink
          English
          142 months ago

          Those are silly folks lmao

          Eh, I kind of get it. OpenAI’s malfeasance with regard to energy usage, data theft, and the aforementioned rampant shoe-horning (maybe “misapplication” is a better word) of the technology has sort of poisoned the entire AI well for them, and it doesn’t feel (and honestly isn’t) necessary enough that it’s worth considering ways that it might be done ethically.

          I don’t agree with them entirely, but I do get where they’re coming from. Personally, I think once the hype dies down enough and the corporate money (and VC money) gets out of it, it can finally settle into a more reasonable solid-state and the money can actually go into truly useful implementations of it.

          • @[email protected]
            link
            fedilink
            English
            52 months ago

            OpenAI’s malfeasance with regard to energy usage, data theft,

            I mean, that’s why I call them silly folks; that’s all still attributable to the corporate greed we all hate. But I’ve also seen them shit on research work and papers just because “AI”. Soo yea lol

    • @[email protected]
      link
      fedilink
      English
      22 months ago

      It wouldn’t be good at it; at most it would be a little patch for non-audited code.

      In the end it would just be an AI-powered antivirus.

  • @[email protected]
    link
    fedilink
    English
    22 months ago

    Packaged products ready to use? No.
    Libraries which I use in my own projects? I at least have a quick look at the implementation, often a more detailed analysis if issues pop up.

  • @[email protected]
    link
    fedilink
    English
    54
    edit-2
    2 months ago

    For personal use? I never do anything that would qualify as “auditing” the code. I might glance at it, but mostly out of curiosity. If I’m contributing then I’ll get to know the code as much as is needed for the thing I’m contributing, but still far from a proper audit. I think the idea that the open-source community is keeping a close eye on each other’s code is a bit of a myth. No one has the time, unless someone has the money to pay for an audit.

    I don’t know whether corporations audit the open-source code they use, but in my experience it would be pretty hard to convince the typical executive that this is something worth investing in, like cybersecurity in general. They’d rather wait until disaster strikes and then pay more.

    • @[email protected]
      link
      fedilink
      English
      5
      edit-2
      2 months ago

      My company only allows downloads from official sources, verified publishers, signed where we can. This is enforced by only allowing the repo server to download stuff and only from places we’ve configured. In general those go through a process to reduce the chances of problems and mitigate them quickly.

      We also feed everything through a scanner to flag known vulnerabilities and unacceptable licenses.
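
      The license half of that kind of scanner boils down to an allowlist check. A minimal sketch (the allowed set and the (name, license) input format are assumptions; real pipelines pull SPDX identifiers from package metadata):

```python
# Minimal license gate: dependencies as (name, SPDX identifier) pairs.
# The allowlist is an example policy, not a recommendation.
ALLOWED = {"MIT", "Apache-2.0", "BSD-2-Clause", "BSD-3-Clause", "ISC"}

def unacceptable(deps):
    """Return the dependencies whose license is not on the allowlist."""
    return [(name, lic) for name, lic in deps if lic not in ALLOWED]

deps = [("left-pad", "MIT"), ("some-lib", "SSPL-1.0")]
print(unacceptable(deps))  # -> [('some-lib', 'SSPL-1.0')]
```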

      If it’s fully packaged installable software, we have security guys that take a look at it. I have no idea what they do or whether it counts as an audit.

      I’m actually going round in circles with this one developer. He needs an open source package, and we already cache it on the repo server in several form factors, from reputable sources… but he wants to run a random GitHub component which downloads an unsigned tar file from an untrusted source.

  • @[email protected]
    link
    fedilink
    English
    132 months ago

    I generally look over the project repo and site to see if there’s any flags raised like those I talk about here.

    Upon that, I glance over the codebase, check it’s maintained, and look for certain signs, like tests and (for apps with a web UI) whether care has been taken in the main template files not to include random analytics or external files by default. I’ll get a feel for the quality of the code and maintenance during this. I generally wouldn’t do a full audit or anything, though. With modern software it’s hard to fully track and understand a project, especially when it relies on many other dependencies. There’s always an element of trust, and that’s the case regardless of being FOSS or not. It’s just that FOSS provides more opportunities for folks to see the code when needed/desired.
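
    Part of that template check can be mechanised. A rough sketch that flags external assets or analytics references in a templates directory (the path, file extensions and host list are assumptions to adjust per project):

```python
import os
import re

# Signs that a web UI ships third-party analytics or external assets by
# default. Hosts listed are illustrative, not exhaustive.
EXTERNAL = re.compile(
    r"""(src|href)\s*=\s*["']https?://|googletagmanager|google-analytics"""
)

def check_templates(template_dir):
    """Return (path, line_no, line) for template lines pulling in external resources."""
    hits = []
    for dirpath, _dirs, files in os.walk(template_dir):
        for name in files:
            if name.endswith((".html", ".htm", ".tmpl", ".jinja2")):
                path = os.path.join(dirpath, name)
                with open(path, errors="replace") as fh:
                    for no, line in enumerate(fh, 1):
                        if EXTERNAL.search(line):
                            hits.append((path, no, line.strip()))
    return hits

for hit in check_templates("templates"):  # adjust the path to the project's layout
    print(*hit)
```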

    • @[email protected]
      link
      fedilink
      English
      32 months ago

      That’s along the lines of what I do as well, but your methods are far more in-depth than mine. I just glance around the documentation, see how active the development is, and get a rough idea of whether the thing is a single-person hobby project or something with a bit more momentum.

      And it of course also depends on whether I’m looking for solutions just for myself or for others, specifically if it’s work related. But full audits? No. There’s no way my lifetime would be enough to audit everything I use, and even with infinite time I don’t have the skills to do that (which of course wouldn’t be an issue if I had infinite time, but I don’t see that happening).

  • @[email protected]
    link
    fedilink
    English
    22 months ago

    Having gone through the approval process at a large company to add an open source project to its whitelist, it was surprisingly easy. They mostly wanted to know numbers: how long has it been around, when was the last update, number of downloads, what does it do, etc. They mostly just wanted to make sure it was still being maintained.

    In their eyes, they also don’t audit closed source software. There might also have been an antivirus scan run against the code, but that seemed more like a checkbox than something that would actually help.

  • @[email protected]
    link
    fedilink
    English
    122 months ago

    It’s not feasible. A project can have tens or hundreds of thousands of lines of code, and it takes months to really understand what’s going on. Sometimes you need domain-specific knowledge.

    I read through those installers that do a curl github... | bash. Otherwise I do what amounts to a “vibe check”: how many forks and stars does it have? How many contributors? What is the release cycle like?
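
    For those curl-pipe-bash installers, fetching to a file first makes the read-through easier. A sketch that pre-screens a saved install script and never executes it (the red-flag list is illustrative):

```python
import re

# Things worth reading closely in an install script before running it:
# nested downloads, encoded payloads, setuid/sudo tricks. Illustrative list.
RED_FLAGS = re.compile(r"curl|wget|base64|/dev/tcp|chmod\s+\+s|sudo")

def screen_installer(path):
    """Print and count red-flag lines. Never executes the script."""
    flagged = 0
    with open(path, errors="replace") as fh:
        for no, line in enumerate(fh, 1):
            if RED_FLAGS.search(line):
                flagged += 1
                print(f"{path}:{no}: {line.rstrip()}")
    return flagged
```

    In practice you’d `curl -o install.sh` first, run the screen, read the flagged lines, and only then decide whether to execute.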

    • @[email protected]
      link
      fedilink
      English
      62 months ago

      Contributor count is my favorite metric. It shows that there are lots of eyes on the code, which makes it less likely that a single bad actor can do bad things.

      That said, the supply chain and sometimes the packaging are very opaque, which almost renders all of that moot.

  • Vanth
    1
    2 months ago

    I don’t because I don’t have the necessary depth of skill.

    But I don’t say I “blindly” trust anyone who says they’re FOSS. I read reviews, and I do what I can to understand who is behind the project. I try to use software (FOSS or otherwise) in a way that minimizes impact to my system as a whole if something goes south. While I can’t audit code meaningfully, I can set up unique credentials for everything and use good network management practices and other things to create firebreaks.

  • @[email protected]
    link
    fedilink
    English
    772 months ago

    Let me put it this way: I audit open source software more than I audit closed source software.

    • @[email protected]
      link
      fedilink
      English
      5
      edit-2
      2 months ago

      I have also looked at the code of one project.

      (Edit: Actually, I get paid for closed source software… so I can’t say the same.)

  • @[email protected]
    link
    fedilink
    English
    132 months ago

    Of course I do, bro. Who doesn’t have six thousand years of spare time, every time they run dnf update, to go check on a million lines of changed code? Amateurs around here…

  • @[email protected]
    link
    fedilink
    English
    3
    edit-2
    2 months ago

    About as much as I trust other drivers on the road.

    As in I give it the benefit of the doubt but if something seems off I take precautions while monitoring and if it seems dangerous I do my best to avoid it.

    In reality that means I rarely check it, but if anything seems off I remove it, and if I have the time and energy I check the actual code further.

    My general approach is minimalism, so I don’t use that many unknown/small projects to begin with.

  • @[email protected]
    link
    fedilink
    English
    62 months ago

    I do not, but I sleep soundly knowing there are people that do, and that FOSS lets them do it. I will read code on occasion, if I’m curious about technical solutions or whatnot, but that hardly qualifies as auditing.

  • irmadlad
    5
    edit-2
    2 months ago

    I do not audit code line by line, bit by bit. However, I do my due diligence: making sure that the code is from reputable sources, seeing what other users report, and searching for any unresolved issues. I can code on a very basic level, but I do not possess the skills to audit a particular app’s code. Beyond my due diligence, I rely on the generosity of others who are more skilled than I am and who can spot problems. I have a lot of respect and admiration for dev teams. They produce software that is useful, fun, engaging, and it just works.

  • @[email protected]
    link
    fedilink
    English
    242 months ago

    I don’t audit the code, but I do somewhat audit the project. I look at:

    • recent commits
    • variety of contributors
    • engagement in issues and pull requests by maintainers

    I think that catches the worst issues, but it’s far from an audit, which would require digging through the code and looking for code smells.
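
    Those signals can be pulled mechanically, too. A sketch that turns basic project metadata into warnings (the field names and thresholds are assumptions, not any particular forge’s API schema):

```python
from datetime import datetime, timedelta, timezone

def health_flags(meta, now=None):
    """Turn basic project metadata into human-readable warnings.

    The `meta` keys assumed here (pushed_at as an ISO 8601 string,
    contributors, open_issues) should be mapped from whatever the
    forge's API actually returns.
    """
    now = now or datetime.now(timezone.utc)
    flags = []
    if now - datetime.fromisoformat(meta["pushed_at"]) > timedelta(days=365):
        flags.append("no commits in over a year")
    if meta["contributors"] < 3:
        flags.append("fewer than 3 contributors (single point of failure)")
    if meta["open_issues"] == 0:
        flags.append("zero open issues (little real engagement?)")
    return flags

print(health_flags(
    {"pushed_at": "2022-01-05T00:00:00+00:00", "contributors": 1, "open_issues": 0},
    now=datetime(2025, 1, 1, tzinfo=timezone.utc),
))
```

    It’s still not an audit, just the same “is anyone home?” check done consistently.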

    • @[email protected]
      link
      fedilink
      English
      7
      edit-2
      2 months ago

      Same here, plus

      • on the phone, I trust F-Droid to do some basic checks
      • I either avoid very small projects or rifle through the code very quickly to see if it’s calling/pinging something suspicious.