Our partner, Sonatype, recently released their latest annual State of the Software Supply Chain report and in it provided new evidence that DevOps practices deliver measurable improvements. It also kickstarted another conversation between us.
One of the things we offer organisations is a free scan of their software to produce a bill of materials (a list of the open source components within an application) and a summary of the security vulnerabilities and licence risks they contain. Applications these days are decreasingly coded and increasingly composed from open source components available in online artifact repositories. It's not difficult to understand why developers would take this approach, as Sonatype's Derek Weeks says:
"Innovation is king, speed is critical, open source is at centre stage. Because speed is critical, any developer or CIO or CEO will say if you can do something in one second versus fifteen minutes, choose the one second option. This is why people are choosing the download from the internet option rather than the build from scratch option."
And actually, there is a security advantage to this approach: if you are coding from scratch, you may be building in vulnerabilities that you, and almost everyone else, are unaware of, but that a hacker could exploit. If you are using common artifacts, where vulnerabilities are being identified, recorded and fixed by the security community, you are in a better place. The problem lies in our ignorance: developers rarely check to see if the open source component they have chosen has a known vulnerability. Again, you can understand why. It's not easy or quick to do, and we haven't created a culture where security is their job. The pen tests are done by the security team later, right?
There are several things in that last statement that are not very DevOps (the 'not my job' culture, security and quality not being engineered in as part of everyone's job, the waterfall approach of pen testing afterwards, the silo of the security team), but these aren't the points I am making in the title of this post: 'ignorance is dangerous bliss'.
We know from daily conversations with the people we work with that there is frequently a 'head in the sand' approach to security. Having a scan done that highlights issues means that we go from an ignorant to a cognisant state, the 'cat is out of the bag' moment. This gets all the more interesting when we start to consider the legal implications. I'm not a lawyer, nowhere near it, and any lawyers out there who have answers, experiences, knowledge or ideas about this, please do chime in. But when Derek and I started to have this conversation, one of the first things he pointed me at was this:
"A consumer advocacy group in Germany has filed a law suit against a retailer in Cologne that sold an inexpensive smartphone made by Mobistel. The Mobistel model Cynus T6 was sold in Media Market stores for just 99 euros. Sounds like a great deal, right? Not so much. You see, the phone’s software came with 15 critical and known security vulnerabilities which were not disclosed to the consumer at the time of purchase.
Instead, these security flaws were later identified by investigators from the Federal Office for Information Security (BSI). Unfortunately for Mobistel and Media Market, consumer advocacy groups tend to fight back when manufacturers and retailers sell products to consumers without disclosing “essential information” such as known security defects in a smartphone.
Although the complaint is at an early stage, it points to the possibility that companies manufacturing software applications could be held liable for selling defective products to consumers — in exactly the same way that auto makers have long been held liable."
Who can be sued or fined, for what, by whom, and when? In other industries, notably automotive, the manufacturers are liable if they ship products with manufacturing defects. The software industry doesn't yet work in the same way: most software vendors push the liability onto the consumer through their End User Licence Agreements (EULAs). But should software companies be liable for security breaches? This TechCrunch article raises some important points, and all of it is worth a read, but I'll leave you with this thought from Bruce Schneier:
"Today there are no real consequences for having bad security, or having low-quality software of any kind. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality."

Are you ready if the laws change and you become liable for the flaws in the software you or your organisation produces? You might want to study DevSecOps more closely and learn how to eradicate vulnerabilities and risk as early as possible in the software lifecycle.