DevOps lets developers innovate faster. But some normal DevOps processes can create the opportunity for bad actors or dangerous code to enter your DevOps toolchains and your software applications. Where are the security risks and how can DevOps teams prevent attacks without slowing down delivery? We’ll provide some easy tips and best practices to secure your toolchain while keeping your development moving. This session is presented by Anchore.
Daniel Nurmi — CTO & Co-Founder, Anchore
Paul Novarese — Senior Solutions Architect, Anchore
Hello, and thank you all very much for joining the session today. We're going to be talking about supply chain security. My name is Daniel Nurmi; I'm the CTO and co-founder of Anchore, and I'm joined today by Paul Novarese, our senior solutions architect, also here at Anchore. Anchore is a company that provides open source technology, products, and services targeted at enabling users to bring continuous security and compliance enforcement directly into the DevOps toolchain. For today's presentation, we're going to focus on a topic that's been very prominent recently in the security space: supply chain security. First, we'll discuss what a supply chain looks like in the context of software, and why malicious actors are starting to target supply chains in order to carry out their attacks.
We'll start with that discussion, and once we get through that piece of the presentation, Paul will jump in with some practical recommendations and examples of how you can bring security and compliance enforcement into your DevOps toolchains today to try to prevent some of these attacks. So, starting with what we've been seeing these days in terms of the bad headlines: there have been a lot of really prominent and serious security incidents over the last several months. Among the more prominent ones, SolarWinds was definitely something that caught the attention of the security world, and more recently there was another incident at a company called Codecov. We see many of these incidents described with the same phrase — supply chain risk, supply chain security — and a large number of customers impacted by these events.
We wanted to explore what that really means and what's happening in these incidents. We can start with a typical iceberg view. Above the surface, there's a lot of discussion about what happened: the top-level hack, the exploits compromising customers and fairly large enterprise organizations, associated somehow with some other element in the so-called supply chain. But if we look at what actually started the whole incident, a lot of these attacks, especially in the supply chain security world, are targeting software suppliers or open source dependencies — something much earlier in the story than where the attack actually lands and the damage is caused.
We can see this by looking at what a supply chain is from a couple of different perspectives. The first is the consumer view: any organization that actually deploys production software. It might be something internal running in production, or it might be internet-facing — a website, an application, whatever that may be. At the end of the day, that's the software being executed, and we call those organizations the consumers. From a consumer's perspective: this is my application running in production, and in order to run it, I depend on a number of different software elements composed together to provide that production environment. Those can be other software suppliers, or open source software.
All of that, in aggregate, comes together to form the actual application. Interestingly, though, from a supplier's view, if we as a software supplier look at what we have visibility into, it looks a little different and starts to expand out — and this is where we see a chain forming. As a software supplier, we know who our consumer is. We also know that we are ourselves, in a sense, a consumer, because we bring in software from other suppliers and open source projects to build our own software, which we then deliver to the consumer. If we zoom out another level and take an overall view, not from any particular perspective inside the chain, the chain — really, the graph — starts to look something like this, where every software supplier is itself also a consumer.
There are a lot of open source elements in here, a lot of different pieces of software. Even this view is pretty oversimplified: any sufficiently complex or sophisticated application running today will have even more interdependent elements than we can fit onto a slide. This is interesting from a functional perspective, and we might look at it as a consumer or a supplier and try to derive some information from it. But from an attacker's perspective — from a malicious actor's perspective — any time we see a view like this, where there are dependencies between various elements that all funnel into something we want to attack (i.e., the consumer application), we realize that all we need to do is compromise any one of these elements.
If we're able to get our malicious code successfully inserted into one of these environments, the attack can flow all the way through to the consumer. It would look like this: if an attacker compromises this open source element, or this software supplier's environment, and gets their malicious code in, that code will make its way into the consumer's application — and that's where the damage is done. An interesting subtlety of attacks like this is that while the software supplier or the open source project — one of these elements in the graph — is the initial place where the attack takes place, those organizations often won't even see it, because the attacker isn't trying to cause damage at that point. They want to stay hidden, because the actual attack takes place at the consumer. For that reason, these attacks can sometimes persist unnoticed for a long time — we've seen that with some of these recent incidents. So what we're going to do is zoom in: as a software supplier, we don't want to be the one whose compromise causes damage to our consumers, even if they're several steps away.
To do that, let's zoom in on what a typical software supplier's infrastructure and mechanisms look like in a modern environment. As a software supplier creating software, we have our own application source code — that lives on the left-hand side of a typical process like this. On the right-hand side, the software is ready: it's built, it's executable, and our customers or consumers have access to it; they can pull it down and run it. In between, we have a number of steps, especially as modern systems add more and more automation: every time an application developer makes a new feature, a bug fix, or a security update, this whole mechanism kicks in, takes that source code, and moves it into a build phase where an executable is created.
Then there's typically a testing phase and a staging phase, where artifacts are signed and configurations are made, and finally that element gets published. Again, from a malicious actor's point of view, these are all targets: if there's a weakness in any one of them, the attacker can identify it and put malicious code into the environment, ultimately compromising the published software. So from the malicious actor's perspective, if any of these elements is weak and can be compromised, there are a number of mechanisms or methods that can be used to mount a successful attack, which ultimately results in the supplier's software — and thus the consumer — being compromised. Now, when it comes to containers specifically: software suppliers and other organizations are really starting to leverage container technology to facilitate this automation of taking new application code and building something deliverable and executable by the customer, and containers are a very compelling way to do that.
Functionally, you get a lot of power by being able to take your application code and bundle all of its elements — its dependencies, data, and oftentimes even a fairly full-featured operating system — inside a container image that encapsulates your application and gives it everything it needs to execute. It's a very convenient and powerful mechanism. But because it's no longer just the application code being shipped in a container environment, this is also a potential source of security risk. Again, from the malicious actor's perspective, we see several elements, and a weakness anywhere is a potential avenue for getting malicious code in.
So when it comes to what those elements actually are, there are a few categories we're going to walk through, with practical examples of how we can begin to protect ourselves and our customers from being victims of supply chain attacks. The first category is one we see a lot: protecting our software from known software vulnerabilities. This is something we should all be doing — making sure that as we build containers and bring in new operating system packages, new language-ecosystem dependencies, and so on, all of that software is checked for known vulnerabilities. Because if it isn't, an attacker might notice that it's been a while since you updated your base operating system packages and that there could be known critical vulnerabilities.
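The known-vulnerability check described above can be sketched very simply: match an image's installed packages against a feed of known-vulnerable versions. This is a minimal illustration, not Anchore's implementation; the package versions and feed contents below are illustrative placeholders (the CVE IDs are real, but the tiny feed is made up for the example).

```python
# Hypothetical known-vulnerability feed: (package, version) -> CVE IDs.
# In a real system this would come from OS and language vulnerability feeds.
KNOWN_VULNS = {
    ("openssl", "1.1.1f"): ["CVE-2021-3449"],
    ("log4j-core", "2.14.1"): ["CVE-2021-44228"],
}

def check_image_packages(packages):
    """packages: list of (name, version) from the image's SBOM.
    Return (name, version, cve) findings for every known match."""
    findings = []
    for name, version in packages:
        for cve in KNOWN_VULNS.get((name, version), []):
            findings.append((name, version, cve))
    return findings
```

The point of the sketch is that once you have an accurate inventory of what is inside the image, the vulnerability check itself is a lookup — which is why the software bill of materials discussed later matters so much.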
The attacker can then figure that the consumer is going to have those same vulnerabilities — that's an avenue in. The second category is about injecting malicious code into existing software. Typically that looks like malware or Trojan horses: either an attacker's malicious code is inserted into an existing executable inside a container image, so that it gets called every time the executable runs, or — the Trojan horse approach — an executable is replaced by something that works the same way but includes malicious code as well. The third general category is a little more subtle, but we see it as a vector of attack: software overrides. These happen at the interface in your DevOps toolchain between having source code and that source code becoming an executable — the compilation or build step can be hijacked in surprising ways.
One example we see attackers using is called typosquatting, which takes advantage of human error: as a developer or someone in a DevOps role, you might accidentally misspell the name of a dependency like PostgreSQL — maybe switching two letters when spelling it out in your dependency list. The attacker publishes a package under that misspelled name, so that when you go to build, the misspelled version is pulled in from the attacker rather than what was intended. A number of other attack types fall into this same category. And finally, there's the notion of credentials. Oftentimes during a development or testing phase, the container image or its surrounding metadata contains internal organizational credentials, hard-coded in the code or in configuration files in order to do some testing or internal work.
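One defensive pattern against the typosquatting scenario just described is to compare declared dependency names against an internal allowlist and flag names that are suspiciously close to, but not in, the list. This is a sketch under assumed names (the `APPROVED` set and the edit-distance threshold of 2 are illustrative choices, not a standard):

```python
# Illustrative allowlist of approved dependency names.
APPROVED = {"postgresql", "requests", "flask"}

def edit_distance(a, b):
    """Classic dynamic-programming Levenshtein distance."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                     prev + (ca != cb))
    return dp[-1]

def suspicious_deps(declared):
    """Flag names close to an approved name but not approved themselves,
    e.g. 'postgersql' (two letters switched) vs 'postgresql'."""
    flags = []
    for dep in declared:
        if dep in APPROVED:
            continue
        if any(edit_distance(dep, ok) <= 2 for ok in APPROVED):
            flags.append(dep)
    return flags
```

A check like this can run as a pipeline step over a requirements file before the build ever reaches a package manager.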
Sometimes those credentials are accidentally left inside a container image. An attacker will watch everything an organization publishes publicly, constantly looking for credentials left in these artifacts. If they find some, they can take those credentials and perform some other type of attack — perhaps one of the other categories — using them. With that, we're going to switch over to Paul, who will give a demonstration and some practical advice about how to protect ourselves, as an organization building software, from these types of attacks.
Okay, thanks, Dan. Now that we've seen the conceptual ideas behind these types of attacks, we'll turn them into practical measures we can take to not only detect these attacks, but in some cases prevent them. So we'll start right here — keep this slide with the different categories of attacks in mind; I'll refer back to it. Before we go too far, though, I want to show how Anchore Enterprise actually scans container images and keeps track of the components inside, because we'll talk a lot about the software bill of materials. When we scan an image with Anchore Enterprise, the first thing that happens is our analyzer opens up the image and builds a list of facts about it: everything we can measure or record about the image is recorded in a software bill of materials.
That includes metadata and the contents of the image — in this case Alpine packages; we're looking at a pretty standard NGINX image here — but also individual files in the image and the metadata about those files, such as permissions and sizes, all the way down to how the image is constructed layer by layer. This is just a quick layer history. Once all of that is recorded in our catalog, we evaluate the image. Evaluation is a very lightweight operation we can perform rapidly, so we can do it continuously over the image's lifespan. Two things happen in that evaluation. First, we take our current view of vulnerabilities — what we know based on the vulnerability feeds we consume from different sources — and show the list of vulnerabilities this particular image is affected by.
That part is completely objective; there's no judgment — it's simply the vulnerabilities we know this image is affected by. More importantly, policy compliance is where we take not only the software bill of materials but also that list of vulnerabilities and convert them into a judgment: a view of policy rules and the violations we've found. In this case, things like: there are some Dockerfile instructions here we don't like, or some licenses we're not permitting. Once we have those, we can give a pass or fail score to the image and let that decide whether it proceeds in our pipeline, or whether we let it deploy into production — there are several different decision points we can use. So what do those policy rules actually look like?
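The SBOM-plus-policy evaluation just described can be sketched as a set of rules applied to a bag of facts about an image. This is an illustrative model, not Anchore's actual policy format: the rule IDs, the `STOP`/`WARN` actions, and the fact keys are all assumptions made for the example.

```python
def evaluate(sbom, rules):
    """Apply each rule's check to the SBOM facts.
    Return ("fail", violations) if any triggered rule's action is STOP,
    else ("pass", violations)."""
    violations = [(r["id"], r["action"]) for r in rules if r["check"](sbom)]
    status = "fail" if any(a == "STOP" for _, a in violations) else "pass"
    return status, violations

# Illustrative rules: fail images that run as root, warn on a license.
rules = [
    {"id": "no-root-user", "action": "STOP",
     "check": lambda s: s.get("user") == "root"},
    {"id": "license-warn", "action": "WARN",
     "check": lambda s: "GPL-3.0" in s.get("licenses", [])},
]
```

Because evaluation only reads the recorded facts, it can be rerun cheaply whenever the policy or the vulnerability data changes, without rescanning the image — which is the property the talk highlights.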
Going back to Dan's list of categories, we can build rules for each of them — and for other things too — and measure each image against those rules. We'll start with the first category: vulnerabilities. Vulnerability scanning is pretty much table stakes at this point; a lot of tools do it. It can be very simple, like saying: if this image has a vulnerability with severity greater than or equal to critical, what action do we take when that rule triggers? In this case, I've got a rule that says we stop at that point — we fail the image. But there are other options: we can look at CVSS scores instead of severities, or whether a fix has been published for a particular vulnerability, or how many days it's been since the advisory was published.
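The "knobs" on a vulnerability rule mentioned here — severity threshold, fix availability, advisory age — can be sketched as parameters on a single gate function. The field names (`severity`, `fix_available`, `age_days`) are illustrative, not a real scanner's schema:

```python
# Severity ordering used to compare findings against a threshold.
SEVERITY_RANK = {"negligible": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def should_stop(vuln, min_severity="critical", require_fix=False, min_age_days=0):
    """Return True when a vulnerability finding violates the gate.
    - min_severity: only findings at or above this severity count.
    - require_fix: only count findings that have a published fix.
    - min_age_days: only count advisories at least this old."""
    if SEVERITY_RANK[vuln["severity"]] < SEVERITY_RANK[min_severity]:
        return False
    if require_fix and not vuln.get("fix_available", False):
        return False
    return vuln.get("age_days", 0) >= min_age_days
```

Tuning `require_fix` and `min_age_days` is a common way to avoid failing builds on findings nobody can act on yet.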
Or how many days it's been since the fix was made available, et cetera — there are a lot of knobs to turn on vulnerabilities, so we're not limited to the severity of a particular finding; we can build something pretty sophisticated here, measuring several different things. The second category is malware, Trojan horses, and the like. The easiest way to scan for these is to use an existing malware scanner — for example, we integrate with ClamAV, and if ClamAV finds something, we stop the image. More specifically, though, we can do detailed checks for particular known malicious code. Crypto miners, for example, are pretty hot right now, and they tend to have extremely reliable fingerprints — so we can look for those traces and say, hey, we see something here.
It could be something as obvious as a checksum of a particular binary we zero in on, or things like the directory structure they use — again, they tend to be very consistent in these things. So there are a lot of fingerprints we can track. Sophisticated attackers might change some of them, but the more things we know about, the more likely we are to catch them, and we can layer on more fingerprints as we find them. Moving into the category of software confusion, there are a couple of scenarios. First, image typosquatting: this is the case where you mean to pull nginx, you type a near-miss name, and you accidentally get a compromised image. There are a couple of things you can do to prevent that.
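Layered fingerprinting, as described above, can be sketched as matching both exact checksums and telltale paths. Everything in this block is a placeholder: the hash and the directory names are made-up stand-ins for real miner signatures, and the function name is illustrative.

```python
# Placeholder fingerprints -- NOT real signatures.
MINER_SHA256 = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}
MINER_PATHS = ("/opt/xmrig", "/usr/local/bin/minerd")  # illustrative paths

def match_fingerprints(files):
    """files: iterable of (path, sha256) from an image's file inventory.
    Return the paths that match any checksum or path fingerprint."""
    hits = []
    for path, digest in files:
        if digest in MINER_SHA256 or any(path.startswith(p) for p in MINER_PATHS):
            hits.append(path)
    return hits
```

Because the inputs come straight out of the SBOM's file inventory, adding a new fingerprint is a data change, not a rescan.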
The easiest thing is to prevent people from using public repositories — in this case Docker Hub, but other public registries as well — and funnel them into private repositories. In this case, I've got internal.harbor.example.com: if someone pulls an image from Docker Hub, I stop the image, and if someone does not pull from this internal repository or registry, we also fail the image. A couple of things to note: attackers will obviously have a hard time pushing images into an internal registry — a lot of the time it will be behind the firewall and they won't have access to it, or they may not even know where it is. There are other things to consider, though, like how you move images into that internal registry once you've vetted them. There are a lot of methods: you can mirror things, or you can individually approve particular images and put them in that repository.
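The internal-registry gate above amounts to checking which registry an image reference points at. Here's a simplified sketch (no digest handling; `internal.harbor.example.com` follows the example in the talk) using the Docker convention that a reference with no registry host defaults to Docker Hub:

```python
TRUSTED_REGISTRY = "internal.harbor.example.com"

def registry_of(image_ref):
    """Resolve the registry host of a Docker-style image reference.
    Refs with no host component (e.g. 'nginx:latest') default to Docker Hub."""
    if "/" not in image_ref:
        return "docker.io"
    first = image_ref.split("/", 1)[0]
    # A host component contains a dot or port, or is 'localhost'.
    if "." in first or ":" in first or first == "localhost":
        return first
    return "docker.io"  # e.g. 'library/nginx'

def allowed(image_ref):
    """Pass only images that come from the trusted internal registry."""
    return registry_of(image_ref) == TRUSTED_REGISTRY
```

A check like this can be applied both to the image being scanned and to every `FROM` line in its Dockerfile, as the multi-stage-build example later shows.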
Likewise, package typosquatting is a little more focused: instead of entire images, individual language packages are targeted — say something in Python, as an example. Many of the methods to prevent this revolve around hardening the package manager — in the Python case, that's pip. There's a tool called pip-sec which aims to help stamp down on this. In this case, I'm going to require any image that uses pip to also install that hardening alongside it, and if it doesn't, I can fail the image. Another common attack — not typosquatting, but what we call dependency confusion — is where, instead of putting typo-named packages into a repository, attackers try to get you to use a different repository than you think you're using.
They'll put a package with the exact name of what you want into that repository, and you'll get the compromised package instead of the legitimate one from the trusted repository. This varies from language to language — I'm using Python as an example — but with pip there's an index URL option that tells pip which repository to pull from. So we can do things like forbid installation from a public repo and force use of an internal repo; if we see somebody using that index URL option to pull from a different repo, we can fail the image. Another thing we can do is look into configuration files. Anchore Enterprise has a secret scan facility, which is really an arbitrary regular-expression search engine.
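The dependency-confusion check just described can be sketched as a scan of Dockerfile `RUN` lines and pip configuration text for index options pointing anywhere other than the approved internal index. The approved URL below is an illustrative placeholder; pip's real options are `--index-url`/`--extra-index-url` on the command line and `index-url`/`extra-index-url` in config files.

```python
import re

# Illustrative internal index URL.
APPROVED_INDEX = "https://pypi.internal.example.com/simple"

# Matches both CLI ('--index-url URL') and config ('index-url = URL') forms.
INDEX_RE = re.compile(r"(?:--)?(?:extra-)?index-url[=\s]+(\S+)")

def bad_index_usage(text):
    """Return every index URL in the text that is not the approved index."""
    return [url for url in INDEX_RE.findall(text) if url != APPROVED_INDEX]
```

Run over Dockerfiles and `pip.conf`/`requirements.txt` contents, any non-empty result becomes a policy violation.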
If those regular expressions match in files, we can generate a rule violation and block the image. In this case, I've put in a regular expression for the extra-index-url option in Python configuration files. The secret scan facility will come in handy in a minute when we talk about credentials, but here we're looking not for credentials but for configuration flags — if we see them, we can fail the image. A couple of other things: if someone is piling multiple repositories into a configuration file, we can stop them there too. Then there are the secret scans proper. As mentioned, Anchore looks for arbitrary regular expressions, and out of the box we supply a bunch of them — things like AWS access keys and secret keys, and SSH private keys. Any of those we see in a file, we can flag and take action on.
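A secret scan of this flavor is, at its core, a set of regexes run over file contents. This sketch uses two widely-known conventions — AWS access key IDs start with `AKIA` followed by 16 uppercase alphanumerics, and PEM private keys start with a `-----BEGIN ... PRIVATE KEY-----` header — but the pattern set and function name are illustrative, not Anchore's shipped rules:

```python
import re

SECRET_PATTERNS = {
    # AWS access key ID convention: 'AKIA' + 16 uppercase letters/digits.
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    # PEM header for RSA/EC/OpenSSH private keys.
    "private-key-header": re.compile(
        r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
}

def scan_text(path, text):
    """Return (path, pattern_name) for every pattern matching this file."""
    return [(path, name) for name, pat in SECRET_PATTERNS.items()
            if pat.search(text)]
```

Since the same engine matches arbitrary expressions, the configuration-flag checks above and the credential checks here are just different entries in the pattern table.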
So what does this look like in practice? Let me go to another image we've got here and open the policy compliance view. You can see I've got a bunch of violations: the malware scanner found crypto miner traces, the secret scan found the extra-index-url option for pip, and the secret scan found AWS access keys in files. All of these cause the image to fail — likewise an SSH private key, and things like the FROM directive in a Dockerfile using a base image from Docker Hub instead of the internal repository (in this case it actually found two, because it was a multi-stage build). There are also vulnerability violations, like a critical vulnerability in a Ruby gem. This is all in our web UI, but you might want to integrate it into your CI/CD pipeline.
What would that look like? In this case, I'm using Jenkins: I've got a build, I built this image, pushed it to a repository, had Anchore analyze it, and we got a failure right here. If we look in the logs, we can see the image was successfully queued for analysis and a fail result came back. What does that mean in particular? Maybe my developers need some feedback. Our plugin puts an Anchore report right in their workspace that enumerates all of those violations — the same things we saw in the web UI — so the developer doesn't need to change tools and log into our web UI just to get this feedback. They can see: the secret scanner found this, the malware scanner found this — and the developer has a roadmap to remediate.
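The Jenkins step just described boils down to a small CI gate: fetch the policy result for the built image, print the violations into the build log for the developer, and return a non-zero status on failure. A minimal sketch, where `get_policy_result` stands in for a real call to a scanner's API (it is an assumed interface, not an actual client library):

```python
def gate(image_ref, get_policy_result):
    """CI gate step. get_policy_result(image_ref) is assumed to return
    (status, violations) where status is 'pass'/'fail' and violations is
    a list of (rule_id, action) pairs. Returns a shell-style exit code."""
    status, violations = get_policy_result(image_ref)
    for rule_id, action in violations:
        # Surface each violation in the build log as developer feedback.
        print(f"{image_ref}: {rule_id} -> {action}")
    return 0 if status == "pass" else 1
```

Wiring the returned code into the pipeline's exit status is what turns the scan from a report into an enforced gate.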
There are also tools for pushing things into JIRA tickets or GitHub issues, et cetera — anything that has an API; there are generic webhooks we can hit to deliver that feedback. Okay, so from a practical point of view, what are the takeaways? What are some quick wins to bring your supply chain security up to snuff? First, make sure everything is centralized: all your CI/CD processes are in one place, and all of your software goes through them — not only the software you're building, but software you're consuming from the outside world. Second, build images from trusted sources: that means using as small a base image as possible — Alpine, or Red Hat's UBI Minimal — and adding only the things that are absolutely necessary.
A couple of other points here: make sure your Dockerfiles are tight, well-written, and comply with best practices. And when you're building, inventory everything you're doing: build those software bills of materials, using something like Anchore Enterprise to construct and store them so you can refer back to them later. Third, automate your security testing and enforcement: incorporate security checks at every stage of your pipeline. Scan the image first; then, once you have that software bill of materials, you can evaluate it very frequently to see whether anything has changed, and push that feedback to the developers sooner rather than later. Also look at the differences in these artifacts over time: if you build up that repository of software bills of materials, you can go back and do forensics to see when something bad was introduced. And finally, number four: deploy only trusted images into production. Don't rely on just the first scan you do — as you go into production, do a last-second evaluation and use something like a Kubernetes admission controller to make sure only images that are passing, and that have been through the entire process, are deployed into production.
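The "last second" deploy-time decision an admission controller makes can be sketched as a pure function: admit an image only if it comes from the trusted registry and its most recent policy evaluation passed. The lookup table below is a stand-in for querying the scanner's API, and the registry name follows the earlier example; this is an assumed model, not a real admission webhook.

```python
# Stand-in for the scanner's record of each image's latest evaluation.
LAST_EVALUATION = {
    "internal.harbor.example.com/web/nginx:1.21": "pass",
    "internal.harbor.example.com/app/api:2.3": "fail",
}

def admit(image_ref, trusted_registry="internal.harbor.example.com"):
    """Return (allowed, reason), mirroring an admission-review decision."""
    if not image_ref.startswith(trusted_registry + "/"):
        return False, "image not from trusted registry"
    if LAST_EVALUATION.get(image_ref) != "pass":
        return False, "no passing policy evaluation on record"
    return True, "ok"
```

In a real cluster, this logic would sit behind a validating admission webhook so that nothing reaches production without a current passing evaluation, regardless of how it was built.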
Okay, that's everything. I quickly want to thank everybody for their time. We'll be taking questions in the Slack channel here for Track 4, and we also have our booth channel in the expo area of Slack.