VendorDome: Putting the Sec in DevOps

Whether you are a DevOps expert or just getting started, security now needs to be fully integrated into the DevOps process. In this discussion, DevSecOps experts will share what to look out for as you implement your own DevSecOps strategy. We'll debate top-of-mind topics including learnings from the SolarWinds and Codecov attacks, how "shift left" security fits in your DevOps processes, and the pros and cons of tool diversity vs. standardization. And of course, we'll take your questions live during the session.


This session is presented by GitLab and Anchore.


Neil Levine

VP of Product, Anchore


Sam White

Sr. Product Manager, GitLab


Paul Holt

General Manager, EMEA, Anchore

Transcript

00:00:14

Good afternoon, good morning, good evening, wherever you happen to be listening to this from. My name is Paul Holt. I am Anchore's general manager here in EMEA, based just outside of London. Welcome to this VendorDome, which is putting the security in DevOps. I'm joined by two of my esteemed colleagues today, Neil Levine from Anchore and Sam White from GitLab, and we are going to spend the next hour or so debating and asking some questions around what it means to have a DevSecOps strategy, how you want to implement that, some of the most recent attacks and the implications they have on the software supply chain — and generally taking questions as well from your good selves. If you want to post questions, we are in Ask the Speaker track number four. As I say, we're live, so we are going to be taking questions as they come through, so please do not hesitate to post them in there. So we'll kick things off, and the first thing I want to pose to our speakers is essentially what DevSecOps means to you. Perhaps, Sam, do you want to take the baton first and kick things off for us?

00:01:33

Yeah, absolutely. So DevSecOps really means that security is incorporated entirely into the software development life cycle. Rather than the traditional approach of developing some code, throwing it over the fence to security, having them run some scans against it, and then eventually some of those results making their way back to the development team to action, it means security is really tightly integrated with the entire process, so that developers are able to continuously scan their code and see results right there as they get them. And that security continues not only during the development process, but really before it — hardening the developers' workstations — all the way through afterwards, when you ship into production and you continue to secure it. So DevSecOps to me really means that security has become part of that entire process, and it's just a piece of what we do as we develop software.
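To make that concrete, here is a minimal sketch of what that integration can look like in GitLab CI, using GitLab's bundled scanner templates; the build job and stage layout are illustrative, not from the session:

```yaml
# .gitlab-ci.yml -- security scans run in the same pipeline as the build,
# on every push, rather than as a separate late-stage security phase.
stages:
  - build
  - test

build-app:
  stage: build
  script:
    - echo "compile and package the application here"

include:
  # GitLab-maintained templates; each adds scan jobs to the test stage.
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Dependency-Scanning.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
```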

00:02:34

I'll pass it over to you.

00:02:36

Yeah. You know, I remember when DevOps came up as a term of art in the industry. I think it was Adam Jacob, who's one of the people that sort of helped promote it, who said it's a cultural movement as much as anything else. It was about creating a culture — DevOps is about having the developers and operations collaborate together and not see each other as siloed organizations. So he always emphasized the cultural aspect: that you had to change things up organizationally and see yourselves working as a group. And I think DevSecOps, if you extend that concept or way of looking at it, is absolutely about bringing security into the process — having security sit with developers and operations and seeing them all as part of the working group to ship an application, and not as a group to throw something over to.

00:03:26

So, you know, the tools obviously matter, right? In practice, what that means is you need to have security checks implemented everywhere. Security needs to advise on where you need to be doing security checks and help structure those such that they're part of the day-to-day workflow. But it's important not to overemphasize the tools and the technology — because we're vendors, we tend to do that, of course. It's absolutely a collaboration thing, where even when you're sitting down and you haven't written any code and you're having your design meetings and everything else, security is there as part of it. So it's important not to strip out the human element. But yes, what it means to most organizations today is that they're integrating security into development and not leaving it right until the end, which is traditionally where it's been implemented.

00:04:12

Yeah. And on that very point, in terms of some of the challenges that we have seen — as vendors, as we talk to organizations about implementing security into their CI/CD and their software development processes — what are the typical challenges we've seen as organizations have tried to embed security into these development workflows? Neil, please.

00:04:41

So there is still — and this is why I say some organizations are not doing DevSecOps — we still interact with security teams who are playing catch-up to where the developers have been. Often the developer or operations side has moved faster in terms of building automation and building that whole end-to-end pipeline, and the security team are playing catch-up to that and working out where they can add things. The telltale sign that we often see is a security team who don't know what's going on in the development process, but have seen, say, a container registry appear with a whole bunch of content, and they've got to go and work out what's inside it. Or they're starting to be asked to produce reports or audits. Often that's a sign of, well, you're not quite doing DevSecOps — even if you do have a few security checks implemented in the system, if the security team don't know what's going on, then you don't have that cultural integration.

00:05:35

So that's definitely been one of the markers of where there is a problem. And I think it just reflects the fact that inherently developer teams do move faster than security teams. Whether it's picking new tools or just writing code, they tend to be far more nimble and less constrained than the security teams. So I think one of the frictions is just the inherent burden: security teams have a lot to look after, and they can't unshackle themselves from things once they've been involved in an application. That is an ongoing process for them.

00:06:10

Yeah. Sam, any additions to that?

00:06:14

Yeah, I would say that's absolutely true. Just from a practical standpoint, I find that most security organizations report up through an entirely different reporting structure from the development team. Where you have a security team or application security team reporting up through a CISO or CIO, they're just not talking a lot with development, at least in most traditional organizations. So they may go throw things over to development and say, hey, we found this critical vulnerability, you need to go fix it. But I find that traditionally, that's really where the collaboration starts and that's where the collaboration ends, and it becomes kind of an unfriendly relationship, where development says, you know, the security team is trying to make us go do this work, and they're trying to make us fix things.

00:07:03

And from the security team's perspective, the development team is creating these vulnerabilities that now we have to deal with, and they're not fixing them. So it becomes a little bit of an unfriendly relationship — at least that's the way I find most legacy organizations work — and I think a big part of that is just a function of that reporting structure. It's the organizations that are a little bit more mature in their security and development where I find they've broken down those barriers, broken down those walls between the two organizations, so there's more collaboration than just, hey, you've got a vulnerability, go fix it. There's actually a dialogue happening on a more regular basis. There's education happening around best practices regarding coding and security — how to code properly to avoid these things in the future.

00:07:54

And there's a two-way dialogue, where development is also informing security about vulnerabilities that may or may not actually be applicable, depending on the nature of the code — perhaps there was a finding, but the code isn't actually exercising that area of the product, so in practice it's not really an exploitable finding, even if there is an underlying vulnerability there. So those more mature organizations tend to have more of that collaboration, but in a lot of the legacy organizations that struggle, it's almost like two different companies trying to interact.

00:08:34

Yeah, it's interesting. One of the challenges of DevOps was obviously to get developers and operations to work closer together, and now with DevSecOps it's developers, security and operations working together. And I guess if you've been successful in adopting DevOps practices, then you should have the cultural foundations on which you can do DevSecOps, hopefully.

00:08:54

Yeah, although I think there's an interesting challenge there, which is that — to Sam's reporting point — it was probably easier to get alignment between developers and operations, because ultimately the center of gravity was closer to them, whereas security has a much broader remit. It's inherently always carrying a lot of the legacy stuff too. So I think there may be some additional challenges with security in particular. Also, its mindset tends to be very risk-averse, for obvious reasons, whereas — less so operations, maybe, but certainly the dev teams — have learned that move fast and break things is an okay model to have: if you've got the automation, you can resolve it. Security teams don't want to adopt a break-things attitude, because the cost can be so high. So I think there are some specific cultural resistance factors in security teams, which means that even if you've got DevOps, it may still be a challenge to bring in security in some sense.

00:10:06

Yeah, and I guess it could also differ depending on how organizations arrange themselves. There could be different reporting structures, where security, development and operations ultimately report up into the same hierarchy, whereas security typically would end up under a CISO type.

00:10:20

Yeah, although it is interesting — once again, an interesting signifier is whether they have people tasked with security responsibilities outside the security organization. Sometimes you have champions, right? A security champion inside the developer team or the operations folks: people who are not trained as security experts, but are essentially there as a front line, who have taken on the responsibility of trying to triage things for the security team and to be the first among equals when it comes to security within the developer or operations organization. So I think, again, that's another sign that you're doing DevOps well: you've recognized that you do need to vary things up, and you can't just have a homogeneous group of developers and operations folks — some of them do actually need to take on, if not become true security experts, at least some security responsibilities within that team.

00:11:16

So again, I think that's a good sign that you're doing well along the path of maturing towards DevSecOps.

00:11:24

Yup, that makes sense. And we hear this term shift left, as organizations shift security further left into the development process. Just thinking that through, and what it means for organizations — how do we see that manifesting itself within the developer organizations themselves, as we try to get developers more familiar with security? Sam, do you want to take that one?

00:11:52

Yeah, sure. I would say shift left is really all about giving ownership for security at the point where a vulnerability or security problem is potentially introduced. So if it's the developer writing the code, that's where we want to have visibility into the security impact of what they're doing and any potential vulnerabilities. That saves the developers a lot of potential embarrassment and potential conflict, and it reduces a lot of that back-and-forth with the security team, if you're actually able to bring visibility into the security impact of their code right there in front of the developers, as early in the process as possible — while they're writing the code, when they're submitting their merge requests, when they're going through those review processes. All of that can happen well in advance of them actually being ready to ship the product. There are some aspects of security that don't necessarily belong with the actual development team, but I would say the underlying principle of shift left is really: get the ownership for security in alignment with the group that actually owns that area of the product, whether it's the code, the infrastructure, whatever that may be.

00:13:12

I think one of the things with shift left — it was obviously a marketing term, so it can be variously interpreted, but Sam's identified the essence of it. The thing that is most compelling, certainly for senior members of an organization, is that it's essentially an economic argument, right? If you identify things early, it tends to be cheaper and quicker to fix them. So ultimately shift left is an efficiency and economics argument: the cost of effort is much lower, just because the fix is closer to the time when the problem was created. The developers either haven't context-switched out of that chunk of code or whatever it is they're working on, so you tend to be able to resolve it faster.

00:13:57

Obviously, if you identify it earlier in the pipeline, you prevent things going into production, and everybody ultimately wants that too. But coming back to the previous question about maturity and what DevSecOps is, I think a lot of people think: if I just do a check in CI/CD, I've done shift left, and then I'm doing DevSecOps. And there's a little bit of a lowest-common-denominator effort there — I've shoved some checks into my pipeline in CI, so hey, I'm doing DevSecOps. But it's not doing the check that matters; it's actually fixing things, and doing it in an efficient way, such that you get the automation and the speed of response. If you haven't put in the back-end organizational infrastructure to actually do something with the data, then you've just got more data: you may have failed fast, but you haven't resolved fast.

00:14:54

And so you haven't got your automation, and you haven't got the true goal of what you want from a DevSecOps pipeline, which is to still ship fast with high quality. But yeah, shift left is definitely, as Sam said, getting the check to the team who introduced the problem as quickly as possible.

00:15:13

Yeah. And as two organizations — with Anchore and GitLab right at the center of this shift-left mentality — what are a few practical examples of organizations trying to shift security left? Just give us a few examples of what you can do.

00:15:35

Yes. So from our end, the examples I see are obviously organizations using the GitLab product to do that. Typically that would be not only running those checks as part of the CI pipeline, but actually surfacing those results front and center to the developers as they write the code and as they submit those MRs for review. They're able to see the results inline, in the natural workflow they'd go through to submit an MR, have it reviewed and eventually merged into the code base. They can see it up front and make a decision: is this added risk that I've newly introduced into the code, because of the changes I've made, something I'm willing to stand by and want to continue moving forward with?

00:16:26

Do I want to recommend to the security team that it still be merged into the code base anyway, or was that just a coding mistake on my part, and I want to go back and fix it before I even start the review process, before anyone else has to get involved? And to Neil's point earlier about the time and efficiency savings: we've seen some companies actually implementing this, and they've saved an incredible amount on the back end, because those vulnerabilities get resolved by the developer who created them, right up front, and no one else has to get involved. You save five or ten other people's time if that vulnerability never even comes out the back end. And in the end, what actually gets submitted to be merged goes through an approval process, and those are things the development team is confident about: if there's a vulnerability there that's severe enough, there's a reason it's there, and it's justifiable — either it's a false positive and that code's not being executed and it doesn't apply in this situation, or it's absolutely necessary and the risk is worth the benefit it provides.

00:17:36

Yeah. To Sam's general point, I think developers now are more used to having commits thrown back at them, and it's not just security issues — there are lots of other things that get checked for. So security has managed to slipstream itself into the automatic pull- and merge-request bounces that developers are used to getting. The SCMs, the source code management tools, have been very good at implementing those, and the whole pull-request/merge-request mentality has really lent itself to adding security checks in. So that's definitely the area where developers experience it. And then the second area, which we touched on before, is in the CI/CD systems themselves.

00:18:20

So when the build is happening — and again, it's an interesting question of how quickly they get that feedback if your build takes a very long time; to my earlier point, the developer may have already context-switched and can't remember what that piece of code was. But typically, most mature teams who have been doing DevOps are used to having to worry about the nightly builds: if they passed the tests today, are they good to go, can QA pull them down and test on them, and so on and so forth. So another easy area is just that the nightly build failed because of security checks, and you've got to resolve those as your top priority, because essentially the pipeline has been shut down.
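As a sketch of that kind of gate — assuming a job image with Anchore's open source grype scanner installed; the image reference uses GitLab's predefined CI variables:

```yaml
# Fail the build when the just-built image carries findings at or above
# the chosen severity, shutting the pipeline down until they're resolved.
container-scan:
  stage: test
  script:
    # grype exits non-zero when the --fail-on threshold is met.
    - grype "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" --fail-on high
```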

00:18:59

So I think that's the other area where most developers start to experience things. But when you cross into the operational domain, there's a slight shift-left aspect to worrying about deployment time, which is distinct from runtime. When you're about to run your application, are there any security concerns at that point? Because circumstances could have changed: the application could have been absolutely a hundred percent passing all its tests and security tests at developer time and build time, and when it was stored in your artifact registry or wherever, but when you're actually deploying it, there may be new issues, or maybe a policy change in the organization. So deployment time is now a new thing the operations teams are aware of, because there's a lot of static checking you can do on your artifacts and your deployment scripts and so on. And then you're into traditional endpoint territory — at that point it's runtime, and it's very traditional. So there are three main areas, I'd say.

00:20:10

Okay, fantastic. There are a couple of points I want to drill down on there, but first, on a related note, there's a question from somebody listening in: is there a position to take now, a more proactive approach to security in DevOps — adopting things like penetration testing with respect to your CI pipelines, thinking of it like an attacker? What do you think about that type of approach?

00:20:34

Yes. I see we've also got another question about Codecov and all of the alarm that introduced when it came out last week — or I think the week before; it's hard to keep track of so many breaches these days. I think the developer infrastructure tools, and the overall pipeline tools, are now a major factor. SolarWinds highlighted this, obviously: endpoint protection, as the name implies, was so focused on the end, where the application was run, and we're realizing that the tools supporting the SDLC are a huge factor. SolarWinds has driven everybody to focus on that now. And I think this is one area where the industry is a little bit far behind.

00:21:29

I think two areas get surfaced. There's trust in the supply chain: can you trust what you've got, and is there a chain of authority so you can work out what you have and where it comes from — and where did they get it from, and so on and so forth. But then there's the actual developer infrastructure itself. So yes, pen testing against your CI/CD systems and your registries — pretty much anything a developer touches — is now table stakes, and as an industry we're still catching up on that. But it's certainly the case that this lends itself to a sort of everything-as-code approach, right? We certainly have policy as code, which is fairly well understood, but even the pipeline jobs which are doing your builds should be stored as code. You should be able to audit them; they shouldn't be random bash scripts that aren't in source control management. Everything should ideally be stored in your SCM, so you can scan it statically there, as much as possible. But then you have to pen test all of these platforms now — everything that can potentially introduce a vulnerability, you've got to go pen test.
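A small illustration of that everything-as-code point — the job definition lives in the repository and the tool image is pinned, so the pipeline itself can be reviewed and audited; the registry path and digest are placeholders:

```yaml
# The build logic is version-controlled in .gitlab-ci.yml and ./ci/,
# not in ad hoc scripts on a build server, so changes go through review.
build:
  stage: build
  # Pinning by digest instead of a floating tag means the job can't
  # silently pick up a replaced (tampered) builder image.
  image: registry.example.com/builders/app@sha256:<digest>
  script:
    - ./ci/build.sh   # itself stored and reviewed alongside the code
```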

00:22:47

Yeah. I view a big part of that as: an effective pen test involves running multiple types of scans. If you've got SAST running, you've got DAST running, you've got fuzz testing, you've got your traditional dependency scanning running — all of these come together to give a more holistic view. It's not really sustainable to run a whole bunch of manual pen tests against your application; although that's good information, it tends to be pretty slow going, and it's really hard to keep up — it's just not scalable or sustainable. So as far as pen testing goes, there is a huge place for it; I think the biggest area is in writing rules for some of these other tests that are out there and available, so you've got good 360-degree coverage of the applications you're shipping out the door.

00:23:38

As Neil mentioned, these recent attacks have brought to light that we need to consider more than just the code we're writing and shipping. We also need to think about our supply chain; we even need to think about the infrastructure we're building on and the workstations we're developing the code on. So there's a pretty broad surface area there. I would say the underlying principles are, again: run as many different tests as you can, and automate as much as possible, just so it becomes sustainable and scalable. That way the focus can shift more towards how we write new rules, how we add new automated tests and additional checks into the coverage we're already getting.

00:24:23

Yeah. And so we're starting to head down the conversation around the software supply chain, so let's touch on that, because it's an equally timely topic, given the executive order which was pushed out last week in the U.S. What do we mean when we talk about the software supply chain, and why is it so important for organizations nowadays?

00:24:48

Yeah. So your supply chain is really anything that feeds into what you're delivering, and that can actually be far more extensive than you'd think initially. Thinking outside the box: you've got your initial code that you're developing; you're probably including some number of third-party libraries as part of that, and those typically undergo some degree of scrutiny. But as we've seen in these recent attacks, it's more than just that. It's the infrastructure you're using to develop it; it's the infrastructure you're using to test your code; as I mentioned, it's even the workstations you're developing the code on. So there's a lot around that. Everything that goes into getting that software built, shipped and delivered is a potential point of attack, and so you've got to think holistically about the security of all of those aspects and make sure you're adequately scrutinizing them and applying a zero trust model across the board, to every single component, all along the way.

00:25:53

Yeah. So if supply chain is where stuff is coming from, in general, then, again, to come back to it: it's both the infrastructure you're using — where does it come from, who's providing it to me; SolarWinds was an infrastructure piece, essentially — and, from an application perspective, if you're writing an application: where are the ingredients you're putting into the application coming from? People always tend to think, well, it's all my code — but of course it isn't; there are dependencies. And this is where particularly the open source community has had so much success over the past 20 years, but we now realize it's brought a tremendous weakness along with it around security.

00:26:41

So supply chain is equally off-the-shelf software: can you trust what you're given? What does the vendor you're interacting with do from a security perspective, and do you understand that? A large part of this executive order is trying to give confidence, as best as possible, that your suppliers are adhering to best practices and have a certain level of competence. But then it's not just off-the-shelf: if you have no commercial relationship, can you trust the stuff that you're putting in? And what's been particularly challenging in our space, as we've moved to things like containers — and now we're back with compiled languages like Go becoming popular after the Python era — is that the opaqueness of what you've got is now really, really high. A developer may think they know what the dependencies are pulling in, because they've listed them explicitly, but when it goes into a build, a whole bunch of other stuff gets pulled in, completely unseen by the developer, and often unseen by anybody else, because these containers are compiled into lumpy, monolithic binaries, essentially. So there's this sense of: what's going on with my vendors, and what's going on with the open source tools? They have different challenges, and you can have different approaches to managing the supply chain, when it comes to security, with those two groups.

00:28:13

Yeah. I would say these things have really exposed which organizations have been living by zero trust and which haven't. Zero trust is a bit of a marketing buzzword, but it's also a really good practical security principle to follow: verify everything that comes in. And as we've learned in the past month or so, even things that come from reputable companies you still have to verify, because there's no guarantee that the contents of the binary being handed over are safe. So I suspect we're going to see more along the lines of the executive order that was passed recently, both in other countries and even in companies individually, where security teams are likely to start applying deeper scrutiny to the security practices of their software vendors. And that's going to be across the board — not just dependencies for the code that's being developed, but the software being used to develop that code.

00:29:17

And even, in the case of SolarWinds, IT management software, right? Any software that's being used can be a point of attack. So there's also likely, I suspect, to be a shift towards increased behavior monitoring: okay, we can verify that this actually came from a trusted entity, but can we verify that the contents haven't been tampered with? In a lot of ways, the insider threat, if you will, is very real — whether it's malicious, or even just accidental, or a compromised account that is able to go in and change some of that source code. A company can do their best to protect their own employees from that and monitor it, but it's another thing when you're accepting software from a third party: you don't necessarily have full line of sight into all of their practices — how susceptible are their employees to phishing campaigns, and exactly what protections do they have on their networks to make sure attackers haven't gotten a foothold? So I think the solution tends to point towards behavioral-monitoring-type approaches, where a piece of software, whatever it may be and whatever its purpose, comes in and goes through some sort of scrutiny or analysis before it's accepted — and possibly even after it's accepted, on an ongoing basis, just to monitor its behavior and verify it continually, to make sure it's behaving in an appropriate way.

00:30:52

Yeah, so there are two things there. In terms of tamper detection: yes, you're going to see a whole bunch of vendors focusing on tamper detection throughout the SDLC now; there's no doubt about it. There's going to be a really interesting question as to how you work out whether something's been tampered with, because software changes as it goes through the pipeline — it's just inherent, as you're deploying things and credentials get added into a system, or configurations get added. So tamper detection, I think, is going to be a challenge, but it's obviously something vendors ought to be focusing on. The zero trust one is, in some ways, actually harder, because, especially with the open source commons, in some instances it's going to be very difficult to validate — to feel confident that you know who is behind something, because often there is nobody behind it.

00:31:52

It's somebody who wrote something five years ago, and they've disappeared off, and they're not responsible for it anymore. So this is where it's interesting to see the Linux Foundation in particular — you have sigstore, and the work the OpenSSF is doing — really trying to raise the level of security amongst these open source projects. But ultimately there's always going to be a gap there, I think, which is going to be difficult for lots of organizations to address with a zero trust model, unless they literally rebuild every piece of software they bring in, or they vet every line of code. There are always going to be some challenges there. And so I think it goes back to Sam's point: even if you try to go for a zero trust model, you're still going to have to hit everything you bring into your domain with all your tests, and you can raise your level of confidence, but it's never going to be complete.

00:32:43

And so again, I think this comes back to — well, a slight variation on shift left — you're going to have to accept that there will be risk somewhere. You're bringing it down as much as possible, but you've got to be continuously looking at this stuff over and over again; there's never a total seal of approval you're going to have on anything. And so that continuous testing, done through shift left — as soon as there is a problem, that's the way you find it immediately — and that speed of response doesn't change in terms of urgency and importance for your pipeline.

00:33:14

Yeah. I mean, security always has been, and probably always will be, a bit of a cat-and-mouse game. It's really just a question of: can you be faster and more agile than your attackers? It's almost impossible to have a perfectly impenetrable system of any kind — almost everything is hackable with enough time and resources. So it's really a question of, given the events of the last month, how do organizations go and up-level their game to take that next step and regain the advantage? It'll be interesting to see how all of that unfolds.

00:33:53

Yeah, and we'll get to that in terms of key takeaways. But I'm interested — particularly as organizations continue to use more and more open source within their own application development — what's the best practice, what's the minimum threshold, in terms of checks and due diligence that people should be doing as they bring open source code into their own CI/CD workflows?

00:34:19

Well, one thing that seems to be on the rise, and is important, is that you essentially curate your own library of open source content. The first thing we see more mature organizations doing is not allowing people to just pull things randomly off the internet — which was the de facto model, and has been and still is for many organizations. In the more well-resourced teams, where we see this, developers should be able to say, I want to use this library, but as quickly as possible you bring it in, you scan it, and you essentially take responsibility for that open source project — or the package or the library, whatever it is — and host it in a trusted repository inside your organization.

00:35:02

And by taking responsibility, that means you've got to stay on top of bringing in the latest version and making sure that gets scanned and made available, and then implementing the checks and the policy rules to ensure that people can't go out to random registries — they have to use the secure registries you have inside your organization. So that's one practice we've seen. We're also starting to see a bit more due diligence even before content is brought inside the four walls: actually asking, look, is there a security point of contact for this project? When was the last commit made? Is there a governance structure checking who has commit rights? A little bit more due diligence on the security health of an open source project.

00:35:51

I think at the moment that's still far too manual; it's not actually easy to automate. And again, that's part of what the OpenSSF is trying to do — give badges and other things, so you can have an essentially automated way of determining that a project meets a certain level of competency when it comes to security response. So these are early things to do. But just preventing people from pulling stuff down from anywhere they want, and getting insight into what you're actually using — maybe even before you do any of this, coming back to where we started: do you know what libraries you have? Do you know what third-party code you've managed to pull in? That's often the first step.
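One hedged sketch of what that curation gate can look like — a CI job that has to pass before a candidate library is published to the internal registry. The paths, publish script and registry are hypothetical; syft and grype are Anchore's open source SBOM and vulnerability-scanning tools:

```yaml
vet-new-dependency:
  stage: vet
  script:
    # Catalog what the candidate package actually contains...
    - syft dir:./vendor/candidate -o json > candidate-sbom.json
    # ...and block it if it carries known high/critical vulnerabilities.
    - grype sbom:./candidate-sbom.json --fail-on high
    # Only packages that pass get pushed to the trusted internal registry.
    - ./scripts/publish-to-internal-registry.sh ./vendor/candidate
```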

00:36:33

Yeah. Sam, anything you would add to that?

00:36:37

Yeah. One of the benefits of open source is that you have access to the code — you don't get that with closed source. So you can actually apply the zero trust model a little bit better with an open source project than with a closed source one, because you have visibility, a greater level of transparency, there. Typically you're able to see the past commits for the project, and you're able to see the actual code. So, as Neil mentioned, a good starting point — if you're doing nothing else — is at least do some form of security review of the project, to get a sense for how security conscious the maintainers are and what their level of security maturity is. And then, just to echo what Neil said, next steps would be running some basic scans against it.

00:37:30

Are there open known CVEs against that project? If you have the means, make a clone of the code and store it in-house, so you can run additional scans against it — SAST scans, DAST scans; you can run those because you actually have access to the source code. So there's a lot of power there. And lastly — this would be beyond first steps, but long-term — implement some more behavior-type monitoring, as we discussed earlier, to actually observe the behavior once it's compiled and running in your environment.

00:38:06

Yep. And we collectively have a very interesting mutual client in the United States, the DoD, who have done something very similar in their creation of a hardened repository called Iron Bank. Neil, do you want to say a few words about that? I'm sure it will be of interest to some of the people listening in.

00:38:26

Yeah. They're probably one of the more paranoid organizations out there, but yes, they implemented a methodology, created a platform to support it, and actually even produced a reference architecture for others to try and copy — they really tried to seed this approach inside the U.S. federal government. Iron Bank contains commercial, off-the-shelf software and open source software, but essentially they try to rebuild everything. They don't trust a container that's given to them; they want to rebuild it from scratch themselves, and so they need access to all of the components to do so. That's quite a heavy lift — it takes a lot of resources, because there's an awful lot of open source software and a lot of commercial projects out there — but they're a well-resourced organization and able to do it.

00:39:26

That might be too heavy a lift for a lot of other organizations, where ultimately I think it's: what do you trust least? Focus on that first. You can't go and rebuild everything from scratch; you're going to have to accept that some content is given to you. So again, this comes back to pushing the burden onto the supplier to prove their competency, and then you have to trust that transmission of their competence over to you. The DoD is essentially saying: we're not going to have any of that, we don't trust anything, we'll rebuild everything from scratch, we're going to go inside and look at all the contents. And they try to do that in as machine-driven a way as possible, but at the end of the day they still often have humans going through and making a judgment call around certain things — all software has some vulnerability of one type or another; it's just, what is the risk? — which is what they're essentially trying to determine. So it's an interesting model, and the reference architecture is public, so people can see how they're doing it, and enterprises can maybe learn and take some approaches from it. But they certainly represent the extreme end of zero trust: they really don't trust anything.

00:40:48

So, on a similar theme — I think we were starting to head down this track — we're starting to hear this term software bill of materials. Once again, a couple of weeks ago, in the executive order published in the U.S., the software bill of materials was referenced as a critical piece of data which software vendors will need to provide. Perhaps, Sam or Neil, either one of you could give us just an explanation of what we mean by the software bill of materials, and why is it important?

00:41:22

Yeah, sure, I can take a pass at that. Really, we're just looking for a comprehensive listing of all of the different components that have gone into producing a software package. That would include any third-party libraries, any other dependencies — and their dependencies — that may be involved. If it's a containerized image, obviously you've got packages installed in the underlying container operating system, so you want a listing of those as well, in addition to the dependencies of the software you wrote. It's not a terribly hard list to pull together, but having that list shows you have an awareness of everything that was included — and if you're not able to put that list together, and you don't even know what's going into your application, obviously that's a red flag; that's concerning. So an initial starting point is just to put together that list, so you have a comprehensive inventory of what comprises that software.
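As a concrete sketch, generating such a listing is a one-liner with Anchore's open source syft tool; the image name here is a placeholder:

```sh
# Produce an SBOM for a container image in SPDX JSON format...
syft myapp:1.0 -o spdx-json > myapp.spdx.json

# ...or in CycloneDX, the other widely used standard.
syft myapp:1.0 -o cyclonedx-json > myapp.cdx.json
```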

00:42:20

Yeah, that's right. And what the industry is working on right now is trying to create some standards, because — as Sam said — creating yourself a bill of materials is not difficult; it takes effort, but it's not rocket science. It's actually what you do with the bill of materials that is the most important thing. So there's a movement to try and standardize some formats, because ultimately you want to be able to track this across domains, across teams, across organizations and everything else. SPDX and CycloneDX are probably the two main standards, and I think you're going to hear a lot more about those — vendors like us producing SBOMs in these formats for customers or partners to look at. And then, coming back to what we were talking about earlier with tampering —

00:43:07

I think the SBOM, again, will give you something to compare and contrast between stages in the pipeline, to see whether things have changed, and to be able to ask the question: why has it changed? Your SBOM won't necessarily be static from one end to the other — the build process is inherently going to change some things — but it's going to be an interesting source of truth around: have things changed which we expected to change, and did they change in the right way, versus unexpected change? Why did the fingerprint or the hash on a certain binary change when it shouldn't have? There's something strange there. So I think SBOMs are going to help with some of this tamper detection, and with spotting variation over time.
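A rough sketch of that comparison — diffing the package inventories of the image as built and as deployed. It assumes syft's native JSON output with an artifacts array of name/version entries; the image tags are placeholders:

```sh
# Inventory both stages, normalized to sorted "name@version" lines.
syft myapp:build  -o json | jq -r '.artifacts[] | "\(.name)@\(.version)"' | sort > build.txt
syft myapp:deploy -o json | jq -r '.artifacts[] | "\(.name)@\(.version)"' | sort > deploy.txt

# Expected build-time changes aside, anything appearing only on the
# deploy side is the "why did this change?" question to go and ask.
diff build.txt deploy.txt
```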

00:43:53

But yeah, the big thing in the EO, the executive order, is that, again, it tries to raise transparency and visibility about what people have — to help organizations get better visibility. Before you do anything else: do you know what you're running? When SolarWinds hit, the first question for a lot of organizations was, where do we run the SolarWinds agent, which was compromised? And that wasn't an easy question for many organizations to answer quickly without having to go through many layers of the organization, whereas an SBOM can give you that answer very, very quickly. So even from a response perspective, they can be useful.

00:44:30

So I guess the SBOM is going to be foundational, really, to understanding your software supply chain — it feels as though it's going to be a really foundational component.

00:44:39

Yeah. I mean, the analogy is ingredients, right? If you're producing food and at any point it goes through a factory which has got peanuts in it, that's immediately identified, because you've got to advertise that that's a risk on your product, so people who have allergies can know. So again, it's that transparency, end to end, so any risks can be surfaced and addressed — either for the end consumer, or even for you as the producer, so you get better insight. Back to our point earlier: if you're using an open source project which is inherently insecure — like an old version of a library, of OpenSSL or whatever it is — you immediately see that and can respond, if your tests have failed to spot something.

00:45:26

Yep, great. So I want to respond to one of the poll questions we ran on the Slack channel, which suggests that most of the people listening are doing security scans in their CI/CD process — that that's where the majority of scans are taking place. Let me pose the question as to why you should potentially broaden that and look at either end of the workflow, and the importance of doing scans elsewhere within your software development processes. Sam, do you want to take that one? I think it probably pulls together quite a few of the things we've already spoken about.

00:46:03

Sure, yeah, I can talk about that. That is interesting — and actually, in some ways, that response to the poll is encouraging, because if I were to pick one place to do it, that's a really good place: if you're doing it as part of your CI/CD workflow, it's automated, it's part of the development process, and it's happening before it goes out to production. If there's anywhere to add next after that, I would say it's to make sure that you're scanning regularly in production. It's one thing to make sure that everything you ship is clean when it goes out the door; the challenge is that the security world doesn't stop. The attackers don't stop, vulnerabilities don't stop, and for the packages, the software that you shipped today — tomorrow somebody's going to find a vulnerability in some component of that.

00:46:54

And all of a sudden it's insecure again. I've talked with organizations that are a little bit embarrassed to mention it, but they actually have code that is two, three-plus years old, and it's still out there running in production, and it hasn't been touched, it hasn't been updated, it hasn't been scanned — and guaranteed, the number of security vulnerabilities that exist in that application is sky high. So CI/CD is an absolutely great place to start; like I said, that result is actually fairly encouraging to me. I would say the next step for most of the viewers here would be to start thinking about that production environment and scanning it: how do we work to shift that left, to make sure the developers are aware of what's actually running today in production and what vulnerabilities exist there, so that, again, it can get back to the team that's responsible for making the fix.
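A hedged sketch of that production re-scan, as a GitLab scheduled pipeline job — the deployed-image reference is a placeholder, and grype stands in for whichever scanner you use:

```yaml
rescan-production:
  stage: audit
  rules:
    # Run only from a pipeline schedule (e.g. nightly), not on every push.
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    # Re-scan the image that is deployed today, so CVEs published after
    # it shipped still surface to the team that owns the fix.
    - grype registry.example.com/myapp:production --fail-on critical
```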

00:47:51

Yeah, I agree — CI/CD is definitely, if there's only one thing you can check off the list, a pretty good one. But coming back to what we said at the very beginning: shift left doesn't mean, hey, I'm doing endpoint and I've got some checks running in CI/CD. It really is a much more holistic approach. So as long as that's not the end of the journey for organizations — where they feel they're doing DevSecOps because we've done a shift-left thing, we've got a CI/CD check — ultimately, this is where shift left maybe wasn't the best term, because really you want to be doing security continuously, everywhere, right?

00:48:31

That's the goal here: every step along the way, you've got some security checks. They may be faster in some areas and thinner in others, based on the context, but really you need to expand across the spectrum and get away from just runtime, which I think we're starting to do now. And there is a danger that people over-correct and say, hey, we're just doing everything in CI/CD, we're good. You really do need to check everywhere, at all times — continuous security is really the defense-in-depth effect. Even if it's the same check, you want to scan a registry, because the content could be tampered with in a registry, and scan when you're deploying, because things could have changed since build time. CVEs are coming out at such a fast rate now that even if it's only a day between the build and the deploy, something could have happened. So, very encouraging, but I would hope that's not the end of the journey for people,

00:49:26

and that it's, you know, just the beginning. Yes.

00:49:29

And I guess one of the frustrations that we hear about is obviously things like false positives — just too much information, and really working out what's important and what's not. How should organizations start thinking about working out what they should be focused on, and what actually is not that important? Neil, do you want to take that one?

00:49:54

Ah, false positives — the bane of security teams. They're just a fact of life; until we have, you know, Skynet taking care of us, there's always going to be something we're dealing with here. Look, the burden is definitely on the vendors here. We've got to do our best to try and reduce this as much as possible, and it's a constant process to try and get better at it. I think — and this is where I'll create a rod for my own back — a lot of this is around reporting and metrics. What I'm starting to see, which is really encouraging, is, as part of RFIs, business metrics which can show how many false positives you're generating and how quickly those get resolved.

00:50:41

and whether they're getting better over time. So I think there is a sense of: don't just accept the pain and let the user deal with it as a natural byproduct. Actually show that you're getting better, and how you're getting better. It doesn't mean you're going to get down to zero, I don't think that's ever practical, but at least you're cumulatively working through the pain, whether it's in areas where we've got better at, say, Java, or you've got better with your containers, or whatever it is. So there's a lot you can do around this. You know, we've given customers automated policy recommendations, or helped developers maintain their metadata in the right way, which can then ensure we can do detection more efficiently. But yeah, I would say, and my sales team are probably already bracing themselves here: ask your account reps, what did you do around this? How do you get better? How can you show me you're getting better? The burden is absolutely on us as vendors to deliver that to users. Yeah.
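
As a rough illustration of the kind of metric Neil describes, here is a minimal sketch that computes false-positive rates per ecosystem per month from analyst triage records. The Triage record shape is hypothetical; real data would come from your scanner's disposition history.

```python
"""Sketch: track what fraction of findings analysts dismissed as false
positives, per ecosystem, per month, so you can see whether the rate
is actually trending down over time."""
from collections import defaultdict
from dataclasses import dataclass


@dataclass
class Triage:
    month: str           # e.g. "2021-06"
    ecosystem: str       # e.g. "java", "containers"
    false_positive: bool


def fp_rates(records: list[Triage]) -> dict[tuple[str, str], float]:
    totals: dict = defaultdict(int)
    fps: dict = defaultdict(int)
    for r in records:
        key = (r.month, r.ecosystem)
        totals[key] += 1
        fps[key] += r.false_positive  # bool counts as 0 or 1
    return {k: fps[k] / totals[k] for k in totals}


# Invented example records for illustration.
records = [
    Triage("2021-05", "java", True), Triage("2021-05", "java", False),
    Triage("2021-06", "java", False), Triage("2021-06", "java", False),
]
for (month, eco), rate in sorted(fp_rates(records).items()):
    print(f"{month} {eco}: {rate:.0%} false positives")
```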

00:51:45

Yeah, I agree. I think a lot of the burden is on the vendors to help improve this in the long run. I see a lot of the false positive rates dropping as users start to consolidate their scanning tools. Right now we see a pretty diverse array of tools out there, a lot of different interfaces, a lot of different places to go to explore these vulnerabilities. As you bring all of that into a more centralized place, where you've got all of those results in one location, you're able to start doing cross-correlation, and even pair that with observing things along the CI/CD pipeline as you're executing your tests: was this library ever actually utilized, or is it just present on the system because it's required as a dependency, while the code is never actually exercised, right?

00:52:39

Or even in production: being aware of what firewall rules are in place, knowing what intrusion detection and intrusion prevention rules you have in place. Those can be mitigating factors that may make even a valid vulnerability non-exploitable. I don't think the industry is there yet, but in the long term I see this converging in a way where all of that different intelligence comes together to inform the vulnerabilities being found by the original scanners, and to add some additional light there: to say, look, yes, this is a critical vulnerability, but by the way, we saw that this code was never exercised during any of your tests, and there's this mitigating factor in production. So priority-wise, you want to drop it really low on your list of things to take care of, because in this case it's probably a false positive, or at least a non-exploitable vulnerability. Yes.
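
Sam's "was this library ever actually utilized?" question can be approximated for a Python project with the standard library alone. A minimal sketch, requiring Python 3.10+ for packages_distributions: after the test suite runs, compare installed distributions against the modules that were actually imported. Real reachability analysis is far more involved; this only shows the shape of the signal.

```python
"""Sketch: flag installed distributions never imported during tests.
Run at the end of a test session, e.g. from a pytest hook."""
import sys
from importlib.metadata import distributions, packages_distributions


def unused_distributions() -> set[str]:
    # Map top-level module names back to the distributions providing them.
    provides = packages_distributions()
    imported = {m.split(".")[0] for m in sys.modules}
    used = {dist for mod in imported for dist in provides.get(mod, [])}
    installed = {d.metadata["Name"] for d in distributions()}
    return installed - used


if __name__ == "__main__":
    for name in sorted(unused_distributions()):
        print("never imported during tests:", name)
```

And here is a sketch of the cross-correlation itself: start from the scanner's severity, then downgrade when the code was never exercised in tests or when a compensating control such as a firewall rule blocks the exploit path. The input flags are hypothetical; your own tooling would have to supply them.

```python
"""Sketch: combine scanner severity with runtime context to produce an
effective priority. The downgrade rules here are illustrative."""
from dataclasses import dataclass


@dataclass
class Finding:
    cve: str
    severity: str            # "low" | "medium" | "high" | "critical"
    code_exercised: bool     # observed executing during CI test runs?
    network_mitigated: bool  # e.g. port firewalled in production

RANK = ["low", "medium", "high", "critical"]


def effective_priority(f: Finding) -> str:
    level = RANK.index(f.severity)
    if not f.code_exercised:
        level -= 1  # likely unreachable: drop one notch
    if f.network_mitigated:
        level -= 1  # compensating control in front of it
    return RANK[max(level, 0)]


f = Finding("CVE-2021-0001", "critical",
            code_exercised=False, network_mitigated=True)
print(f.cve, "->", effective_priority(f))  # critical -> medium
```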

00:53:38

Yeah. I'll give the standard vendor magic-pixie-dust answer here: this is one area, at least at Anchore anyway, where machine learning can definitely help with some of this, right? Machine learning is very good at pattern recognition and classification, and so there are good opportunities to look at this kind of data, see where something has been marked a false positive across many other customers, and then bring that back to the wider customer base. There are some techniques where machine learning can certainly help, both on the correlation point, which is absolutely critical, and in seeing how users respond, how the human analyst classifies something as a false positive, and then using that aggregate information from many users to flag things up earlier. So there are some interesting things emerging on the technology side. Yeah.
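
As a toy illustration only (the magic-pixie-dust caveat applies), here is a sketch using scikit-learn to learn from past analyst dispositions which new findings are likely false positives. The features and training data are invented for illustration; a real system would need far richer signals and far more data.

```python
"""Sketch: predict likely false positives from prior analyst triage.
Requires scikit-learn; all data below is fabricated for illustration."""
from sklearn.linear_model import LogisticRegression

# Features per finding (all hypothetical):
#   [severity rank, package popularity score,
#    fraction of times this CVE/package pair was previously dismissed]
X = [[3, 0.9, 0.8], [3, 0.2, 0.1], [1, 0.5, 0.7], [2, 0.8, 0.9]]
y = [1, 0, 1, 1]  # 1 = analysts marked it a false positive

model = LogisticRegression().fit(X, y)
print("P(false positive):", model.predict_proba([[2, 0.7, 0.85]])[0][1])
```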

00:54:34

And I know we've spent a lot of time talking about vulnerabilities, but in the very short time we've got left: in terms of other things we should be looking at as we go through our CI/CD process from a security perspective, what else do we need to be considering, other than just vulnerabilities?

00:54:53

I'll take that one. So yeah, vulnerabilities are important, but configuration is also important, right? With the shift to policy as code, infrastructure as code, everything as code, pretty soon everything is headed towards code. That's just the direction of things. And so being able to do that additional scanning and say, look, has somebody opened a network port? Has somebody modified their Kubernetes instance to expose a service that really shouldn't be exposed? Beyond vulnerabilities, there are also security best practices and hardening techniques that need to be monitored and watched for. And that applies not only to the applications we're developing and pushing to production but, as we talked about earlier, all the way across the board: to the tools we're using to run our test coverage, to the SCM tool that we're using, to our workstations themselves, even to our IDE. Does that have open loopholes? You know, if I'm working from home and my home firewall is wide open and I've got a service running, maybe a home Apache server that's riddled with vulnerabilities

00:56:09

because it hasn't been updated, that's a huge entry point for an attacker to come in, gain access to my account, and exploit that. So there's a lot out there. I think you could easily start trying to boil the ocean, trying to cover all the security areas that could potentially be coverage gaps in an organization. The key is to prioritize them: start with the most likely, highest-coverage areas you can action on, to help reduce that security exposure.
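
As a small example of configuration scanning in the spirit Sam describes, here is a sketch, assuming PyYAML and Kubernetes manifests checked into the repository, that flags Services exposed outside the cluster. Dedicated policy engines do this far more thoroughly; this only shows what everything-as-code makes possible.

```python
"""Sketch: flag Kubernetes Services whose type exposes them outside
the cluster. Requires PyYAML; pass manifest paths as arguments."""
import sys
import yaml

EXPOSED_TYPES = {"LoadBalancer", "NodePort"}


def exposed_services(manifest_path: str) -> list[str]:
    flagged = []
    with open(manifest_path) as f:
        # A manifest file may contain several YAML documents.
        for doc in yaml.safe_load_all(f):
            if not doc or doc.get("kind") != "Service":
                continue
            if doc.get("spec", {}).get("type") in EXPOSED_TYPES:
                flagged.append(doc["metadata"]["name"])
    return flagged


if __name__ == "__main__":
    for path in sys.argv[1:]:
        for name in exposed_services(path):
            print(f"{path}: service '{name}' is reachable "
                  f"from outside the cluster")
```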

00:56:43

Yeah. I think things like SAST and DAST are very good at looking for an objective fact: this is a hole, this is bad, you should not do this. But the point Sam touched on with configuration is that every single software project has a set of best practices about how to secure it. And often, right now, they're scattered across the product documentation, or they're just sort of culturally known amongst its users, who absorb it over time. With this trend of everything going into code, literally every configuration option being managed statically inside your source code, it's going to lend itself to doing some good, objective security testing: look, have you got encryption turned on everywhere?

00:57:31

Have you changed all your defaults? That's a really good one. Like, have you changed the default password on all your tools? Especially if the password is actually held in a code file you can scan. So again, there's best practices as a generic category: the actual configuration of these applications. I think that's still the challenge we see with companies. I mean, I would hate to be running security for an organization now, because there's so much technology, so many different projects you've got to be an expert on. It's no longer just the LAMP stack, right? It's a huge cornucopia of tools, and understanding the security best practice for all of those is really hard. But that's the area where, I think, after you've done your vulnerabilities and your zero-trust networking, you should put your effort: are you doing the right things across all your applications? And trying to do that in an automated way, a scanner that detects when you're violating best practices, is maybe the next mountain for us to climb.
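
To illustrate Neil's point about defaults living in scannable code, here is a minimal sketch that scans configuration files for well-known default passwords. The default list and the pattern are illustrative, not exhaustive; real secret scanners use much broader rule sets.

```python
"""Sketch: flag well-known default passwords sitting in config files.
Pass file paths as arguments; the DEFAULTS set is illustrative only."""
import re
import sys

# Common factory defaults; extend with the defaults of the tools you run.
DEFAULTS = {"admin", "password", "changeme", "root"}
PATTERN = re.compile(r"""(?i)password\s*[:=]\s*['"]?(\w+)""")


def scan_file(path: str) -> None:
    with open(path, errors="ignore") as f:
        for lineno, line in enumerate(f, 1):
            for match in PATTERN.finditer(line):
                if match.group(1).lower() in DEFAULTS:
                    print(f"{path}:{lineno}: default credential "
                          f"'{match.group(1)}'")


if __name__ == "__main__":
    for p in sys.argv[1:]:
        scan_file(p)
```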

00:58:34

Great, guys, we've run out of time. It feels as though we could probably go for another hour, but alas, our time's coming up. So Sam, it's been an absolute pleasure; Neil, thank you so much. If anybody who's listening in wants to continue the discussion, please swing by the GitLab Slack channel, or the Anchore Slack channel, and we'll be very, very happy to pick up the conversation over there. Thank you for listening. I hope it was a worthwhile hour spent, and as I say, if you want to continue the conversation, please drop by the relevant Slack channels. Thank you.

00:59:09

Thank you.