Las Vegas 2020

The Dirty Truth of DevSecOps: Feedback Loops, Culture Clashes, and Bionic Adversaries

Join Dr. Stephen Magill (CEO at MuseDev) and Brian Fox (Co-founder and CTO at Sonatype) for a frank, interactive, and authentic discussion about DevSecOps. This VendorDome is not aimed at product pitches, but at honest discussion of what it takes for enterprises to continuously improve their DevSecOps practices, culture, and collaboration efforts.

Stephen and Brian will share insights from years of discussions and guidance they've shared with fellow CEOs, CTOs, and CISOs within the DevOps community. They will highlight lessons from enterprises on the leading edge of DevSecOps, discuss anti-patterns pursued with the best of intentions that delivered poor results, and invite attendees to shed light on epic failures and successes from their own enterprises.

This session is presented by Sonatype and MuseDev.

BF

Brian Fox

CTO, Sonatype

DS

Dr. Stephen Magill

CEO, MuseDev

DW

Derek Weeks

VP, Sonatype

Transcript

00:00:13

Hello, everyone at DevOps Enterprise Summit. I'm Derek Weeks, Vice President at Sonatype and co-founder of All Day DevOps. I am really excited to be here. This is my fourth DevOps Enterprise Summit, my fourth year of it, and I've also enjoyed going to London for a couple of the events over there. This is by far one of my favorite events of the year. I have the pleasure of being joined by Brian and Stephen today, who we'll get to in a second. We are going to be talking about the dirty truth of DevSecOps and the experience that Brian and Stephen have in this industry and this arena, talking with customers and people out in the community that they've spent time with over the years.

00:01:09

I think the really cool thing about this session is, one, we are live; this is not a recorded session, and we are taking your questions in the Slack channel, as we just posted. So Nick, yes, the Mad Max reference is right: VendorDome, Thunderdome. We're about to have a great discussion here. When you have questions on DevSecOps, post them live in the Slack channel; I have it right over here on my other screen, so I will see it there. Some of the dirty truths of DevSecOps we can talk about: tribal wars, if you will, between organizations; how these different organizations merge with one another, come together, and provide information to one another; what kind of training opportunities are there; and what are the CISOs asking these guys when they go and speak to teams of CISOs?

00:02:08

What are the developers asking or anticipating when we're talking about DevSecOps? The cool thing is we can get a CEO's view and a CTO's view through this conversation, so I think that will be an interesting dynamic as well. We'll also talk about how different practices related to DevSecOps have changed over the last five years or so, as it has become more of a buzzword, a topic of conversation, and a presentation topic at various conferences. With that, I am going to hand over to Brian for an introduction and a little bit about his background.

00:02:54

Thanks, Derek. Hi, everyone. My name is Brian Fox. I'm one of the co-founders and the CTO here at Sonatype. My background is nearly 20 years in software development: C, C++, and later Java. I was also heavily involved in Maven; I'm still on the Maven PMC, and I was the PMC chair for a number of years. So I have a lot of experience dealing with enterprise software as well as open source software and how those things intersect. In the early days of Sonatype, we did a lot of training and consulting around Maven, helping people uplift and modernize their build systems, and we saw a lot of companies struggling to make effective use of open source without introducing new risks. That led to our product portfolio and a lot of the things we're going to talk about today.

00:03:49

We're trying to help people leverage that open source and do things in a better way that doesn't introduce new security risks. Over those, what's it been, 13 years or so, the industry has moved along quite a bit. I think people generally recognize that there's risk in open source from a security aspect; in the early days that wasn't true. People would say, we don't have to worry about that because we have a security team and a firewall, so my stuff is cool, just don't use GPL. Right? So I think there's an interesting conversation to be had about how the industry has matured over that time. That's my background. Stephen, over to you.

00:04:36

Yeah, thanks. I'm Stephen Magill, CEO of MuseDev. I've been doing software security and program analysis research and tool building for over 15 years now, ever since I did my PhD work in that space. Over the last few years, I've been getting more and more interested in the practice of software: in getting advanced analysis tools into the hands of developers, really making security easier and lower friction, and just understanding more of the problems that developers face, that security teams face, and the interaction there. We've done joint research on that with the State of the Software Supply Chain report; Sonatype and MuseDev together partnered with Gene Kim and IT Revolution to do some analysis of how enterprises use open source and how enterprises approach governance of their own software production workflow.

00:05:34

At MuseDev, we focus a lot on the relationship between security teams and development teams, and on what can sometimes become an adversarial relationship, right? Where the compliance team, the change advisory board, and so forth are viewed as this big blocker. I think a theme of this conference, something we've found a lot in this community, is that when you have those processes and workflows that people grow to hate, it can often be fixed with the right application of automation, the right sort of standardization, and the right platform in place. So we've really focused on building a platform for continuous assurance, which is how we refer to this process of automating your governance workflows, automating compliance, and really letting tools manage the workflows and making things automated

00:06:32

and self-serve from a development standpoint, so that you don't have to have these complicated human-driven processes that slow things down. So that's what we focus on: delivering, via that process, really deep insights into the code your dev teams are working on. Sonatype focuses on open source, as Brian was talking about, which is a huge risk surface; there is so much you need to understand in terms of open source usage and how that impacts the security and compliance of your development process. And then we look at the code itself, and at what you're introducing from a security, reliability, and performance standpoint in terms of potential bugs and issues in your software development process.

00:07:15

Yeah. So, Brian, or no, I think I'm going to go to Stephen on this. Why VendorDome? Why did Sonatype and MuseDev come together on this VendorDome? Stephen, we'll go to you first, and then Brian, we'll go to you.

00:07:29

Sure. Yeah. I think it's because of those complementary focus areas. At MuseDev we're very focused on understanding the code that you're writing, and Sonatype focuses on both analysis and all the data that feeds into that analysis on the open source side: what does the vulnerability surface look like, what CVE reports have there been, and what's the impact of those? Brian, maybe you want to say more about how you can combine these two to get even better insight.

00:08:00

Yeah. What we're doing is leveraging the call flow and data flow technology from MuseDev in our product. Coupled with the bill of materials and the known vulnerabilities that we find in it, we can then produce prioritized lists for people, to say: not only are you using these components that have these vulnerabilities, but using the MuseDev technology, we can see that your code paths lead directly to those vulnerabilities, so you're more likely to be directly exploitable via those specific vulnerabilities. That's how we're working together to take these two pieces of technology and combine them for better outcomes for our users.
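The combination Brian describes, a bill of materials plus call flow analysis, amounts to a reachability check over a call graph. Here is a minimal sketch of the idea, assuming a toy call graph and hypothetical function names (`lib.unsafe_deserialize` and friends are made up for illustration, not a real API):

```python
# Sketch: flag only those known-vulnerable functions that the application
# can actually reach from its entry points, via BFS over a call graph.
from collections import deque

def reachable(call_graph, entry_points):
    """Return every function reachable from the given entry points."""
    seen, queue = set(entry_points), deque(entry_points)
    while queue:
        fn = queue.popleft()
        for callee in call_graph.get(fn, []):
            if callee not in seen:
                seen.add(callee)
                queue.append(callee)
    return seen

def prioritize(call_graph, entry_points, vulnerable_functions):
    """Vulnerable functions that are reachable, i.e. the ones to fix first."""
    return sorted(reachable(call_graph, entry_points) & vulnerable_functions)

call_graph = {
    "app.main": ["lib.parse", "lib.render"],
    "lib.parse": ["lib.unsafe_deserialize"],   # a path to a known CVE
}
hits = prioritize(call_graph, {"app.main"},
                  {"lib.unsafe_deserialize", "lib.unused_vuln"})
print(hits)  # only the reachable vulnerability is flagged
```

A real implementation would build the call graph from bytecode or source and take the vulnerable-function list from CVE data; the point of the sketch is the intersection of "reachable" with "known vulnerable" as a prioritization signal.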

00:08:45

So this is the fun part of the session. I had a couple of staged questions just in case the channel was a little slow to pick up, but Denver Martin out there had a really good question to start off. I'm going to go to Brian on this first, but Stephen, certainly chime in, as I'm sure you have experience in this space as well. Denver asks: how do you get InfoSec (information security) to help fund projects like web application firewalls and other crossover items that DevOps wants to use, given that it's an additional cost? So this is the situation: I have a DevOps team, I know we need security, maybe we don't have as much budget, but the security guys have a ton of budget over there. Can we go and borrow or leverage some of that? How does that conversation go, and any advice for Denver and others out there?

00:09:52

Yeah, it's a great question. We did some roundtables last year with primarily CISOs who were describing the problem that they can't get more money to do better security, where better security means hiring more people to do more scanning to help development teams. In some of the more enlightened conversations we had, those CISOs recognized that they could use their budget to help fund the platform. In some cases, at some big banks, we had seen that the security organization had basically funded the entire SDLC, the modernization of the tool chain, because they recognized that by helping development do a better job of picking components, turning components around, and doing releases more quickly, not only could development do a better job of fixing memory leaks and bugs and things like that, but also, when those bugs turned out to be security vulnerabilities, they could turn those fixes around in the same way.

00:11:03

Right? So it's about being able to make that case to your security team: if we work together, everybody wins. There isn't a natural reason why you need to think about security vulnerabilities as second-class citizens. They're ultimately just bugs: bugs that have different impacts, sometimes profound impacts, but from a development perspective, the way you fix them is often the same as for other types of defects. Recognizing that, and helping to combine those budgets, can provide much better outcomes than playing defense and spending more money trying to chase the stuff out after the software is built, which ultimately leads to a bigger battle between the two tribes, and that doesn't work. So I think it's a great question. It can be hard to change people's minds toward a more progressive stance, but when it happens, we've seen it be tremendously effective.

00:12:14

So, Stephen, if development or DevOps wants security tools, and they're going to the security organization saying, I actually want some security tools, I want to embed more security into my development life cycle, isn't that security's dream? Development never wants to use security tools; why wouldn't you be behind this? Maybe you can shed some light on conversations you've had in that regard.

00:12:50

I think that's right. And we see it more and more as teams become more enlightened, and as security especially recognizes that the better the relationship they can have with the development team, the smoother the process can be. In a sense, that's what DevSecOps is all about. It's not just tools; it's how you set up a culture where security is on everyone's mind. So yes, developers have security tools that they want to put in place, and opinions about how those should integrate into their workflow and be part of their daily work. Security has their opinions there too. And everyone's working together to come up with a process that everyone can unify behind, one that addresses everyone's needs, right?

00:13:40

Because there are a lot of different stakeholders in this process: security, developers, compliance, audit, QA. In terms of what we put together, the analysis that we focus on, security is an important piece of it, but we also focus more broadly than that, because developers care a lot about reliability; no one wants to get paged at 2:00 AM because something went down, right? If you can address all of these issues with one platform, that's obviously the best. And I love Brian's point that even things you don't necessarily think of as traditional security issues can certainly become security issues in the right application and the right context. If you care about denial of service, then performance issues suddenly become a security issue.

00:14:32

Yeah. Denver, thank you for your question; hopefully we answered it. If you have follow-ups, certainly chime in. I see a number of people chatting in the channel. I'm going to go to Mark Fuller's question next. This is a really interesting question, and I think there will be different perspectives on how to answer it, because I don't think it's tightly tied to either Sonatype's or MuseDev's portfolio per se. Mark says: many of the latest DevSecOps tools work poorly, or not at all, with legacy systems, for example COBOL on a mainframe. At the same time, these systems run critical applications that we use every day. Do you see this as somewhat of an enforced strangler pattern, or do you think DevSecOps tools will fill in the gap and keep legacy systems running for years to come?

00:15:26

This is part of a discussion we've had over time in other conversations: there's legacy infrastructure out there, but there are also legacy security tools out there, and then there's this new, modern DevOps practice, with new tools being developed specifically to address it. Do these stay as two separate tribes, two separate ecosystems or parts of your infrastructure, or does one kind of expand to fill in the gaps? Do the legacy guys catch up in the security game and come into DevOps practices to expand the solution offerings, or does a more modern DevSecOps tool expand to cover legacy environments? Either one of you want to go first on that?

00:16:24

So, my thoughts there are: you certainly can do DevOps in a mainframe context. Rosalind Radcliffe at IBM has spoken at DOES events about doing just that and has had some great talks there. So I think DevOps practices, and the value those can bring from a reliability, security, and speed-of-delivery standpoint, are all within reach. I guess I'm less optimistic that we'll see a real flowering of many different tools targeting, say, security issues in the COBOL ecosystem; it may just not be a large enough focus area. But I think there's a lot of value you can get from very narrow, specialized tools, right?

00:17:18

And so we've actually talked to a lot of companies where they see a gap in the tooling space, a gap in the market, and their developers write a linter or something themselves: some simple checking of properties of the code that they care about, given their coding style and patterns. You can certainly do that sort of thing even if you're on more of a legacy technology base. And then, if you can adopt those DevOps practices and have some sort of DevOps platform, you can start to incorporate those tools the same way you would a tool from a vendor.
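The kind of home-grown linter Stephen mentions can be very small. A minimal sketch, with two made-up team-specific rules (the patterns and messages are illustrative assumptions, not anyone's real policy):

```python
# Sketch: a tiny home-grown linter that scans source lines against a few
# team-specific rules, the sort of narrow tool a team might write when no
# vendor covers their legacy stack.
import re

RULES = [
    (re.compile(r"TODO"), "unresolved TODO"),
    (re.compile(r"password\s*=", re.IGNORECASE), "possible hardcoded credential"),
]

def lint(source):
    """Return (line number, message) pairs for every rule hit."""
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        for pattern, message in RULES:
            if pattern.search(line):
                findings.append((lineno, message))
    return findings

sample = "user = 'admin'\npassword = 'hunter2'  # TODO rotate"
for lineno, message in lint(sample):
    print(f"line {lineno}: {message}")
```

Dropping a script like this into a CI pipeline is how it gets incorporated "the same way you would a tool from a vendor": the build fails when `lint` returns findings.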

00:17:56

Brian, what are your thoughts? Is Sonatype going to do anything to cover those kinds of legacy environments, or is covering the more modern DevSecOps practices challenge enough?

00:18:13

Yeah. I mean, I think there's certainly a market element that works against you, right? Nobody's going to take a new tool seriously if the only thing it supports is COBOL, so everybody has to start with the modern languages, which means you're working down the long tail. I think the other element is that with some of the older languages, and this applies not just to older languages but even to really old Java, the way the software is being developed isn't really compatible with a more modern, agile approach, not without a huge lift that may not be warranted for an older application that's largely in maintenance mode. Specifically on COBOL, I don't think there are a lot of open source modules that people are out there picking,

00:18:59

so you don't have that same problem. You certainly have challenges with vulnerabilities in the code, where other tools might be better suited. So I think that's the problem, and it's not just a problem with legacy systems: across all of these ecosystems, the ways that developers work with them are just different. You try to find common patterns and build tools for that, which creates this dynamic. If you're supporting a significant outdated legacy portfolio, I honestly don't think it gets easier.

00:19:38

Yeah. So maybe, as Andrew Barber suggests, the dirty truth of DevSecOps is that we don't have a positive answer for everything. One of the other questions that came in, which got a couple of thumbs up, was from Steve Jones. He asked: is data privacy and data security classification becoming more important to your organizations and customers? So this is about: where is the data? How are we protecting it? How are we protecting secrets within our infrastructure? How are we protecting the customer data that's in these applications? How much are data privacy and data security becoming an issue or a standing point in these conversations?

00:20:39

Yeah. We hear a lot about that. It's a huge compliance issue, for one thing, right? If you're dealing with credit card information or customer data and you're PCI compliant, there are requirements in terms of how you handle it. So we've certainly done a lot of looking into what you can check when it comes to the code: how you can enforce policies during development on, say, logging of particular transactions or encryption of sensitive data, and ensuring that certain types of data don't go to certain endpoints. That's all doable, and it's definitely a best practice I would recommend, making sure that's in your code scanning pipeline and that you are looking for those sorts of things; the key phrase there is information flow analysis or data flow analysis. And then defense in depth is important too: this shouldn't be the only thing you rely on, but it can definitely be an important part of the strategy.
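The information flow idea Stephen describes can be illustrated with a very small dynamic taint sketch. Everything here is hypothetical (the `Tainted` marker, the `mask` declassifier, the `log` sink); real information flow analysis is usually static and far more involved:

```python
# Sketch: propagate a "sensitive" taint on values and refuse to let a
# tainted value reach a disallowed sink (here, a plaintext log).
class Tainted(str):
    """A string marked as sensitive, e.g. a card number."""

def mask(value):
    """Declassify: masked values are considered safe to log."""
    return "****"

def log(value):
    """A sink with a policy check: sensitive data must not be logged."""
    if isinstance(value, Tainted):
        raise ValueError("policy violation: sensitive data reached the log sink")
    print("LOG:", value)

card = Tainted("4111-1111-1111-1111")
log(mask(card))   # fine: declassified before logging
try:
    log(card)     # blocked: raw card number heading to the log
except ValueError as err:
    print(err)
```

A static analyzer does the same bookkeeping at compile time over all paths, which is what lets it run in a scanning pipeline without executing the code.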

00:21:42

Brian, anything to add there, or should we go to the next question?

00:21:47

It's a concern. It doesn't directly come into play with the things we're doing specifically, but it's definitely a concern both for our customers and for how we're handling and thinking about their data. That's about all I'd have to add, I guess.

00:22:06

Yeah. Part of it is that at Sonatype, we do collect data from our customers, but we also have a policy that we don't keep customer data around for very long, because we're sometimes touching on security-related issues or security vulnerabilities in things they own. So we have very explicit policies as a vendor that when we're collecting specific customer information, we don't hold onto it in house for long, because we actually see holding it too long as a liability. Say a customer used a particular open source component in their code and we've known about that for the last two years; if anyone got ahold of that, it would be difficult. So we try to purge that information on a frequent, short-term window, so that we don't hold it and leave our customers more vulnerable as a result.

00:23:11

This brings up one thing I want to ask the audience about. As you say, Derek, the best way to protect data is to not hang onto the raw data at all, right? I'm curious if anyone is using differential privacy. A few years ago it was just a research topic in the academic community, but it's been deployed by Google and Apple as part of their privacy protection schemes, for Chrome data collection and Apple device data collection. I'm curious whether it's making its way more broadly into the ecosystem of things people look at when it comes to protecting sensitive data. So if you've looked into differential privacy, or you're using differential privacy, message me; I'd be interested to hear about it.
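For readers who haven't met differential privacy: the simplest mechanism is randomized response, the idea behind deployments like Chrome's RAPPOR. Each user flips coins before answering, so no individual response is trustworthy, yet the aggregate rate is still recoverable. A minimal sketch (the 50/50 coin probabilities are one common choice, not the only one):

```python
# Sketch: randomized response. Each respondent answers honestly with
# probability 0.5, and uniformly at random otherwise, so any single
# answer is deniable while the population rate can still be estimated.
import random

def randomized_response(truth, rng):
    if rng.random() < 0.5:
        return truth              # honest answer half the time
    return rng.random() < 0.5     # otherwise a uniformly random answer

def estimate_rate(responses):
    # P(yes) = 0.5 * true_rate + 0.25, so invert that relationship.
    observed = sum(responses) / len(responses)
    return 2 * observed - 0.5

rng = random.Random(42)
true_answers = [i < 300 for i in range(1000)]   # true rate: 30%
responses = [randomized_response(a, rng) for a in true_answers]
print(round(estimate_rate(responses), 2))       # close to 0.30
```

The vendor never stores a reliable per-user fact, which is exactly the "don't hang onto the raw data" posture, while aggregate analytics still work.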

00:23:59

Yeah. Okay. I'm going to go to a question from RS. If I'm interpreting it right, he says: I'm usually seeing security teams going to development saying, how can we help out? We have tools, we have practices, we have training, and so forth. But unless he's mistaken, he's not seeing a lot of development teams going to security saying, hey, we actually need your help; we want to engage you in this. Do you see it as kind of a one-direction thing, where development is controlling and building these applications and the infrastructure they need, and security has to come to them? Or are you seeing situations where development is actually going to the security team saying, help, we don't know exactly what to do, but we need some help building security into our practices?

00:25:02

I see these challenges a lot. I think there's a history that we have to recognize, even if we wish it wasn't true, that a lot of security teams in organizations are perceived as friction points or blockers getting in the way of getting things done. And there's been a history of bad tools producing lots of findings, right? I've heard examples of people saying, well, we had a mandate from security that we had to fix everything our SAST tool flagged, which is just ridiculous on its face if you understand that the tools are not perfect. If somebody is using that data incorrectly, weaponizing it, and then expecting development to fix all of these things, it's going to create new problems. So there's a history of this.

00:25:52

So it's not surprising to me that development teams don't go asking security; they'd rather just fly under the radar. But the reality is, like I talked about before, working together on these things is what provides the best outcomes. Fortunately, I think we've seen over the last handful of years specifically that development has become generally more accepting and more aware that they can do a better job around component management, which will reduce the number of security findings they have to deal with later. But it almost has to come from the top, because when somebody can say security is friction, and they're just slowing us down and stopping us from doing our job, there's an implied definition that their job doesn't include security, which is just wrong, right? Many development organizations have, for whatever reason, a culture where the only things they're goaled and measured on are producing features, shipping stuff, and fixing bugs. If the definition and visibility of the job doesn't include all of these other dimensions, then that's what you're going to get: you get what you measure, right? So there can be subtle misalignments that end up as these magnified culture wars. I see this in so many different places. I'm sure, Stephen, you've seen similar things, right?

00:27:15

Yeah. And I think one really important area, and I don't know how much this is happening (again, I'd be interested in stories here), but I think the most important time for developers to go to security is before any code has even been written. When you're planning a new feature or a new component, you want to make sure that, from an architectural and design perspective, you're setting things up to be secure, to be easily auditable, to have separation of concerns and a small piece of critical code that handles security. There are a lot of aspects of that initial design process that security has real insight into, and it can be really useful to cover those early, rather than getting to the end and finding out: oh, because of how we designed this API, it's actually really hard to meet our requirements.

00:28:10

So, I'm going to try to interpret Brad Appleton's question here. He asks: is anyone having any luck with a combination of GitOps and diff ops as a means of eliminating manual change approval bottlenecks? I'm interpreting this as: where you're combining security of code with security as code, along with infrastructure as code, and these concepts merge together. We care about the application and we care about the infrastructure; can I do this in a way that eliminates more manual checks and approvals within the infrastructure? Are any of these concepts coming together out there, maybe with fewer solutions, vendors cooperating together, et cetera, or even teams cooperating together, not just vendors?

00:29:15

Yeah. I think infrastructure as code helps a lot. In terms of DevOps and GitOps, the thing we focus on the most in our recommendations, and in how Muse works, is exactly a diff-focused workflow, right? You look at the code changes that are coming in, you look at the security issues and other sorts of issues potentially being introduced by that code, and you handle it in the moment, as the code is changing, so that you don't have this lengthy process at the end. You can keep that process certified and keep your release cadence up without hitting these barriers that slow down work. And I think infrastructure as code helps too, because it brings more of the platform and the software within the purview of that diff-oriented workflow.
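The diff-oriented workflow Stephen describes can be sketched as a pull-request gate: analyze only the files a change touches and block the merge on new findings. The diff parsing here is deliberately simplified, and `analyze` is a hypothetical stand-in for a real analyzer:

```python
# Sketch: gate a pull request by running analysis only over the files
# changed in the diff, instead of re-auditing the whole codebase later.
def changed_files(diff):
    """Extract changed paths from a unified diff (simplified: '+++ b/...')."""
    return [line.split()[1][2:] for line in diff.splitlines()
            if line.startswith("+++ ") and not line.endswith("/dev/null")]

def gate(diff, analyze):
    """Run the analyzer per changed file; an empty result means clear to merge."""
    findings = {}
    for path in changed_files(diff):
        issues = analyze(path)
        if issues:
            findings[path] = issues
    return findings

diff = "+++ b/app/payments.py\n+++ b/docs/README.md"

def analyze(path):  # stand-in for a real analyzer
    return ["possible SQL injection"] if path.endswith("payments.py") else []

print(gate(diff, analyze))
```

Because the check runs per diff, the expensive end-of-release review shrinks to reviewing a stream of small, already-analyzed changes, which is the mechanism behind keeping the release cadence up.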

00:30:11

Brian, anything to add?

00:30:13

Yeah, I would say the infrastructure as code element is the newest part of this dimension. For us it was licensing first, then security, then architecture and quality, and now infrastructure as code: these are all dimensions of things that ultimately development needs to be concerned about, and they're best solved within development. Especially if the new paradigm for how you're building and deploying your applications is in the cloud, then something like infrastructure as code is kind of your build system. There's no package, right? You don't produce an installer or a zip or a WAR anymore; you produce configuration in Amazon that affects the running of your application, which means that in order to test it, development has to do it. The way to solve all of those problems is to provide the insight, the guardrails, and the information to development, as real-time as possible, in the environment they're working in.

00:31:16

Coming in later with tools, whether it's a security SAST tool or an infrastructure as code tool, that basically say, oh, you got it wrong, do it again, is both ineffective and super frustrating, right? And especially in an infrastructure as code scenario, by the time you scan it after the fact, it might be too late. It might not even manifest as a security issue, although we see that all the time, where companies have databases that are wide open to the world, or S3 buckets with data dumps in them for anybody to consume; that certainly could be caught with some type of check up front. But it could also be: did you really mean to spin up 10,000 extra-large instances and leave them idle and cost us a ton of money? Those kinds of checks done early can prevent both security problems and simple misconfiguration problems. But again, it has to be done in a way that works with the agile development environment.
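The two checks Brian names, world-readable buckets and accidentally huge fleets, can be caught by a pre-deploy pass over the infrastructure-as-code definition itself. A minimal sketch, using a made-up in-memory resource format rather than any real Terraform or CloudFormation schema:

```python
# Sketch: scan an infrastructure-as-code definition for open buckets and
# oversized instance groups before anything is actually provisioned.
def check_config(resources):
    findings = []
    for name, res in resources.items():
        if res.get("type") == "s3_bucket" and res.get("public_read"):
            findings.append(f"{name}: bucket is world-readable")
        if res.get("type") == "instance_group" and res.get("count", 0) > 100:
            findings.append(f"{name}: {res['count']} instances, did you mean that?")
    return findings

resources = {
    "data_dumps": {"type": "s3_bucket", "public_read": True},
    "workers":    {"type": "instance_group", "count": 10000, "size": "xlarge"},
}
for finding in check_config(resources):
    print(finding)
```

Running this in the same diff-time gate as the application code checks is what makes it "early": the misconfiguration never reaches the cloud, instead of being discovered by an after-the-fact scan or a surprise bill.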

00:32:14

I'm going to go to one of the questions that I had before; I always find this kind of interesting. You two are out talking to people in the market all the time, having conversations around how to bring more security into DevOps practices. Stephen, is there a standout conversation you've had on this topic, whether with a developer or someone at a conference you sat down with and happened to have an hour-and-a-half conversation with because you were so enthralled by the topic? The one you remember as the standout conversation?

00:33:00

Yeah, there are a few things that stand out to me, actually. There are a number of conversations I've had in this area that all fall under the umbrella of automated governance. There are now a lot of organizations in the DevOps community where I've talked to the leaders and heard about their efforts to automate their governance process and improve it. I think that trend is the most direct attempt to address this at an organizational level: baking security into the entire software development process. And the other thing that's encouraging is that these efforts I hear about are largely driven by developer or DevOps teams.

00:33:56

Right. They're bringing in security, they're bringing in the auditors, and they're making sure that the workflow works for everybody, but in particular they're making sure that it works for developers, which is, strangely, an often neglected part of the process: making sure these processes are actually helping developers. There are some great examples there. Capital One has done work on that, PNC Bank too, and there are others in the community that are really pushing this forward. There are some great DevOps Enterprise Summit talks; I'll try to find and send out some links. But it's really the fact that I've had so many conversations in that general area that I find exciting and that gives me hope.

00:34:40

Yeah, you're talking about security working for developers. I'll try to find the link to this, but I was reading a story a developer at Google had written up about how they do security around their code and how they check for security vulnerabilities in it. They were taking open-source products and bringing them in, and if they worked fine, they stayed in. If they didn't, they went to the open-source projects and said, here's what we need you to improve, or they ended up building the tools themselves. It was this whole process of getting security to work the way development wants it, and it didn't involve the security team at all. It was the development teams saying: we're going to do this, and here's how we want the tools to behave. So when a tool was kicking out a bunch of false positives, they said, this tool is noise. If the vendor or open-source project can improve it, fine; if we can improve it, fine; if not, we're just going to turn it off, because it's a noisy thing we don't want to deal with. But I'll try to find that article.

00:35:50

Yeah, I think I know the article you're talking about; I'll send that out in Slack too. It's really great, and there are a couple of key things that always stood out to me about how they approach it. One, which you touched on, is really being data-driven: paying attention to which of the tools you're running are providing value and which aren't; when a particular tool reports a result, how likely developers are to fix it; and really tracking that and using it to adjust which tools you run and how they're configured. And then taking a platform approach and running multiple tools. I think in that article they say that as of the time it was written, which was a couple of years ago now, they were up to 146 different tools running on their code analysis platform, which is just incredible. A lot of those were in-house tools developed for various niche purposes. But it goes to show that if you have the platform right, and you're data-driven in how you turn tools on and off, you can really do this at scale: at Google scale.
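
The data-driven feedback loop Stephen describes, tracking whether developers actually fix what each tool reports and disabling the noisy ones, can be sketched like this. The tool names, data, and threshold are illustrative:

```python
# Sketch of a data-driven tool-selection loop: measure each analysis tool's
# "fix rate" (fraction of findings developers actually fixed) and keep only
# tools that clear a threshold. All names and numbers are made up.

def fix_rate(results):
    """Fraction of a tool's findings that developers fixed."""
    if not results:
        return 0.0
    return sum(1 for r in results if r["fixed"]) / len(results)

def select_tools(tool_results, min_rate=0.5):
    """Partition tools into (enabled, disabled) by fix rate."""
    enabled, disabled = [], []
    for tool, results in tool_results.items():
        (enabled if fix_rate(results) >= min_rate else disabled).append(tool)
    return enabled, disabled

# Hypothetical history: one high-signal tool, one mostly-ignored tool.
history = {
    "sql-injection-checker": [{"fixed": True}] * 9 + [{"fixed": False}],
    "style-nitpicker": [{"fixed": False}] * 8 + [{"fixed": True}] * 2,
}
enabled, disabled = select_tools(history)
```

The design point is that the decision to run a tool is an empirical one, revisited continuously, rather than a one-time procurement choice.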

00:36:56

So Brian, I'll give you a choice of questions to answer. One is the standout conversation, because I know you were thinking about what your standout conversation was. But there's also a question from an attendee about AI and ML. This person asked: has there been any luck with ChatOps, automating end-to-end pipelines by integrating chat bots or any AI/ML? So take either that explicit example of ChatOps with AI or ML, or, more generally, how have you seen machine learning and artificial intelligence applied to making security practices better within a DevSecOps context? I know we're doing work at Sonatype applying AI and ML to different scenarios, so if you don't have a specific answer for the ChatOps piece, I think it would be interesting for the audience to hear how we're engineering AI and ML into our architecture.

00:38:09

All right, how about I do both? A standout conversation: I kind of covered it in the beginning, around the CISO who actually paid for the whole central tooling; that was surprising and made a lot of sense. A different conversation was interesting a couple of years ago at one of our events; I think it was in Dallas. We'd been so used to talking about the challenge with open-source components being that development often doesn't know: they don't have the tools to easily help them understand that they should be making better choices around their dependencies. Why are you still using Struts after all these years? "Well, I didn't know I was vulnerable," right? Those kinds of things. Then I had an interesting conversation with somebody who was more on the ops side of things, and they said: look, we've got all of these intelligence feeds that come in and tell us when there's a new vulnerability in one of these things, take Commons Collections, but our problem is we have no idea what's inside the application.

00:39:06

They were in the situation of being tasked with defending the perimeter, and the analogy was: I'm defending this perimeter, and the bad guys are shooting at these tanks, and I don't know if the tanks are full of water or full of gasoline. I literally have no idea. It was a really interesting insight into the other side of things. And that's neat, because our tools can help provide that depth of visibility into what actually is in the application: you should be looking for cross-site scripting or not, SQL injection or not. It can make it much easier for them to do a better job of defending the perimeter when they realize where the soft spots actually are.

00:39:51

So that was a really interesting standout conversation that enlightened me. In terms of the ML/AI challenges, one of the things we've seen in the past couple of years is this new dimension of intentionally malicious components being injected into the software supply chain. There are a ton of examples that I've been talking about for a while, and it's new and novel. Rather than waiting for vulnerabilities to be found and disclosed and then trying to exploit them at scale, which is the Struts/Equifax problem we've been dealing with for years, attackers are introducing these vulnerabilities on purpose. That in and of itself isn't necessarily new, but what seems new to me is that a lot of these attacks are focused on the development environment itself.

00:40:45

They're trying to exploit the software development environment, either to use it to replicate malware, like you saw with the Octopus Scanner malware, or to steal credentials and other important things off of the development environment. To make it real for people, I've started to describe it this way: in DevOps and DevSecOps we talk a lot about Deming and the principles there, and those are all great principles, but we have to recognize that they were focused on the end product, the car: making a faster assembly line that could produce cars that were cheaper, better, higher quality. What it was not doing was trying to protect the factory against some kind of malicious attack, either a bad actor or a part shipped in that blew up the factory or contaminated it.

00:41:34

So when you think about security, you need to think about it in a different way. What we've had to do is use ML/AI technologies to look at the new components being released into the wild and try to understand attributes about them. Who released it? Have they worked on this before? Did they add a dependency that's never been seen before? There are so many different attributes you can collect that it's impossible to build heuristics around them, rules like "if this happens, then this release of this component is inherently dangerous." But the machines are really good at detecting it, just like the credit card companies are at detecting credit card fraud at the point of sale and sending you a text: "Is this you? Yes or no?" So we've been trying to integrate technologies like that to give our customers an instant adjudication on a new release: to understand whether it's fishy, and whether maybe they should wait until somebody can confirm if it's malicious or just something we've not seen before.
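
A toy illustration of the release-vetting idea Brian describes: extract attributes of a new component release and score how unusual it looks. A real system would feed such features to a trained model; the features, weights, and data below are invented purely to show the shape of the approach:

```python
# Toy release-vetting sketch: turn a new release into boolean features
# against the package's history, then compute a suspicion score. The
# weights stand in for a trained model and carry no real meaning.

SUSPICION_WEIGHTS = {
    "new_publisher": 0.4,        # publisher never released this package
    "new_dependency": 0.3,       # adds a never-before-seen dependency
    "install_script_added": 0.3  # release newly adds an install-time script
}

def extract_features(release, history):
    """Compare a release against the package's known history."""
    return {
        "new_publisher": release["publisher"] not in history["publishers"],
        "new_dependency": any(
            d not in history["dependencies"] for d in release["dependencies"]
        ),
        "install_script_added": release.get("has_install_script", False)
        and not history.get("had_install_script", False),
    }

def suspicion_score(features):
    return sum(SUSPICION_WEIGHTS[k] for k, v in features.items() if v)

history = {"publishers": {"alice"}, "dependencies": {"lodash"},
           "had_install_script": False}
release = {"publisher": "mallory",
           "dependencies": ["lodash", "evil-helper"],
           "has_install_script": True}
score = suspicion_score(extract_features(release, history))
# A high score would quarantine the release pending human review.
```

The point is the workflow, an automatic adjudication that can say "wait for a human" the moment a release appears, not the particular scoring function.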

00:42:36

Right. So that's what we've been doing with ML/AI techniques.

00:42:41

Yeah, that's super cool, both in that scenario and the others you explained. Stephen, I'm going to throw a question out to you that I've heard pontificated around the industry: is security going the way of QA? We don't have QA departments anymore because DevOps figured out how to automate QA and build testing. Is security going that same way? Does security as a practice, or application security as a practice, disappear into DevOps or development?

00:43:22

I'll say I hope we're moving in that direction for the portion of security that can be addressed by automated tooling. I don't think there's a lot of value in a tool reporting something to a security team that then has to file an issue with the dev team and participate in a prioritization process to get something fixed. There's no reason a result can't be communicated directly to the developer, in a way that makes sense to them, and enable a fix without all those people being involved. So my hope is that security teams shift, and I think security professionals by and large hope this happens as well. They don't want to be combing through thousand-page lists of results and filing issues.

00:44:14

They'd rather be focused on pen testing, red teaming, architectural decisions, and things like that. So my hope is that we get the tooling to the point where those teams can focus on those sorts of issues. And to the ChatOps question: I think a good example of this, and the Google paper mentions it too, is the process of reporting issues directly to developers in the code review system. That really is one of these key areas where the work is conversation-driven; it's a social process. The team is coming together to decide what changes need to be made to the code before it's merged and deployed upstream. So it's the perfect place for automated tool results to participate, if you can do it in a low-noise, very actionable manner. We always try to deliver results there.
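
Stephen's point about low-noise, actionable results in code review comes down to a simple filter: only surface findings that clear a severity bar and touch lines the pull request actually changed. The finding and diff shapes here are hypothetical, not any specific code-review API:

```python
# Sketch of low-noise code-review reporting: drop low-severity findings and
# findings on lines the PR didn't touch, so every comment is actionable in
# the current conversation. Data structures are illustrative.

def changed_lines(diff):
    """Map file path -> set of line numbers added/changed by the PR."""
    return {f["path"]: set(f["added_lines"]) for f in diff}

def review_comments(findings, diff, min_severity=2):
    touched = changed_lines(diff)
    comments = []
    for f in findings:
        if f["severity"] < min_severity:
            continue  # filter out nit-level noise
        if f["line"] in touched.get(f["path"], set()):
            comments.append(f"{f['path']}:{f['line']} {f['message']}")
    return comments

diff = [{"path": "app.py", "added_lines": [10, 11, 12]}]
findings = [
    {"path": "app.py", "line": 11, "severity": 3,
     "message": "SQL built by string concatenation"},
    {"path": "app.py", "line": 200, "severity": 3,
     "message": "pre-existing issue, untouched by this PR"},
    {"path": "app.py", "line": 10, "severity": 1,
     "message": "style nit"},
]
comments = review_comments(findings, diff)
```

Of the three findings, only the high-severity one on a changed line survives, which is exactly the "low-noise, actionable" property being described.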

00:45:08

Okay, Brian, I'm going to shift a question to you. This is an interesting one. I'm not going to read the whole thing, but: is there a place for ChatOps where an application vulnerability comes up, ChatOps surfaces that vulnerability in the code to a developer, and AI/ML supports it by saying, hey, we've spotted this vulnerability and it looks like there's a potential fix out there? Is ChatOps the trigger for remediating it, where a problem comes up, the computer tells me there's a potential resolution path, and I just click accept and move on? Do we see that as the path to automated remediation, where machines are just remediating these code vulnerabilities for us? Or is a developer always involved, verifying the change before we place a new version of an open-source component in the code, with AI/ML intelligent enough to know there's not a breaking change in that upgrade?

00:46:33

Yeah, I think the natural conclusion is that you get to a level of full automation for certain things, but the question becomes: how do we trust it? And I think ChatOps is that mechanism. I've read the story of elevators: back in the day, when they first installed the automated systems for elevators, people didn't trust them. So what did they do? They put a bellhop on the elevator to push the button for you, but that's literally all he did; he wasn't driving the elevator anymore. Eventually it got to the point where people said: why is there just a guy here pushing the button? I can push the button myself. It was about us as humans trusting the new technology. In the beginning it felt too much like magic; we didn't believe it. But when it becomes natural and obvious and redundant, then you get to full automation. So I think the ChatOps element in the pull request is the bellhop in between the technology and the point where we just say: yeah, that's obvious, machine, you're right, just do it for me.

00:47:40

That's a great story. I hadn't heard that one before, but it's a great perspective.

00:47:49

It makes you wonder how many things we do now that used to be someone else pushing the button for us.

00:47:56

Yeah. So, a question I had. Brian, you touched on this a little earlier, but I've said that part of why DevSecOps and security are even around is that we have adversaries. So what are adversaries doing that we need to be aware of? There's all this work on bringing security and dev teams closer together, and on using the new evolution of tools in the DevOps space to find vulnerabilities. But outside our walls, what's a crafty move by adversaries that you've heard of lately that we need to be aware of? And Stephen, if you want to chime in after Brian, please do.

00:48:44

Yeah, I think it's a lot of the stuff I was talking about before: recognizing that there is this tribal warfare, that security is often focused on defending the end result and not defending the factory, and then exploiting that. If you can insert some malicious code into a component, especially in an ecosystem where people tend to grab the latest version, like npm, then the moment you get it in there, you instantly have users running your bad code. And if you know that most security practices might run a scan daily or weekly, or only just before release, that's a long window for a soft target to sit unprotected, where that stuff can go undetected. That's why we're trying to use the ML/AI technology to detect that something's fishy and prevent it up front, without putting in place a blanket rule that you can't use any code that's less than, pick a number, a week or a month old. How old does something have to be before you can trust it, across the board?

00:49:51

There is no good answer to that. So that's one of the things the adversaries are looking at: they understand that our desire to stay evergreen and up to date can actually be a weakness, because of the way traditional security looks at the problem. That's, I think, the newest, biggest challenge that not enough people recognize, because they're so focused on doing a better job of making the car safer and not noticing that the factory door is left wide open on the other end.

00:50:27

It's interesting that different communities, different ecosystems, have different vulnerabilities; the supply chain looks different in each. In the supply chain report we found that in the Maven ecosystem, by and large, the dangerous behavior is not being up to date enough: a lot of libraries aren't updating their dependencies. But npm looks quite different, and so your threats look quite different as well.

00:50:56

Okay, I think we only have a couple of minutes left, and one of the questions I had down was: can you automate all application security practices? Which ones might be the easiest, and which the hardest? Can you automate everything in regard to application security? And I'll extend that to infrastructure-as-code security and security compliance as well.

00:51:29

Yeah, I think you can't, and there are a few things that work against you in terms of automating everything.

00:51:38

But no one wants to hear that.

00:51:42

Yeah. I mean, I think you can automate a lot of it, and automation obviously helps you do more with the manual resources you have left over. But there are things that just can't be automated, at least not right now, maybe in the distant future. Things like questions around design and how security is baked into the system from an architectural perspective. A lot of logic errors, application- or domain-specific logic errors that can lead to security issues: there aren't general scanning tools for those, because they are so specific and ad hoc. And then, on the infrastructure-as-code topic, I think this is an example too: there's the gap between how things are described in the code base and how they look when they're ultimately deployed.

00:52:42

Right. If you really want to take a broad view of the threat space, you have to worry about things like problems with the VM, or a particular version of Docker, or a particular version of Linux introducing some vulnerability that isn't reflected in your configuration. It's your configuration plus the deployment infrastructure plus the versions of things you're running that all work together there. I think we'll get closer and closer to that from an automation perspective, but it's something that's not always recognized as part of the threat space.

00:53:19

Yeah, there are lots of specific tools out there solving different parts of these problems, but you're right that in the end it's the combination. It's like going to the pharmacy: okay, I need this drug because my doctor said I needed some inflammation medicine, but does it interact with the two other medicines I'm taking? It's the same in these environments: just because you might have solved one problem doesn't mean you're not creating another, or hiding another problem within the infrastructure, at another layer of the stack.

00:53:59

Yeah. But also, it may often be the case that the problem you can solve with automation is the most important problem for you right now, in which case...

00:54:09

I was going to say to Derek: almost, no, you can't, and almost, it doesn't matter. Why does it not matter? Because I see so many organizations that are so far from that nirvana that they can't even answer what applications they have and what components are in them. They're not even in a situation where any automation can help in that automated-remediation sort of way. But they can automate understanding where they are. Don't rely on humans to report what components you're using; that's something you can automate. So don't let the fact that you can't get to a hundred-percent solution stop you from taking the obvious first steps and moving further along that spectrum, because the horrible truth is that so many organizations are so horribly broken in so many regards, as we talked about, and that's what needs to be fixed first, not automating that pull request at the end of the day. But as engineers, we sometimes like to focus on the hard-to-solve problems, not the obvious needs-to-be-solved problems.
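
Brian's point about automating the inventory rather than the remediation can be illustrated with something as simple as parsing a pinned Python requirements file into a component list. A real inventory would cover every ecosystem's manifests and lockfiles; this is just the shape of the first step:

```python
# Minimal component-inventory sketch: read pinned dependencies out of a
# requirements.txt-style manifest instead of asking humans what's deployed.
# Only handles exact 'name==version' pins, which is all this sketch needs.

def parse_requirements(text):
    """Return {package: version} from pinned 'name==version' lines."""
    inventory = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blanks
        if "==" in line:
            name, version = line.split("==", 1)
            inventory[name.strip()] = version.strip()
    return inventory

sample = """\
requests==2.25.1  # HTTP client
flask==1.1.2
# dev-only below
pytest==6.2.2
"""
inventory = parse_requirements(sample)
```

Even this trivial automation answers "what components are in this application?" reliably, which is the prerequisite for every intelligence feed and remediation step discussed earlier.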

00:55:18

Yeah. It makes me think back to a conversation with Shannon Lietz at Intuit. They went through and solved a whole bunch of problems, applying state-of-the-art technologies in their infrastructure to look at how to protect Intuit's applications and environments. But once they did that, they took a huge focus on the adversaries. What are the adversaries doing? What tooling do they have available to them? What are their attack patterns? And they began to measure things like adversary return rate. They said: if we're doing a really good job, and we know adversary attack patterns and can identify them, then when this person comes and tries to attack us today, do they actually come back and try to attack us tomorrow, or next week?
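
A back-of-the-envelope version of that adversary-return-rate metric: of the attackers seen in one period, what fraction reappear in the next? The event fields and attacker identifiers here are illustrative, not Intuit's actual telemetry:

```python
# Sketch of an "adversary return rate" metric: fraction of attackers seen
# in one time window who show up again in the following window. Attack
# events and attacker IDs are purely illustrative.

def adversary_return_rate(events, period_a, period_b):
    """Fraction of attackers seen in period_a who reappear in period_b."""
    seen_a = {e["attacker"] for e in events if e["day"] in period_a}
    seen_b = {e["attacker"] for e in events if e["day"] in period_b}
    if not seen_a:
        return 0.0
    return len(seen_a & seen_b) / len(seen_a)

events = [
    {"attacker": "ip-1", "day": 1},
    {"attacker": "ip-2", "day": 2},
    {"attacker": "ip-1", "day": 8},  # ip-1 returns the following week
]
rate = adversary_return_rate(events, period_a=range(1, 8),
                             period_b=range(8, 15))
```

A falling return rate over time would suggest attackers are giving up, which is exactly the success signal being described.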

00:56:10

And if we're hard enough to break into, they'll leave, if we've fortified and protected ourselves enough. So it's interesting to see this dynamic: once you've automated as much as you can, even if you can't get to everything, look at how you know whether you're being successful and how you apply metrics in these environments. Part of that is the adversary return rate. If you don't have adversaries, you don't need to invest in security; but if you know how your adversaries are performing, and whether they're coming to visit you often or not, then you'll know how successful you are. I'm being told by our hosts that we are out of time, but I know Brian, Stephen, and I are around at the conference. We have a birds-of-a-feather session later this afternoon on security, compliance, and governance that you can drop into as well.

00:57:11

I hosted that conversation last night, and there were a lot of really good questions and experiences shared with the team. So add that birds-of-a-feather session to your calendar; it's BOF-sec-audit-compliance, and you'll see it pop up later this afternoon in the agenda. Thank you, Brian; thank you, Stephen. It's always great to have your experience shared with the audience. And thanks, everyone, for submitting your questions along the way for these guys, and for them answering live.

00:57:44

Yeah. Thank you. This was fun.

00:57:48

Alright. And come take a look at the Sonatype and MuseDev booths if you haven't. I know they have a number of offers, white papers, live demos, and other giveaways. So definitely head to the Sonatype and MuseDev booths when you can, and I hope everyone has a great conference.