Security Differently (Europe 2021)

John has over 35 years of experience, focusing on IT infrastructure and operations. He has helped early startups such as Chef, Enstratius (now Dell), and Docker navigate the "DevOps" movement. He is one of the original core organizers of DevOpsDays and has been a prominent keynote speaker at various DevOps events throughout the years. He is also a co-author of The DevOps Handbook along with Gene Kim, Jez Humble, and “the Godfather” of DevOps, Patrick Debois.


John Willis

Senior Director, Global Transformation Office, Red Hat



Hello everybody. I'm John Willis, senior director of global transformation at Red Hat. This presentation is called Security Differently. As I said, I've been at Red Hat about a year and a half. If you recognize some of these people: about 18 months ago Andrew Clay Shafer, on the left, invited us to come in and build this team at Red Hat, and it's been just a blast. So from left to right: Andrew Clay Shafer, then Kevin Behr, who you probably know as co-author of The Phoenix Project. That's me, the short guy, if you don't know me. And Jabe Bloom, who has been working with Kevin Behr for years and is working on his PhD in transition design; he really anchors the team brilliantly. So one of the things I've been trying to do is ask this question. I've been looking at DevSecOps. If we go back and look at DevOps, DevOps is probably 12 years old.


I like to say rounded down to 10, but by the name, the definition of the name, it's still 12 years old. And DevSecOps is probably about seven years old. I generally credit Shannon Lietz with coining the term DevSecOps; she wrote the DevSecOps manifesto in 2015. And one of the things I've been thinking about, this meta question, is: what would DevSecOps look like if DevOps never existed? Hold on for a second — I know that's a terrible way to express this, but for all the good we've done in DevSecOps, I wonder whether we tried to solve a security problem through the lens of DevOps. In other words, did we take this square peg and just jam it into a round hole, where a lot of things look like they work and we've advanced, but maybe we haven't really changed the behavior that we wanted?


Right. I'd love to talk more about this if it's confusing. But as I visit companies, I look at people who have good hygiene, or what we would call high-performing organizations. I'll see a lot of companies that have high-performance traits in terms of how they deliver software, and then they've been very serious about bolting on, or abstracting, a security overlay on their DevOps-style modern delivery model. So they might be doing their SAST and DAST and all of those things. And I say that's a great thing, right? The glass is definitely half full in terms of DevSecOps. But then those same organizations, when I talk to them, will talk about structures like the three lines of defense.


Sometimes it's an updated two lines, but it's lines, right? It's this idea where you're creating these walls, or firewalls, by design: there's a team that implements the control, a team that owns the control, and then if the first line doesn't catch it the second catches it, and then there's internal audit. And I think about the original problem statement of DevOps, which is what Andrew Clay Shafer introduced at Velocity 2009 in a presentation called Agile Infrastructure. He basically talked about development wanting change and this wall of confusion between development and operations. We all know that story: we want to bust down the wall of confusion. So when I think about the three lines of defense, I also think about Conway's Law.


A lot of times in presentations we talk about Conway's Law when moving from waterfall — or not just waterfall, but monoliths — to microservices, but it also affects general organizational design. The adage states that organizations design systems that mirror their communication structures. I'll give you a great example. If you ever have the chance to look at the Equifax breach, Congress did a really good report and write-up on it. One of the many interesting things was that the CSO reported to the chief legal officer. So under testimony, when the CSO was asked, "How come you didn't report the breach to the CIO?", the answer was, "I didn't think about it." That organizational design and communication structure was set up to fail, because the CSO was reporting to the chief legal officer.


And that's the lens — and again, there were many other things there. So if I look at the three lines of defense, I'll take the liberty of saying it's possible that there are walls of confusion between those lines. And I don't just mean possible: I know that in a lot of organizations this is the case. So what do we want to do? In DevSecOps we're striving to take some of the opportunities we created in DevOps, but in the cases where we fall back to these structures — not changing the security model, just putting the square peg in the round hole — we're not getting the full value of collaboration. Where we really want to be is with internal audit and the second-line control owners working with everyone else.


We want them all together in the initial story, requirements, and design. And I could go on and on — that's just one of many examples that made me raise the question of whether we'd be doing DevSecOps differently if there hadn't been a DevOps. So a lot of the things I've been working on run through a couple of working groups. This idea came up in working groups focused on cloud, but it's a general way to look at what we call the minimum viable security posture, or just minimum viable security. And the questions that need to be asked are: how do you prove that you're safe, how do you demonstrate that you're secure, and how do you do both?


It seems easy. But if you think about how we mostly do things today: the way we prove we're safe is typically change records, what I would call implicitly or subjectively defined information — humans describing stuff, and other humans basically verifying those humans. And the way we demonstrate is audits, which are pretty much subjective and high-toil. So is there a way we can actually prove that we're safe and demonstrate that we're secure? I'm going to propose that there are some models. There's an idea I've been playing with for a while now called modern governance. The idea in general: if we're going to do security differently, then anywhere we see implicit security models, can we move them to explicit, proof-based models? Along the same lines, anything we're doing from a subjective perspective should move to objective, and even verifiable.
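To make "explicit, proof-based" concrete, here's a minimal sketch of tamper-evident evidence in Python. It's illustrative only: the key, the field names, and the change record are all hypothetical, and a real system would sign with asymmetric keys from a KMS rather than a shared HMAC secret.

```python
import hashlib
import hmac
import json

# Hypothetical signing key; in practice this comes from a KMS or HSM.
SIGNING_KEY = b"demo-only-key"

def sign_evidence(record: dict) -> dict:
    """Wrap a change record in a tamper-evident envelope.

    The payload is canonicalized (sorted keys) so the signature is
    reproducible, then signed with HMAC-SHA256.
    """
    payload = json.dumps(record, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": record, "signature": signature}

def verify_evidence(envelope: dict) -> bool:
    """An auditor re-derives the signature; any tampering breaks it."""
    payload = json.dumps(envelope["payload"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["signature"])

envelope = sign_evidence({"change_id": "CHG-1234", "approved_by": "pipeline"})
assert verify_evidence(envelope)
envelope["payload"]["approved_by"] = "attacker"  # tamper with the record
assert not verify_evidence(envelope)
```

The point of the sketch: the auditor checks math, not a human's story about what happened.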


And I'll show you an example of that here. One of the things I've worked up — and this is going to be a terrible phrase, sorry — is that I think there has to be a definition that includes something like "post-cloud-native modernization." It's terrible, but it's a way for me to think about the problems of this place we're in, where we're using all this new technology and changing our behaviors with DevOps and all those things. So I started thinking about the three most important things in this post-cloud-native modernization, and they're risk, defense, and trust. And how do I think about each? As I told you, I want to go from implicit to explicit, and from subjective to objective.


What I really want is subjective to objective and verifiable. So let's look at each one, starting with risk. We want to move from change managers and humans telling stories in a change record — auditors reviewing those stories, asking for further clarification, the stories pointing to log records, looking for screen prints, creating high levels of toil — to more objective, digitally signed evidence where no human is really involved and the auditor just looks at the evidence. I like to say: think blockchain, but basically don't use blockchain — something like that for all the evidence that happens: your commits, the human things like the review on a commit, the review on a pull request, pairing on a pull request, the SAST log, all of it. Then we move even further: if we can create objective evidence, could we validate it through something like security chaos engineering — verification by attack? For example, if this port should never have been opened, let's just try to open the port.
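The "just try to open the port" idea can be sketched as an outside-in probe. This is a hypothetical example, not a real chaos-engineering tool: the function names and the policy framing are mine.

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Probe a TCP port from the outside, the way an attacker would."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

def verify_closed(host: str, port: int) -> bool:
    """Chaos-style verification: policy says this port must be closed,
    so we actively try it and report failure if it answers."""
    return not port_is_open(host, port)
```

A continuous-verification job would run checks like this on a schedule and file a finding — signed evidence, in the spirit above — whenever the live system disagrees with the declared policy.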


Or if there's no way this image should have been started with this vulnerability, let's start it — attack from the outside in. In defense, we move from detect-and-respond to building more intelligent cyber data lakes. And then we move into something like what Shannon Lietz spoke about back in 2019 here at DOES — an amazing presentation, and I've been fortunate enough to have her as a mentor. She talks about adversary analysis. Sooner or later she's going to publish some of this stuff, and it's heavy stuff, but it's really looking at defense from the outside in: what are the adversarial opportunities, what are metrics like adversary retention time? Just brilliant stuff. And then last is trust. We move from perimeter-based to zero trust architecture.


I think everybody knows that, but then we go further, into what I would call distributed trust models, and I'll show you some examples. So basically three novel ideas on expanding what we did: the DevOps Automated Governance reference architecture, automating the change from implicit to explicit; getting cyber data lakes and intelligence built into our response, plus honeypots and adversary analysis; and then thinking a little differently about how we build trust models. And I'll leave you with this on the trust model idea: even with all this post-cloud-native modernization — I'm sorry, I'm terrible — most of the trust models today are still north-south. We really need to start thinking more about east-west, and I'll show you some good examples, because one of the problems is server-side request forgery, or account takeovers, in the cloud world.


There are a lot of shared accounts, and sometimes zero trust architectures are not enough, because if I take over your account, or there's a server-side request forgery, the trust model is going to trust me. So let's look at risk. What we want to do in risk is reduce audit toil. Like I said earlier, what could the goal be, moving subjective to objective? Could we actually turn 30-day audits into zero days, where the evidence is digitally signed and immutable? The audit should just look at the immutable evidence — nothing can be tampered with — and that increases the audit efficacy. In other words, today, in this post-cloud-native modernization — I don't know a better way to say it — it's very hard to get accurate audit data.


I mean, think about it. You're running ephemeral containers in pods that are moving across clusters, and then you've got service mesh configuration definitions, and then you get even further into functions and serverless. Honestly, it's what I call security and compliance theater: the audits are not even close. There's such a gap between the modern technologies people are using right now and the tools and behaviors they use for audit. So we move to automated, objective, immutable evidence, and like I said earlier, we start thinking about continuous audit and this idea of continuous verification — I think about security chaos engineering. A lot of this comes out of work I've been involved in with Gene Kim — you've probably seen the DevOps Enterprise Forum papers; there are probably 75 or 80 of them by now.


They go back to 2014, I think, and in 2015 one of the first publications was An Unlikely Union: DevOps and Audit. Then in 2018 there was Dear Auditor, which was essentially an open letter — almost an apology — to auditors in about two pages, and the rest of the pages were promises for very detailed regulatory control. And in 2019 I worked with a team of people where we actually tried to set down this first idea of objective, explicit evidence in the pipeline. It was called the DevOps Automated Governance reference architecture. A lot of this came from an original paper in 2017 from Capital One, where they were designing their pipelines and said: these are the 16 gates a development team needed to pass — basically to bypass the CAB. And some of the discussions that came out of that were: well, if you're going to do that, why can't we use this for immutable evidence?


That really created the initial discussions that led to that first DevOps Automated Governance reference architecture. You can see things like source code version control, optimal branching strategy, and so on — all the things you would expect in a healthy delivery supply chain. In that 2019 paper we defined seven stages, and we created, I think, seventy-five attestation definitions. It's a Creative Commons book, and we really break it down — if you look at the authors, it's Topo Pal, Courtney Kissler, and others; I'm probably leaving some people out. I won't go through all of these, but here are examples of things that were in the original attestations, or ones we've picked up from the community: things you could set as pass/fail, or at least put in an audit trail, like change size, unit test execution, unit test coverage, cyclomatic complexity, an optimal branching strategy — some of the things you'd see in the build stage.
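A rough sketch of what evaluating those build-stage attestations might look like in practice. The metric names and thresholds here are illustrative, not the ones from the paper.

```python
# Hypothetical thresholds; real values would come from the team's policy.
ATTESTATIONS = {
    "change_size_loc":       lambda v: v <= 400,   # small, reviewable changes
    "unit_tests_passed":     lambda v: v is True,
    "unit_test_coverage":    lambda v: v >= 80.0,  # percent
    "cyclomatic_complexity": lambda v: v <= 10,    # per-function ceiling
}

def attest(stage_metrics: dict) -> dict:
    """Return pass/fail per attestation, suitable for signing and storage."""
    return {name: bool(check(stage_metrics[name]))
            for name, check in ATTESTATIONS.items()}

results = attest({"change_size_loc": 120, "unit_tests_passed": True,
                  "unit_test_coverage": 91.5, "cyclomatic_complexity": 7})
assert all(results.values())
```

Each result would then be digitally signed and written to the immutable evidence store rather than described by a human in a change ticket.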


Again, if you get the guide, you can see the longer version with detailed descriptions. In the package stage, we want to make sure our artifacts are versioned, there's code signing, there's container scanning, there's the right package metadata — the hygiene stuff. And in pre-prod there's more. Ultimately, after the first paper in 2019, we realized there was a really good discussion to be had: if you could have this kind of engine, could you then create an interface — human-readable, machine-interpretable, version-controlled — to actually drive those automated systems? So, risk as code, or a policy DSL, whatever you want to call it, that you can inject. We have more discussions about this in the new paper. In 2020 we didn't do anything related to this, but just a couple of weeks ago we started the second version.
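Here's one hedged sketch of what such a human-readable, machine-interpretable policy might look like — the rule syntax and metric names are invented for illustration, not from the reference architecture.

```python
import operator

OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq}

# A policy a human can read and a pipeline can interpret; version-control it
# like any other source file.
POLICY = """
unit_test_coverage >= 80
critical_vulnerabilities == 0
change_size_loc <= 400
"""

def evaluate(policy: str, metrics: dict) -> list:
    """Parse 'metric OP value' lines and return the rules that failed."""
    failures = []
    for line in policy.strip().splitlines():
        name, op, value = line.split()
        if not OPS[op](metrics[name], float(value)):
            failures.append(line)
    return failures

failures = evaluate(POLICY, {"unit_test_coverage": 91.5,
                             "critical_vulnerabilities": 0,
                             "change_size_loc": 650})
assert failures == ["change_size_loc <= 400"]
```

In practice you'd reach for an existing policy engine rather than hand-rolling one; the point is only that the rules live as reviewable text, and the pipeline enforces them mechanically.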


And what's interesting is that we're going to cover a lot of what we've learned from the companies that have implemented the original reference architecture. I thought when we wrote that 2019 paper that everyone was going to say, "Oh my God, it's the greatest thing ever" — not because we're brilliant, but just because I thought it was a brilliant idea. And we realized that we weren't fun. So we're going to try to have a little fun this time. We're going to take a page from The Phoenix Project and create a narrative called Investments Unlimited. We'll have some characters, it's going to start with a big failed audit, and then we'll apply a lot of the lessons learned from the model we used in 2019.


So it should be fun. It will probably be out in September, before the virtual Vegas event. One other thing that was interesting: I was fortunate enough to be pulled in to talk to SolarWinds about how they could use automated governance. A friend of mine who is a big fan of automated governance brought me in to talk to some SolarWinds executives. There's some really good data on the breach from CrowdStrike, and MITRE has some brilliant documentation about how they think about it. Just for those who aren't aware, we're talking about the original breach at SolarWinds — the compromised Orion software — not what happened to everybody else downstream. Both CrowdStrike and MITRE did a really good job; this particular analysis was from CrowdStrike, mapped to the MITRE ATT&CK framework, if you're familiar with it.


What I wanted to show is that automated governance doesn't solve everything. But when I looked at all the explanations of what happened at SolarWinds — how the attackers were able to find their way in and sit on top of MSBuild — there were a number of places where just the general policy structure of the automated governance we've been talking about would have helped. First off, pipeline as code — basically building the infrastructure with something like Ansible or Chef or Puppet. There were a lot of examples of masqueraded logs, so again, immutable stores for this stuff — I'll talk about a couple of products for that. And code signing: it was really interesting to see all the signals that were in the logs. It's easy to be a Monday-morning quarterback, but when you look, there were all these code-signing and hash mismatches — try saying that fast.
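A toy illustration of the kind of hash-mismatch red flag an automated check could surface; the artifact name and manifest here are hypothetical, not actual SolarWinds data.

```python
import hashlib

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Hypothetical signed manifest recorded at build time.
signed_manifest = {"example.dll": digest(b"original compiled bytes")}

def find_mismatches(manifest: dict, shipped: dict) -> list:
    """Compare shipped artifact hashes against the signed build manifest.

    Any mismatch is a red flag that something changed after the build."""
    return [name for name, data in shipped.items()
            if digest(data) != manifest.get(name)]

# A tampered artifact shows up immediately.
assert find_mismatches(signed_manifest,
                       {"example.dll": b"backdoored bytes"}) == ["example.dll"]
assert find_mismatches(signed_manifest,
                       {"example.dll": b"original compiled bytes"}) == []
```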


You saw those all over the place. Again, I wouldn't say it would have solved everything, but these things would have turned up as red flags and control gaps. So in general, for risk: can we use automated governance to create these digitally signed attestations? At Red Hat we have a fabulous project called Pelorus — and not just because I'm a Red Hat employee; I could do a whole presentation on why I think it's cool. There are some interesting ways to do the evidence store. Grafeas is what we used in the first paper, but sigstore is an interesting project — a collaboration between Red Hat and Google. It was originally built for certificate transparency, but it can actually be used for audit data. It's a Merkle tree.
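For intuition, here's a minimal Merkle-root computation over pipeline evidence — a toy sketch, not sigstore's actual implementation (sigstore's transparency log adds inclusion proofs, signing, and much more).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves: list) -> bytes:
    """Fold a list of evidence blobs into a single Merkle root."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:            # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

evidence = [b"commit abc123 signed", b"pull request reviewed", b"SAST clean"]
root = merkle_root(evidence)
# Changing any single piece of evidence changes the root:
tampered = merkle_root([b"commit abc123 signed",
                        b"review skipped", b"SAST clean"])
assert root != tampered
```

That single root is why an auditor can trust a huge evidence trail without re-reading it: tampering with any leaf is detectable from one hash.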


It's immutable. And whether it's OpenShift or elsewhere, there are some really good compliance operators out there for Kubernetes. Software bill of materials — I'll talk about that in a minute — and continuous verification. Actually, let me say SBOM differently: I've done a lot of research recently on the software bill of materials. The most prominent efforts are OWASP CycloneDX; SPDX, from the Linux Foundation; the NTIA — the National Telecommunications and Information Administration, something like that; and then MITRE, who's trying to bring all of these together in one place. The issue here — and I'm trying to work a little with MITRE, maybe through Red Hat — is that it's all package- and vulnerability-focused.


There are so many other things that need to be in an SBOM. I could go on, but I think generating a text-based SBOM at each step of the way might be the wrong approach; we might instead create digital evidence at each step and then a final SBOM that carries a linked list of that evidence along with the license data and all the rest. Anyway, I'll write a blog about this pretty soon. Quickly, I want to talk about defense differently. Here we want to do the same thing: reduce the toil related to our defensive posture and increase the efficacy — are the things we're doing really creating high efficacy? One group I've spent a fair amount of time with is focused on SIEM and SOAR across multi-cloud providers, and on creating intelligent data lakes.
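Here's one hedged sketch of that "final SBOM with a linked list of evidence" idea; the field names are invented for illustration and match no existing SBOM standard.

```python
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_sbom(packages: list, evidence_blobs: list) -> dict:
    """A final SBOM that carries not just package/license data but a
    linked list of digests pointing back to the per-stage evidence."""
    evidence_links = []
    prev = None
    for blob in evidence_blobs:
        entry = {"digest": sha256(blob), "previous": prev}
        evidence_links.append(entry)
        prev = entry["digest"]
    return {"packages": packages, "evidence": evidence_links}

sbom = build_sbom(
    [{"name": "libexample", "version": "1.2.3", "license": "Apache-2.0"}],
    [b"build attestation", b"container scan report", b"code-signing record"],
)
assert sbom["evidence"][0]["previous"] is None
assert sbom["evidence"][2]["previous"] == sbom["evidence"][1]["digest"]
```

Because each entry names its predecessor's digest, the chain of stage evidence can't be silently reordered or trimmed after the fact.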


Then you have deception technology, and, like I said, Shannon Lietz's work on adversary analysis. One of the papers we did last year was Automated Cloud Governance, where we got some really large companies together to look at these multi-cloud event problems. You can see the list — you'll have the slide deck — really big companies. We're working on the second version now, which is interesting because we've got a fair number of the cloud providers: we have IBM, we have Google, and Microsoft, and hopefully we'll have Amazon at some point. We're trying to create a unification of cloud control. The problem statement is that you get all these different events that mean the same thing but have different context and different wording, and it's very hard not only to unify them but then to add layers of common metadata.


So it's a really interesting problem. We call it the Cloud Security Notification Framework (CSNF) — go to ONUG and take a look at it; it's really cool. It's actually modeled after the way SNMP solved the original networking problem. We're not using SNMP, but we're thinking that's the way to solve this sort of complexity. Here's an example: we create a decorator, and the decorator is interesting because it creates these common event structures. One of the cool things is that we anchor the decorator with NIST definitions and the MITRE ATT&CK framework, so you get the whole picture. It's a work in progress — go look at ONUG and see where we're at, or ping me and I can fill you in. So ultimately, for defense, what we're looking at is intelligent data lakes, the work we're doing with Automated Cloud Governance and CSNF, and the MITRE ATT&CK framework, which just fits everywhere.
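A toy sketch of the decorator idea: normalizing two hypothetical provider events into one common structure. The field names, severity mapping, and the NIST/ATT&CK anchors are illustrative placeholders, not the real CSNF schema.

```python
# Hypothetical raw events; each provider uses its own field names.
aws_event = {"detail-type": "Suspicious finding", "severity": 8}
azure_event = {"alertType": "Suspicious process", "Severity": "High"}

SEVERITY_MAP = {"High": 8, "Medium": 5, "Low": 2}

def decorate(provider: str, event: dict) -> dict:
    """Normalize provider-specific events into one common structure and
    anchor them to shared vocabularies (the NIST function and ATT&CK
    tactic ids below are illustrative placeholders)."""
    if provider == "aws":
        title, severity = event["detail-type"], event["severity"]
    elif provider == "azure":
        title, severity = event["alertType"], SEVERITY_MAP[event["Severity"]]
    else:
        raise ValueError(f"unknown provider: {provider}")
    return {"provider": provider, "title": title, "severity": severity,
            "nist_function": "Detect", "mitre_tactic": "TA0002"}

a = decorate("aws", aws_event)
b = decorate("azure", azure_event)
assert a["severity"] == b["severity"] == 8  # same meaning, same shape
```

Once events share one shape and one severity scale, a single data lake query or SOAR rule can work across every provider.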


I'm really a big fan of that. If you haven't seen SCAP and OpenSCAP, there are some interesting models there for building metadata around this; and cyber ranges, if you haven't looked at building honeypots. And then again, there are really no tools for adversary analysis — this is Shannon's work. Finally, I want to talk about trust differently, the third anchor of the three things in this post-cloud-native modernization: modern trust. Certainly zero trust architecture — I showed you that in the transition slide; we want to move there — but we also need automated control-based assessment. If you haven't looked at OSCAL: it's very heavyweight, but for people who have to deal with FedRAMP-certified technologies, OSCAL is very helpful.


It's a really good self-documenting system, though still a little heavy. Then there's distributed secrets management and distributed trust. In the trust model you have NIST SP 800-207, the zero trust architecture. One thing I want to say about zero trust — I said this earlier — is that if you get compromised, you still have a problem. In other words, if somebody gets into a cluster or infrastructure and they're able to do a server-side request forgery or an account takeover — and there are many shared accounts, like the metadata server in the cloud infrastructure; there are by-design shared structures for authentication and authorization — zero trust alone doesn't save you. So there are two things I'm really interested in. SPIFFE is an interesting project in that it was born out of the service mesh sidecar model for containers: node-to-node ephemeral identity, or short-lived certificates.
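To illustrate why short-lived credentials limit the blast radius of an account takeover, here's a toy SPIFFE-style sketch; the TTL, the ID format, and the function names are my assumptions, not the SPIFFE workload API.

```python
import time

TTL_SECONDS = 60  # short-lived by design, like a SPIFFE SVID

def issue(identity: str, now: float = None) -> dict:
    """Mint a short-lived credential for a workload identity."""
    now = time.time() if now is None else now
    return {"spiffe_id": f"spiffe://example.org/{identity}",
            "expires_at": now + TTL_SECONDS}

def accept(credential: dict, now: float = None) -> bool:
    """A stolen credential is only useful until its tiny TTL runs out."""
    now = time.time() if now is None else now
    return now < credential["expires_at"]

cred = issue("payments-service", now=1000.0)
assert accept(cred, now=1030.0)      # within the TTL
assert not accept(cred, now=1200.0)  # expired: workload must re-attest
```

Contrast this with a long-lived shared account credential: steal it once and it works indefinitely; steal a short-lived identity and it dies in a minute.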


There are also a lot of people looking at this as a possible basis for a new version of secrets management. Vault is the hot, cool thing right now, and I think Vault is a great product, but there are discussions about whether we could use these models of distributed trust, or distributed identity, for secrets — there should be trust from a secret to an identity. I also told you about sigstore. Again, not just because it's Red Hat: I have to say, I think it has incredible potential, certainly for immutable audit. It's everything you'd want out of blockchain for the automated governance structure I talked about earlier, but much lighter weight and cleaner. The whole thing is a Merkle tree — it's immutable, it cannot be mutated — and it has this other thing.


That's really cool: it has an event-driven architecture on top of the ledger structure, so you can do some cool stuff like looking for denial of service or any type of anomalous behavior. And I think there's the possibility — again, this is all emergent — that you could also do secrets management in this model, very similar to what we're doing with SPIFFE. So anyway, that's my presentation. I know I went really fast, but either I give you a presentation where I cover one subject in gory detail, or I give you a survey where I can at least teach you some of the things I've learned. In the chat during the presentation I'll list all the links, and I'll be there in the chat as well while you're listening. And as always, I'm very reachable — just tweet.


So thank you so much. Again, if you have any questions or want to discuss this, I'm really interested in this idea of security differently. Gene always asks us to make an ask, so here's mine: I want to start more discussions about security differently. What I'm hearing from CSOs is that we're doubling and tripling our cyber defense budgets, but we're not getting better — and that's just a recipe for disaster. So I'd love to have more detailed conversations; we're already having them with some really interesting clients and friends. Anyway, thank you so much.