Audit Ready Pipelines (US 2021)

Auditors want proof that the right people did what they were supposed to do, when they were supposed to do it, and they want that information on demand. Developers tend to run for cover whenever release managers show up with an audit request in their hands. Audit-ready pipelines aim to address both of these problems. The idea behind audit-ready pipelines stems from trying to merge software delivery automation, CI, and CD together into something where you can get increased visibility and understanding of your entire process, and what it means to different stakeholders. This session is presented by Cloudbees.

Breakout session | US | Las Vegas | 2021

(No slides available)


Anders Wallgren

VP of Technology Strategy, CloudBees



Hi, I'm Anders Wallgren, VP of Technology Strategy at CloudBees. Today I'm going to talk about audit-ready pipelines: how we can use automation and orchestration to make our audit processes more predictable and easier to deal with, and what the desired state really looks like. I'll start with that desired state. Think of our end-to-end release pipelines as the core, the heart, of our audit trail, and build from there. What I'll talk about today is how to achieve that using a connected tool chain, with immutable objects at the core of the orchestration, and then building in both automated and manual approval gates, collecting evidence along the way. The outputs of all the commands we run as part of our release process are captured in a common data repository.
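As a rough illustration of that idea, here is a minimal sketch, not any vendor's actual API; every name below is hypothetical. A pipeline run accumulates immutable evidence records as each stage executes, and everything lands in one common repository:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)  # immutable: evidence can't be altered after capture
class Evidence:
    tool: str         # which tool produced this (CI, SCM, scanner, ...)
    command: str      # the command that was run
    output: str       # captured output, kept for later inspection
    captured_at: str  # timestamp, part of the audit trail

@dataclass
class StageRecord:
    name: str
    evidence: list = field(default_factory=list)

    def capture(self, tool: str, command: str, output: str) -> Evidence:
        ev = Evidence(tool, command, output,
                      datetime.now(timezone.utc).isoformat())
        self.evidence.append(ev)
        return ev

# The "common data repository" is simply one place everything lands;
# a plain list stands in for a real datastore here.
audit_repository: list = []

dev = StageRecord("dev")
dev.capture("git", "git log -1", "abc1234 Fix login bug")
dev.capture("jenkins", "build #42", "SUCCESS")
audit_repository.append(dev)

print(len(dev.evidence))  # 2 evidence records collected in the dev stage
```

The frozen dataclass is the point: once an evidence record is written, it cannot be quietly edited later, which is what makes the trail trustworthy at audit time.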


That data is then available later for closer inspection. All of the actions we take, all of the automations we run, and all of the artifacts we produce are governed by centralized role-based access control or access control lists. Part of our goal here is to provide end-to-end visibility over not only what our process is, but what the results of running it were, collecting data, metadata, and evidence links along the way, so that at the end we have a one-click automated audit report with data we trust. The current state of audit, for many of us, is more manual and fraught with uncertainty than that. Typically we get an email at the end of the quarter saying: guess what?


It's time to do audits. Yay. Then you have to leap into action. And if your software pipeline is fragmented into islands of automation, as I like to call them, you've got to manually retrace all of your steps, going between all the different tools in your tool chain that you use to build, test, qualify, release, and deploy your software. That's a real challenge. You've got to go into your issue tracking tool to figure out which change request is involved, look at your production systems to see what was actually deployed, and check your deployment tools to make sure everything worked fine.


Then I look at my CI tooling to figure out what went into the build, look at my SCM system to figure out what code changed, and so on. It's a manual process, and that leaves us with a challenge: with this disconnected process and these disconnected sets of tools, the data is scattered all over the place. We've done a pretty good job over the last 10 or 15 years, with agile, with DevOps, with DevSecOps, of uniting the culture and tearing down the cultural and organizational silos we built up in the old ways of doing things. But we're still challenged by the tool silo, the data silo, where we have to hunt and peck around to find all of the relevant data we need, whether for audit processes, process improvement, or what have you.


That makes it very difficult to get a truly traceable set of data, proof beyond the attestation methods we use today, which are basically just somebody saying, "Yeah, I did it." As long as you trust that person, that's great, but attestation alone is probably not good enough these days, particularly because we can do better. We have the ability to collect data from a specific tool saying: here's what we did, here's the configuration we used to do it, here's the output, and then automatically prove that we've passed an audit or control requirement and move on to the next stage in the pipeline. So those are the risks in this disconnected space. And by the way, the problem is not that we're using a lot of different tools.


The problem is that there isn't one way to collect, manage, and govern everything that happens around those tools. This is not an argument for "just put everything into one tool and you'll be happy." I certainly believe that best-of-breed tooling, for all the places where we need tools and platforms to build and deliver our software, is the way to go. But we do need an overall, end-to-end approach, an uber-orchestrator if you will, so that we have one pane of glass through which we can look at all of this data. When I show you what that looks like in real life, you'll see a little more of what I mean by that.


So the, the, the risks that we have in our current processes, right, we're, we're gonna spend more time on this then than we want. You know, we, we spoke to a, to a release manager, one of our customers who said they, um, they collect audit data weekly. They, um, analyze it every couple of weeks to produce a monthly audit report. And, and that process takes them about 18 hours to do so. In other words, several days of, of time is spent doing this. And most of it is just sort of the, the manual grunt work of collecting the data, correlating the data, making sure we didn't miss anything, all of those sorts of things. And then, you know, that that introduces errors, that it introduces cost. And if you have to go chase down data for things that happened, you know, one week ago, two weeks ago, three weeks ago, you, you know, you're, you're gonna, you're gonna piss people off, right?


You're going to have disgruntled developers being distracted from the task at hand, or disgruntled ops people being pulled off their work to answer questions about what happened two weeks ago. Nobody wants to do that. And as a result, you may end up failing to satisfy your control requirements, because you have incomplete or suspect data. That's not the place we want to be. We really want to use the pipeline tooling and the orchestration platforms we already have to create and use audit-ready pipelines. I'm going to go through a more real-life example here, but first a quick review of what we're looking for.


We want a way to connect our tool chains together. We've got 50 or 100 or more tools in our software delivery pipelines that we use to build, test, qualify, release, and deploy our software, and we want to connect those together. We want to automate the overall orchestration, including all of the approval gates, whether they're manual or automatic. If they're automatic, we want to collect the right metadata and the right information so we can prove we passed a particular control or governance requirement. And if we have a manual step in there, that's okay; it's not ideal, but reality says we still have a lot of that going on. What we want to do is account for it in our process and collect that data and information as well.
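To make the gate idea concrete, here's a hedged sketch, with made-up names and no real orchestrator API, of how an automated gate and a manual gate can both emit the same shape of audit record, so manual steps are accounted for rather than invisible:

```python
from datetime import datetime, timezone

def record_gate(gate_name, gate_type, passed, detail):
    """Both automated and manual gates produce the same audit record shape."""
    return {
        "gate": gate_name,
        "type": gate_type,   # "automated" or "manual"
        "passed": passed,
        "detail": detail,    # metadata proving how the decision was made
        "at": datetime.now(timezone.utc).isoformat(),
    }

def automated_coverage_gate(coverage_pct, threshold=80.0):
    # Automated gate: the tool's own output is the evidence.
    return record_gate("coverage", "automated",
                       coverage_pct >= threshold,
                       {"coverage": coverage_pct, "threshold": threshold})

def manual_approval_gate(approver, approved, comment):
    # Manual gate: at minimum, capture who approved and why.
    return record_gate("release-approval", "manual",
                       approved,
                       {"approver": approver, "comment": comment})

audit_trail = [
    automated_coverage_gate(86.5),
    manual_approval_gate("release-manager@example.com", True, "LGTM for Q3"),
]
print(all(g["passed"] for g in audit_trail))  # True: pipeline may promote
```

The design point is uniformity: because manual approvals land in the same trail as automated ones, the one-click report at the end doesn't have gaps wherever a human was involved.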


And if we're using a manual attestation process for some of our governance controls, then at least collect the evidence link, collect as much information as we can as part of that step, and capture a little metadata around it. Again, all of this ends up in a common data repository where we can look at it in one place. We don't have to hunt and peck through all the various systems, some of which we may not even know the location of, some of which we may not have access to and have to reach through other people. We're able to apply access control across the pipeline, the artifacts involved, and the metadata. And this starts to give us a much better picture of end-to-end visibility across our entire pipeline process.


Toward the end of that, we end up with one-click automated audit reports with data we know we can trust, and that's where we want to be. Conceptually, what we're talking about is taking the release pipeline and orchestrating across the different tool chains we use: from the development CI processes, source code management tools, Jenkins and other CI tools out there, through QA processes like test automation and continuous testing, across into our artifact management tools and our ticketing tools. And as we go into the higher-stage environments, into pre-prod, staging, and production, we collect all the right information throughout the process so that it's available at the end. Now I'm going to move away from the conceptual picture and show you an actual screenshot of this type of process happening.


What you see here, from left to right, is our development, QA, pre-prod, staging, and production environments, and all of the various tooling being orchestrated: our CI tooling, our ticketing systems, our SCM tooling, our quality assurance tooling. SonarQube, Jenkins, Git, all of these are being orchestrated here. We're collecting metadata along the way, both the evidence we want for audit purposes and data for things like duration reports, which we'll see in a minute. You can see that already in the dev stage we've started collecting information that will be useful later on: we know which issues relate to the CI pipeline we're running here.


We've got our changelog report from our SCM repository and various other information from our CI systems, and we're still just in the dev stage. Now, when we're done and we're looking at all the audit information we collected, you can see an example approval report. Auditors want to know who did what in the release, and the approval audit gives us visibility into all of the automated and manual approvals and gates we passed through as we promoted the pipeline from stage to stage, and finally into production. What's interesting to me here is that we collect information not just from the automated gates but also from the manual steps, if we have any, so that even when we do some things manually, we're still collecting all the right metadata.
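One way to picture that kind of approval report, again as a sketch with invented record fields rather than the actual product's schema, is a simple roll-up over the gate records collected at each stage:

```python
# Each gate record says who (or what) approved a stage promotion.
gate_records = [
    {"stage": "dev",      "gate": "unit-tests",      "type": "automated",
     "actor": "jenkins",   "passed": True},
    {"stage": "qa",       "gate": "security-scan",   "type": "automated",
     "actor": "sonarqube", "passed": True},
    {"stage": "pre-prod", "gate": "change-approval", "type": "manual",
     "actor": "alice",     "passed": True},
]

def approval_report(records):
    """Answer the auditor's question: who did what, at which stage?"""
    lines = []
    for r in records:
        verdict = "approved" if r["passed"] else "rejected"
        lines.append(f"{r['stage']:>8}: {r['gate']} {verdict} "
                     f"by {r['actor']} ({r['type']})")
    return "\n".join(lines)

print(approval_report(gate_records))
```

Automated actors (Jenkins, SonarQube) and human approvers appear side by side in the same report, which is exactly the visibility the approval audit is meant to give.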


We can use that going forward to decide where to make improvements. And along the way we're collecting evidence links: links into other systems, links into other reports. Again, we're putting it all into one place, one pane of glass, where we can see all of the evidence links we need to verify that we have a process, we followed it, and we have the data to prove it. Another interesting thing we get, which is maybe not always considered directly related to auditing, is duration information, because you almost certainly have internal goals around performance, cycle times, and those sorts of things.


And that's not unrelated to audits, because experience tells us that when we layer controls onto processes, they tend to slow things down. If we have manual processes or manual controls that need to happen, that's often where we start to introduce delays in our pipeline. By producing an audit report that contains duration and cycle-time data from our pipelines, we now have visibility into how long the release took, with breakdowns for each task. Whether that's an automated task where it took an hour and it would be better at ten minutes, or it took two minutes and that's awesome, or a manual step somewhere where the approver was out to lunch and it didn't get done until the afternoon.


So we had a three-hour delay in our process, or somebody was out sick, or on vacation, or overworked and just couldn't get to it. We now have data we can collect on all of this, which is very important alongside the audit, governance, and evidence links we've collected. Looking forward from here: auditing the new way is automation. That's a bit of a simplification, but auditing at its very simplest is: document what you do, then prove that you do what you document. By putting an orchestration layer on top of all of our tooling silos, all of the individual tools we're managing, we now have a single pane of glass where we can manage the security around this, because it's important to have a secure pipeline as well as secure software.
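The duration side of the report can be derived from the same collected timestamps. A hypothetical sketch, with assumed task names and target times, flagging any task that overran its goal:

```python
from datetime import datetime

# Start/end timestamps captured by the orchestrator for each task,
# plus an internal target duration in minutes (assumed team goals).
tasks = [
    {"task": "build",            "start": "2021-09-01T09:00",
     "end": "2021-09-01T09:02",  "target_min": 10},
    {"task": "integration-test", "start": "2021-09-01T09:02",
     "end": "2021-09-01T10:02",  "target_min": 30},
    {"task": "manual-approval",  "start": "2021-09-01T10:02",
     "end": "2021-09-01T13:02",  "target_min": 60},
]

def duration_report(tasks):
    report = []
    for t in tasks:
        start = datetime.fromisoformat(t["start"])
        end = datetime.fromisoformat(t["end"])
        minutes = (end - start).total_seconds() / 60
        report.append({
            "task": t["task"],
            "minutes": minutes,
            # e.g. the approver was out to lunch: a three-hour manual step
            "over_target": minutes > t["target_min"],
        })
    return report

for row in duration_report(tasks):
    flag = "  <-- investigate delay" if row["over_target"] else ""
    print(f"{row['task']}: {row['minutes']:.0f} min{flag}")
```

Here the two-minute build passes its target while the manual approval shows the three-hour delay, which is the kind of breakdown that tells you where controls are slowing the pipeline down.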


And we have a centralized mechanism for reaching all of the metadata that's been collected by this uber-orchestrator along the way. So, to summarize the best practices for getting here: get all of your key stakeholders involved. Do a value stream mapping so you know what the overall process pipeline looks like and what all the stages and steps are. Take a holistic approach: software delivery goes across organizations, we know that; it goes across tool chains, we know that; let's take that into account. Obviously, watch closely for vulnerabilities in your software and run all the right scans. You need to understand what you have in production, so that when new vulnerabilities are discovered post-deployment you know what's there; that's very important. And secure your pipeline as well, because it really doesn't do you much good to run a security scan


if any Jack or Jill can go in and change the way you're doing security scans, maybe to skip most of the scan in the first place. Prioritize culture, and don't boil the ocean: two pretty important things. Think about this as a cultural problem as much as a process problem and a tooling problem. Agile and DevOps have, to a great degree, shone a light on culture as something you have to get right for these things to function well. And don't try to boil the ocean: take small steps. Look at where your biggest pain points are, solve those, and move on to the next one, and the next one. So that's a little bit on audit-ready pipelines. Thank you very much for listening and watching my talk today.