Transforming a 40-year-old company from Waterfall to Agile, implementing DevOps, and committing to CI/CD is, to say the least, a journey. Along that journey, baselining, collecting, and analyzing valid metrics is key to identifying bottlenecks in the value stream so adjustments can be made quickly and continuously. In this session, you'll learn how a company increased time spent on innovation, reduced escaped defects, and improved MTTR, and, importantly, how measuring KPIs proved that Agile and DevOps really do provide a business advantage.
David Kennedy, Solutions Architect, Compuware
David Rizzo, Vice President of Product Engineering, Compuware
Good afternoon, everyone. Welcome. Thank you all for, uh, sticking around and coming to this late-day session. We're glad to have you here. I'm David Kennedy, a solutions architect for
Compuware. And I'm David Rizzo, vice president of product engineering for Compuware. We're excited to be able to share with you a little bit about where Compuware has been over the last four or five years. If any of you were here two years ago (actually, it would have been in San Francisco two years ago), there was a session there that I did to talk about our journey, and we're going to give the extension of that: what we've done with DevOps. Today we're specifically going to show you how we measure DevOps, how we use real data to show what we're doing with DevOps and how our engineering organization works. And we're going to, uh, tempt the gods of live demo and show you some live demo as well. So we'll see how that all goes.
We'll keep a good thought. Roll the dice, right? If that goes well, maybe we'll go downstairs. All right. So Compuware is a mainframe software vendor. Who here has a mainframe? Awesome. All right. That's great. So we are a mainframe software vendor. We provide developer productivity tools and operations productivity tools to help developers and people in operations develop on mainframes. We've been doing that for over 45 years, and five years ago we became a private company. A lot of you, hopefully, were in the opening session this morning; our CEO and CFO gave you a little bit about what we've done. We're going to go through it a little bit from the engineering perspective and share with you what we've done and how we got here. Some of it may be a little bit of a repeat, but we'll make sure you know who we are.
So as we think about Compuware and look at where we were, the first thing we did about five years ago was sit back, look at ourselves in the mirror, and identify a problem. The problem we had is that we were a 40-year-old company. At that point, we hadn't delivered a new product in a long time; we had been stagnant. We were just kind of a company floating along, deciding what we wanted to do or trying to find our way. And we identified the problem: we weren't being innovative. We weren't being aggressive, and we made a decision that we needed to change ourselves. So we said, we need to move forward. We need to find a way to get better. So we implemented Agile, we started working on DevOps, and we did that by burning the boats.
So to win, you must burn the boats. And we did that. We transformed our company in a very short period of time. One morning we woke up and said, we're going to implement all of this. Within 60 days, we had it all implemented. And we haven't looked back since, being a full Agile DevOps company. We deliver new software updates to our existing technology, new partnerships, new products, and we've done several acquisitions. We do all of that every 90 days. So the first business day of every quarter, we deliver faithfully to our customers. And looking back at what we did and thinking about it, there's a great quote from Mark Twain: continuous improvement is better than delayed perfection. Especially for those of us in the mainframe market. We want to be perfect. We know we run the world, so to speak; the world depends on us to be perfect. But sometimes you have to make sure that you're moving fast enough to keep up with the world, not worrying about perfection.
Our goal is always to continuously improve. So we work hard to do that every day, and the way we do that is by looking at ourselves, as we did then and as we continue to do today. When we look back five years ago, we said, okay, we went Agile. We started developing faster. We started delivering every 90 days. We brought out new technology; we did all these things. But the reality is, when we ask ourselves, we all say we're doing a great job. How do we know it? How does anyone know how they're doing? They have to have real data, real metrics to look at and say, hey, this is what we're doing, this is how we've progressed, or this is how we haven't progressed: the ups and downs of any journey. You've heard a lot of stories here this week.
You hear a lot of people talking about transformation and moving forward. None of that is without some ups and downs; there are some good days and some bad days. If everything is always going up and to the right, you have to question yourself. You try to get there, but you have to have real numbers. You'll hear a lot of people in different organizations talk about how you do it. Surveys are nice to send out to see what people are thinking, and we send surveys out. We do Gallup surveys; we do different surveys for our organization to see how we think we're doing. And what we've done over the last couple of years is actually create an entire product (and that's our terminology, because we deliver products): our zAdviser product, which measures how we're doing with our DevOps.
And we use it internally. We've created within Compuware value streams for how we deliver our different types of work. We classify our work into four types: the standard types of work, being enhancements, defects or bugs, currency work to keep up with the market, and technical debt. We look at those types of work, and we have value streams for each one of them. And for each one of those value streams, we have KPIs that we've created to answer: how do things move through our value stream? How do we measure all of that? So that's something we've worked on, and we've come up with what we feel are some very good dashboards. We look at them at the end of every quarter; we actually do a review of how we've done. We use the numbers to say, how have we done this quarter, always looking to be continuously better. One thing you won't see in ours right now, and I don't know if you ever will: we don't necessarily show a trend. We do at times, but mostly we look at how we're doing today and how we can be better tomorrow. Continuous improvement.
All right. So the three main KPIs that we really focus on as an organization are innovation, mean time to resolution, and velocity. What you see here is our current state: the current quarter and the trailing seven quarters. What we're looking for from an innovation perspective is consistency, making sure that we're constantly focusing on delivering value to our customers by releasing new enhancements. So everything that's not a bug, we're considering that innovation. But we also focus on those external bugs, those defects that reach the shores of our customers, and we keep a careful balance of those two. Here you see a consistent 70 to 80%: we're always working on new enhancements that are going out to our customers. Mean time to resolution is how many days our customers are waiting from the time they report a defect until we actually deliver the fix back to them.
And you can see that is decreasing. Like David just said, we have good days and we have bad days, but we have a constant measurement that we focus on. From a velocity perspective: how much are we closing, from a work perspective in JIRA, quarter over quarter? And because we have this data, and because we have a consistent model within our zAdviser product, we can start to forecast; you can see those forecasts in the parentheses at the bottom. You can start to see expectations, right? We can use these numbers, these metrics, to determine how much work we can do in the future and use those as business drivers.
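As an aside, a forecast like the one in parentheses can be derived very simply from trailing-quarter velocity. Here is a minimal sketch using a trailing average; the quarterly figures are invented for illustration and this is not zAdviser's actual forecasting model:

```python
# Minimal sketch: forecast next quarter's velocity as the trailing average
# of recent quarters. The figures below are illustrative sample data,
# not Compuware's actual numbers.
def forecast_velocity(quarterly_closed, window=4):
    """Project next quarter as the mean of the last `window` quarters."""
    recent = quarterly_closed[-window:]
    return sum(recent) / len(recent)

# Issues closed per quarter over the trailing five quarters (sample data).
velocity_by_quarter = [410, 455, 430, 470, 445]
print(forecast_velocity(velocity_by_quarter))  # trailing 4-quarter average: 450.0
```

A rolling average is deliberately simple; the point is that a consistent historical measurement is what makes any forecast possible at all.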
So during the conference so far, what you've seen is a lot around value streams and flow metrics. Flow velocity is a key piece to understanding what's going on in our organization from a work perspective, but everyone has to have a consistent definition. If you go to anyone in our organization and ask them what flow velocity is, this is the definition you'll get back. Everyone has a common understanding of how we measure flow through our system: it's the absolute count of each type of work that's been closed in the period of time that we're evaluating. And David's going to talk about how we use this.
So, you know, we look at our flow velocity: how much is going through? We can break this down; we do it at an organizational level and a team level, and we look at what we've accomplished during a time period. So if we look at a quarter or a year, you see a couple of numbers up there: there's a total, and then there are two numbers. One of those, if you can read the fine print, says "requires development." Those are actual issues where we did work and we actually delivered something out to our customer. So it went through our whole value stream: we updated our software and we delivered value to our end user. The other number is the number of issues that didn't actually result in us delivering something out of the value stream, which for us means delivering code changes in our product.
So that is work that is done, potentially looking at what we might want to do as a new enhancement, or potentially just answering some questions for people that are using the products. But it's developer time and work being done that doesn't ultimately deliver what we consider to be value, which is updates to our software. It's not all bad, but it's all things that potentially could be done elsewhere. The goal, when you have your value stream, is to get as much of the capacity within the value stream as you can delivering value out of the other side.
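The flow-velocity definition above (an absolute count of each work type closed in a period, split by whether it required development) could be computed from a JIRA export along these lines. The issue records and field names here are hypothetical, not Compuware's actual schema:

```python
from collections import Counter
from datetime import date

# Hypothetical closed-issue records standing in for a JIRA export;
# field names are illustrative only.
issues = [
    {"type": "enhancement", "closed": date(2020, 4, 10), "requires_dev": True},
    {"type": "defect",      "closed": date(2020, 5, 2),  "requires_dev": True},
    {"type": "enhancement", "closed": date(2020, 5, 20), "requires_dev": False},
    {"type": "currency",    "closed": date(2020, 6, 1),  "requires_dev": True},
]

def flow_velocity(issues, start, end):
    """Absolute count of each work type closed in [start, end], plus the
    split between issues that required development and those that didn't."""
    in_period = [i for i in issues if start <= i["closed"] <= end]
    requires_dev = sum(1 for i in in_period if i["requires_dev"])
    return {
        "total": len(in_period),
        "by_type": dict(Counter(i["type"] for i in in_period)),
        "requires_dev": requires_dev,
        "no_dev": len(in_period) - requires_dev,
    }

print(flow_velocity(issues, date(2020, 4, 1), date(2020, 6, 30)))
```

The same function works at any scope (one team's issues or the whole organization's) simply by changing which records you feed it, which mirrors the team-level and organization-level views described above.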
The next item we're going to talk about is mean time to resolution: the average time elapsed from when a customer reports an external defect until it's gone through the entire engineering cycle and we've released the fix back to them. So if we receive a bug today, it's going to take about 66 days to get that fix back to our customers. But why is this so important?
So why is it so important? MTTR. David mentioned external bugs, and in Compuware terms, an external bug is a bug that an end user of our software has reported to us. The worst thing we can do is have our customers experience an issue. So this is a key metric for us to look at: how long does it take us to get a fix back to our customers, to get them back up and running and not experiencing that same problem?
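The MTTR computation itself is just date arithmetic over external-defect records. This sketch uses invented dates to illustrate the definition given above (report date to release date, averaged):

```python
from datetime import date

# Sample external-defect records; the dates are invented for illustration.
defects = [
    {"reported": date(2020, 1, 5),  "released": date(2020, 3, 1)},   # 56 days
    {"reported": date(2020, 2, 10), "released": date(2020, 4, 20)},  # 70 days
]

def mttr_days(defects):
    """Mean time to resolution: average elapsed days from the customer's
    report until the fix is released back to them."""
    waits = [(d["released"] - d["reported"]).days for d in defects]
    return sum(waits) / len(waits)

print(mttr_days(defects))  # average of 56 and 70 is 63.0
```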
Now it gets a little bit more complex. This is flow time. So when we do decide to work on an item, how long does it take, broken down by each of the stages that the work item flows through? We've broken it down into individual cycle times, and these are the ones that we measure today (we're continuously looking at measuring other segments of work). Backlog cycle time: the amount of time an item sits on the backlog before engineering puts it in progress. Code cycle time: when the developer puts that work item in progress, the hands-on-keyboard time they're actively working on providing value back to the customer. Code review cycle time: once the developer feels they're done, the amount of time it takes for a peer developer to review it and validate everything. And validation cycle time: the amount of time after that stage until the item is actually closed and released to the customer. We roll that up into total development cycle time, the accumulation of all those stages, and then total flow time, the time until a customer can actually download that fix and deploy it to their development or production environments. So, this view: David, why is it so important to you?
This is very important for understanding how things move through our stages and how long it takes us, again, to actually deliver value. One of the numbers up there, in our little iceberg, shows 79 in the middle. That's the number of days something can sit and wait in some stage: when a developer finishes coding and has to wait for another developer to look at it, to do a code review. That's waiting, and the more we reduce that wait time, the quicker we get something out. So we get to look at this and ask: from the day we get a request and start looking at something until we actually deliver it, how long does that take us? In this context, this focuses on our new development, the new things we're delivering. Given that we're on a 90-day cadence, where we deliver every 90 days, we expect these numbers to be somewhere in that neighborhood.
And that's one of the key factors. When we look at all of these numbers, there's some business knowledge that has to come into it. The numbers themselves don't mean a lot unless you know and put them in the context of your business and what you're looking to do. For us, 90 days is about as fast as our customers can consume new technology and updates (sometimes faster than they can consume it), but that's what we've decided our cadence to be. So we look at that and ask: can we meet that cadence? That's what it means to us, seeing how long it takes us to do something and how much we can shrink that time. If we wanted to deliver sooner, what would that mean? Where can we take time out of the system? Where can we increase our value to our customers, ultimately, in the end?
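The stage breakdown described above reduces to timestamp arithmetic. Here is a hedged sketch for one hypothetical work item, with all numbers invented; the stage names follow the talk, not any actual zAdviser schema:

```python
# One hypothetical work item, each stage boundary recorded as
# days since the request arrived (numbers invented for illustration).
item = {
    "created": 0,            # request lands on the backlog
    "in_progress": 30,       # engineering starts work
    "review_requested": 45,  # developer believes coding is done
    "review_done": 52,       # peer code review complete
    "closed": 60,            # validated and closed
    "delivered": 90,         # customer can download it (90-day cadence)
}

def cycle_times(item):
    """Break total flow time into the stage cycle times described above."""
    return {
        "backlog": item["in_progress"] - item["created"],
        "code": item["review_requested"] - item["in_progress"],
        "code_review": item["review_done"] - item["review_requested"],
        "validation": item["closed"] - item["review_done"],
        "total_development": item["closed"] - item["created"],
        "total_flow": item["delivered"] - item["created"],
    }

print(cycle_times(item))
```

Note that the stage cycle times sum to the total development cycle time, and the gap between "closed" and "delivered" is the cadence wait: the same kind of queue time the iceberg number highlights.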
Is this the whole organization, or is it a specific sample?
Uh, this is, for us, an example of the whole organization; this would be at the organizational level.
So the last one here is innovation, and I alluded to it a little bit earlier, but it's the time spent working on non-customer defects. So this is stories, tasks, enhancements, currency, technical debt: everything that isn't an external defect, as David alluded to. And so we really focus on this number, but we can't focus on it too much. If it gets too high, as you can see on the left part of that graph, we focus too much on new work, and then it starts to drift off and we end up spending a lot more time on defects. So we really have to be able to see this in time-series data, develop a consistent measurement, and focus the teams on making sure that we're working on the right stuff. It's a careful balance and relationship that we have, from an engineering perspective, with product management: making sure that we're working on new stuff, but we're also taking care of the customers and what they already have and are already using.
Right. And when we talk about that innovation percentage: innovation is when we are ultimately delivering new value. Unfortunately, all of us who write software write bugs, and we have to fix them. How can we minimize the amount of time we spend fixing bugs and reduce the number of bugs we have? Ultimately, that's the goal. We've done a lot of that, and we'll show a little bit of that too as we go. What can you do with automation? What can you do with automated testing? How can you reduce those bugs? Because the higher you get that innovation percent, the more value you're ultimately delivering to the organization. And, you know, that goes back to what Chris and Joe talked about this morning about customer satisfaction: customers are satisfied when the software works for them and when they're getting new features, new functionality that helps them be more productive.
That's when we get the highest customer satisfaction. So driving that customer satisfaction ties directly to this innovation, and when we look at our numbers, we can look at customer satisfaction. And then there's employee engagement. Developers are happiest when they're writing new code. They love to write code; they love to create new things, and that's when they are fully engaged. No one likes to fix bugs. We all have to, but no one necessarily likes it. So by making it easier to get more quality into the software, by adding automation, you get that innovation percentage up. That helps employee satisfaction, and it also helps customer satisfaction. And as was said this morning, that then leads right into cash flow.
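Under the definition given above (everything that isn't an external defect counts as innovation), the innovation percentage is a one-line ratio. This sketch uses an invented mix of closed work:

```python
def innovation_pct(closed_issues):
    """Percent of closed work that is not an external defect; stories, tasks,
    enhancements, currency, and technical debt all count as innovation."""
    non_defect = sum(1 for i in closed_issues if i["type"] != "external_defect")
    return 100.0 * non_defect / len(closed_issues)

# Invented quarter of closed work: 75 innovation items, 25 external defects.
closed = (
    [{"type": "enhancement"}] * 60
    + [{"type": "currency"}] * 15
    + [{"type": "external_defect"}] * 25
)
print(innovation_pct(closed))  # 75.0, inside the 70-80% band mentioned earlier
```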
All right. So I'm going to switch over to a live demonstration. We really focused on showing each of the individual components: flow velocity, mean time to resolution, flow time, and flow cycles. This is the view that we give all of our development managers and product managers: the ability to look at each of those, but then look at their business specifically, whether by work type, by Agile team, by the product line or line of business they're responsible for, or by the family of products if they're responsible for more than one. So, we have a product called Topaz. It's an Eclipse-based IDE, and it's what a developer that receives our software (a mainframe developer) has access to; it's the entry point into all of our other products. This is the main thing we focus on at Compuware in delivering a great experience to the mainframe developer. And from this line of business and the family of products that surround it, we can see flow velocity, mean time to resolution, and the cycle times, whether it's new work or a reported defect, and how long it takes to go from when it's reported, or when we decide to do that new work, until we deliver it to the customer.
On the right side of this dashboard, we also show flow distribution, which signifies the four types of work. We categorize all of our work into four types, as I said, and this shows what balance you're striking across all four. You have to give time to all four, or something's going to get out of balance. It's not an even weight, but you have to look at it. So this gives a development team, a scrum team, the opportunity to look and ask: how are we balancing our resources? How are we balancing our time? Because we actually use the time they log on their work, you're seeing our actual time spent working on things. That's what we show there, and you get one look at how you're doing.
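Since the talk notes that the distribution is weighted by actual time spent rather than raw issue counts, a sketch of the computation might look like this. The work types and hour figures are invented:

```python
from collections import Counter

# Hours logged per closed issue (invented data). The dashboard described above
# weights flow distribution by actual time spent, not issue counts.
work_log = [
    ("enhancement", 120), ("defect", 40), ("currency", 25), ("tech_debt", 15),
    ("enhancement", 80),  ("defect", 20),
]

def flow_distribution(work_log):
    """Percentage of logged time spent on each of the four work types."""
    totals = Counter()
    for work_type, hours in work_log:
        totals[work_type] += hours
    grand = sum(totals.values())
    return {t: round(100 * h / grand, 1) for t, h in totals.items()}

print(flow_distribution(work_log))
```

A team reviewing this would be checking that no type has drifted to zero, since all four need some share of capacity, even if the weights are never equal.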
All right. So this is a little bit different view into the data model within zAdviser, looking from a value stream perspective. If you decide that you want to focus on a specific piece of the value stream, where do you start? Where do you experiment? Where can you find the bottlenecks within the organization? What we're viewing here is really a social graph of the communication happening within JIRA. What we can quickly highlight and see are the queues in the system, and a queue, by definition, is waste. So we're looking at all the different components connected to this queue, and we're able to identify and start to drill down into where that work is going and the communication channels between each of those nodes. What's unique about this, though, is that what's most important is the people within the organization: the engineers contributing to the work that's happening.
So this is the relationship between a queue, the products that people work on, and how those people interact with those products. You can start to see and identify those bottlenecks. And when it's in this form, if I decide that I really want to focus on the impact of a certain item, I can start to remove items and see the effect on the organization or the products and how work flows through the system, and decide what I can't lose, what's important. So I can start to prioritize bottlenecks once I'm able to identify them. I'm sorry. Okay, yeah, it's a little bit washed out. There are lines connecting all of these nodes, and between each pair of nodes the line actually shows the weight and strength of the relationship. So we can start to see how much is flowing between each of those nodes and the strength of that social relationship.
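To make the queue-and-bottleneck idea concrete, here is a toy sketch of the same analysis over a small invented graph. All node names and edge weights are hypothetical; the real dashboard uses Elastic's graph technology over JIRA data, not code like this:

```python
from collections import defaultdict

# Toy social graph: (node_a, node_b, weight) edges, where weight counts
# JIRA interactions between a person/queue and a product. Invented data.
edges = [
    ("code-review-queue", "Topaz", 90),
    ("architect", "Topaz", 70),
    ("architect", "zAdviser", 40),
    ("product-manager", "zAdviser", 30),
    ("ux-engineer", "Topaz", 10),
]

def node_load(edges):
    """Total edge weight touching each node; heavily loaded nodes (and queues,
    which are waste by definition) are bottleneck candidates."""
    load = defaultdict(int)
    for a, b, w in edges:
        load[a] += w
        load[b] += w
    return dict(load)

def remove_node(edges, node):
    """Simulate pulling a person or queue out: drop every edge touching it."""
    return [(a, b, w) for a, b, w in edges if node not in (a, b)]

loads = node_load(edges)
print(max(loads, key=loads.get))                    # heaviest node in the graph
print(node_load(remove_node(edges, "architect")))   # what remains without them
```

The `remove_node` step is the "what falls apart if I move this person" experiment described a little later in the talk: delete their edges and see which products lose their connections.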
Excuse me, what tool did you use to produce this?
Uh, great question. So the zAdviser product is actually built on the Elastic Stack, and what we're using here is the graph technology that comes native with Elastic. But a lot of the work that we've put into this is really around the data model and how the data flows into that system. Elastic allows us to visualize the decisions in how we built that data model.
So it's custom-built by you?
Uh, there's a group of individuals within Compuware that actively develops this every day. So no, it is not something off the shelf. Yeah.
Um, so one last quick demonstration here. Michael, an architect within our organization and a peer of mine. I want to discover Michael's "blast radius," as I like to call it, in the organization: what products does he have an impact on, and what's the strength of the relationship between him and the products he's helping influence? So if I hit the plus sign here, it's going to pull in all the products that this architect, Michael, has had an influence on, and what you can see by the width of the line here is likely what he has had the most impact on. But you also have to consider: is that potentially a bottleneck? Is too much information flowing between one person and one product?
A neat thing here is that we can also expand and look beyond that product at who the individuals are that Michael is likely working with on it. You can see here myself, the UX engineer, the product managers (who are actually sitting in the crowd here), and the data science team that helps build the data model behind this. So if you want to see how many people are involved, or the impact and the social graph of an application with respect to the value stream, and you've identified an area that is slow and not moving fast enough, you can start to experiment and understand how information and work flow through your organization.
And this is a pretty powerful graph and a pretty powerful capability within the product we have: to be able to look at your organization and see what happens if you move people. As managers and organizations, we're always planning. We're looking to say: I need to create a new product, I need to do a new application, I need extra people over here, and I'm going to take them from there. And a lot of times it's: I have a problem here, let's go put out the fire, let's grab somebody with the capability. Here, you can actually remove a person from a product, or take them out completely, and ask: if I take this person and put them on a special project, what falls apart? What isn't connected? What isn't necessarily being covered? That's a very powerful capability. The first time I saw this, I was very excited to see it, because we know our Brents of the world, to use the Phoenix Project analogy; we know our bottlenecks, and we know some of those key people.
Sometimes we don't know who they are. Sometimes we don't realize that the quiet person sitting in the corner is doing a whole lot of things for different areas. And what happens if you move them? What happens to your organization? How does it stay together, or not stay together? So it's pretty cool what you can do with analytics. And it came up earlier what zAdviser is built on: the Elastic Stack. And you may have missed it: we use JIRA. All of our issues are put in JIRA; we do all of our Agile tracking there, and all of our boards are in JIRA. This data is coming from tools that are out there and just being put together, plus the brains behind getting all those analytics and those key KPIs. When you look at measuring how you're doing, that's what you want to look at.
Alright, next one.
Oh, this is me. Yeah, we were doing pretty good. Um, so, as we see with all the speakers and all the sessions: what's the problem that remains? One of the key things is data, data, and more data. That's how we have been able to gain so much value. From a Compuware perspective, we showed you data from a small period of time, but we've been using JIRA and tracking things for over 10 years. We weren't always tracking the way we track now, but we were using JIRA, and we realized that we could pull all that data in and analyze it in many different ways. So we had the benefit of data, and that's how you get accurate measurements. That's how you can look at whether things are changing over time. Are you getting better, or are you getting worse?
And so having more data is key, as are standard metrics and KPIs to measure DevOps. We hear a lot about metrics; we hear a lot about KPIs. How do you measure them in a standard way, so that as a community we're all measuring the same way? I know there was a session earlier done by Tasktop; they talk about their metrics, and they're very similar to ours. We're getting very close. There's a lot there, but it's key that we get it all together and say: what is important for us to measure, so we can show our CFOs, show our leaders, that we are doing well and it is helping? And again, sharing. It's always about sharing in this community, all of us providing information to each other.
We work with a lot of very large organizations, and many organizations, when they find this data, are concerned about showing it. We showed you live data; that's real Compuware data. We had good quarters and bad quarters, and we shouldn't be afraid to show it, because it gives us perspective: everybody has good times, everybody has some failures, and there are adjustments along the way. We had 30 minutes to do this; I could give you a two-hour presentation on the adjustments and tweaks we made, where we added automated testing, where we changed how developers work, how we realigned the organization, all the things that go into it. That's the sharing amongst all of us that gives us real value and really helps us move forward, as we all look at moving our journeys along, getting DevOps implemented in our organizations, and delivering more value to our businesses. These are the types of things that will help you. So I encourage everyone to share your data, share your information. Don't be afraid of it. You will learn something, as we've done with many of the companies we work with. We've shared our data; they've shared their data. We look at it, we learn things from each other, and we continually get better by looking at what we have. So continue to share. Don't be afraid. It will all help you in the long run to move you along your journey. All right.
So now for, uh, some, uh, exciting giveaways.
So one of the things I talked about was our value streams, and we did a whole value stream exercise. Actually, last week we had Gary Gruver. Some of you may have heard of Gary Gruver? A few people. Okay, good. So we had Gary Gruver come into our organization last week. He spent a couple of days at Compuware, looked at our value streams, and we walked through and presented to him what we present to each other, to get third-party feedback from somebody who could give us some valuable input. He has a recent book, Engineering the Digital Transformation, which just came out not too long ago. It's a great read; it's got a lot of good information in it for all of us who do this kind of thing. It's very informative. And I asked him if I could give a few away, because it was so valuable to me in helping with our value streams.
So the first three people that go to GaryGruver.com, download the book, and use the code RIZZO (that's me; he picked my name for the code) will get a free ebook. Please take advantage of that. And if you're not one of the first three, you can still download it there. I don't get paid for it, but in the essence of sharing, it is a very good book. It provides some insights into value streams and different things, and like I said, we spent two days with him; it was a great couple of days. So that's Gary. And the last thing: we'll take any questions, but we are out of time; we just hit the zero marker. We'll be here afterwards to answer any questions. We have a booth in the trade show; you can stop by and see us. We'll be here through tomorrow. You can always reach out to us on Twitter or email; we're happy to answer any questions and give you more information.