Las Vegas 2018

Measuring DevOps: The Key Metric That Matters

How is your DevOps transformation coming along? How do you measure Agility? Reliability? Efficiency? Quality? Culture? Success?!


Having the right goals, asking the right questions, and learning by doing are paramount to achieving success with DevOps. Specific milestones and shared KPIs play a critical role in guiding your DevOps adoption and lead to continuous improvement—toward realizing true agility, improved quality, and faster time to market throughout your organization.


This session will walk you through a practical framework for implementing measurement and tracking of your DevOps efforts and software delivery performance that will provide you with data you can act on!


Anders Wallgren is Chief Technology Officer of Electric Cloud. Anders brings with him over 25 years of in-depth experience designing and building commercial software. Prior to joining Electric Cloud, Anders held executive positions at Aceva, Archistra, and Impresse. Anders also held management positions at Macromedia (MACR), Common Ground Software, and Verity (VRTY), where he played critical technical leadership roles in delivering award-winning technologies such as Macromedia’s Director 7 and various Shockwave products. Anders holds a B.Sc. from MIT.


Anders Wallgren

CTO, Electric Cloud

Transcript

00:00:05

I'm gonna talk about measuring DevOps: what metrics are, why you want to use them, which ones you want to use, and which ones you don't. A lot of this is really about continuous improvement and a little bit of the scientific method, which hopefully underlies a bunch of the things that we do. But first, an interesting slide. In the last five years, the top five publicly traded companies by market cap have split into tech and non-tech. I suppose you could argue a little bit there, but go with me. Clearly tech has started dominating a little bit there. That's no secret. And we've known for a long time, and we've seen with all the companies that are out there, that software really is the primary driver of disruption and innovation.

00:00:54

Whether you're making cars or dishwashers or rice cookers, software is generally the way that you innovate and distinguish yourself in the eyes of your customers. So to stay competitive, we need to deliver better software, safer and faster. All of us need to do that. And then the question is: do we feel like we can do that? Do you feel like you can release as often as you should, as often as the business wants? Most of us say no, not so much. Which is one of the reasons why I love this conference, because you get to come hear all the people that went from there to there, sometimes back down, sometimes back up. It's those stories that are really telling. So we're dealing with a lot of challenges, right?

00:01:42

We want to get software out on time. We want it to have quality. We want it to provide value for our customers so that they give us money, respect, love, all of those things. We've got manually operated, non-integrated toolchains and lots of silos of automation. Even when we do automation, it's difficult to have repeatability and predictability. And for those of you in regulated industries (finance, healthcare, automotive, aerospace, those kinds of things), the lack of traceability and auditability just makes things even more difficult, or at least uncomfortable. We're not necessarily using our infrastructure to the best of our abilities; we might have low utilization there, which doesn't make the CFO happy, or the CIO shortly thereafter. And we're using a ton of different practices.

00:02:30

Now, that in itself isn't necessarily bad. But when we don't know the practices that each one of us is using, it becomes a little more difficult to share across teams. And that's a really important thing in any sort of Dev*Ops: DevSecOps, DevQAOps, DevTestOps, Dev-whatever-Ops. Shared visibility and transparency are key. So let's fix it with metrics. What is it about metrics that we want to use? Do we want to make ourselves look good, or do we want to make ourselves feel bad? Do we want to make pretty graphs, or just understand where we are? Hopefully more of the latter and less of the former. But obviously there are some pitfalls, which we'll talk about a little bit.

00:03:17

But why metrics? Because it's science, right? It's the scientific method. We make an observation, we have some questions about it, we form a hypothesis, we make a prediction based on that hypothesis, we run a little experiment, we measure the results, and then we go back and do it all again. Lather, rinse, repeat. And really, when you think about it, this is continuous improvement, right? That whole idea that improving the quality of the daily work is more important than the quality of the work itself; that we're always learning. So what does this have to do with DevOps? Well, DevOps is really, I think, applying the scientific method to software innovation. It's visibility, open cultures, using the right tools, doing automation, having humans do the things that they're good at, and running experiments as cheaply as possible.

00:04:13

Now, is that just DevOps? No. I mean, that's Agile, that's CD, that's all of the buzzwords that you can throw in there. But how do you want to do metrics? There was a great paper written, I think probably three years ago now, at the DevOps Enterprise Forum up in Portland, that talks a lot about metrics in general, and I'll throw these out here. It breaks down into three categories. There may be more, but these are the ones that I think are really the interesting ones. The first one is effectiveness: did the thing you built do the thing you intended it to do? Did it provide the value to the customer that they sought?

00:04:52

Are they happy with their purchase? Will they buy more? Then efficiency: did it cost me a billion dollars to release my free product that nobody's ever gonna pay me for? That's not necessarily the most efficient way to do things. And then at the end, but also very, very important, and I know we talk culture all day long around here: culture. Are the teams working well together? Is everything working fine there? It is possible to measure culture, and we'll get into that a little bit. So, oops, can we go back when I double-click there? Thank you. Now, blameless culture is key, right? Because if you're going to go for one of the tenets of DevOps, which is visibility, transparency, and those kinds of things, then you can't have a culture where you shoot the messenger, right?

00:05:45

You just can't. And I have another slide on this a little bit later, so I'm not going to belabor the point too much. But we're going to make things visible, we're going to put things up, we're going to look at them, and we're going to make decisions based on them. It's pretty important to do that in a way that doesn't feel like people are being shamed, and we'll get into more of that as well. Data collection for these kinds of metrics should be automatic and unobtrusive. Why automatic? Because I don't trust people. <laugh> It's as simple as that. I mean, how long did that thing take? Oh, about 12 minutes, you know? And also it's a subjective thing: maybe Frank is doing the collecting Monday through Wednesday, and Diane does it Thursday and Friday.

00:06:30

You know, maybe they don't look at things the same way. So since we're already doing a bunch of this automation around our CI and CD and DevOps and software pipelines and releases, a lot of that data, in fact most of it, is already available in there. Take advantage of it. Use it. The metadata around how long it takes you to do a build, how long it takes you to do a test, how often you have a regression: all of those kinds of things are available as data and don't need to be fed to the beast, and even more importantly, don't need to be massaged on the way into the system. Unobtrusive is equally important. You shouldn't have to spend a week at the end of each cycle just collecting data so that you can decide where you should be spending your time next time.

00:07:18

Hint: not on collecting data. Make it unobtrusive. Just like we want to build quality in, build performance in, and build security in, we should also build in visibility and metrics. So if you have important parts of your pipeline, or important parts of your product or your product lifecycle, have them emit data. You're going to do that when you go into production anyway, right? Start collecting that data before you go into production. Shift left. I know that's an overused phrase, but for a good reason. And then choose metrics that are measurable and objective. If you're going to figure out whether people like your user interface, don't ask "Do you like my user interface?" <laugh> Because only an asshole is gonna say no. <laugh>
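
As an illustration of that "emit data, automatically and unobtrusively" idea, here is a minimal sketch, assuming nothing more than a JSON-lines file as the metrics sink; the sink, field names, and step names are placeholders, not any particular product's API.

```python
# Minimal sketch: wrap a pipeline step so it emits timing metadata automatically.
# The sink here is a JSON-lines file; in practice it would be whatever store your
# CI/CD tooling already feeds (this sink is an assumption, not a product API).
import json
import time
from contextlib import contextmanager
from datetime import datetime, timezone

METRICS_FILE = "pipeline_metrics.jsonl"  # hypothetical sink

@contextmanager
def emit_step_metric(pipeline: str, step: str):
    start = time.monotonic()
    status = "success"
    try:
        yield
    except Exception:
        status = "failure"
        raise
    finally:
        record = {
            "pipeline": pipeline,
            "step": step,
            "status": status,
            "duration_seconds": round(time.monotonic() - start, 3),
            "timestamp": datetime.now(timezone.utc).isoformat(),
        }
        with open(METRICS_FILE, "a") as f:
            f.write(json.dumps(record) + "\n")

# Usage: the step is instrumented once and nobody fills in a spreadsheet afterwards.
with emit_step_metric("web-app", "unit-tests"):
    time.sleep(0.1)  # stand-in for the real build or test command
```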

00:08:10

You're sort of biasing the sample there a little bit. I mean, talk to people and do some surveys. Were they able to accomplish what they came to the website to do? Did it take them about as long as they thought it would, or was it more complicated and more lengthy than they expected? Did they understand the choices that were presented to them, or did they get stuck enough to start over? There are a ton of different ways you can dig into that other than just "Did you like it?" But that's important, right? And that also leads to things like vanity metrics. Avoid vanity metrics. The number of dollars that you've made: you could probably argue that that's slightly a vanity metric, but it's also kind of an important metric.

00:08:54

So I'll make an exception for that. But the key thing about vanity metrics is that they're often not actionable. Billions and billions served: so do we go to trillions and trillions, or to billions and billions and billions served? It's not really clear where we go from there. And a metric that doesn't have an outcome, and doesn't have an action that helps you achieve that outcome, is generally an uninteresting or maybe even a bad metric. Now, vanity metrics have their place: you might want your NOC with the fancy graphs to show your customers, but you could just use playbacks from disaster movies and that kind of stuff; it doesn't have to be real at that point. So don't focus on that. The other thing is: focus on teams, right?

00:09:43

Focus on efforts. Don't focus on individuals. Don't shame. Don't try to use this as a performance metric for reviews and those types of things. Focusing on individuals generally doesn't work very well: the individuals will figure out how to game those things, because as humans we've essentially evolved to be gaming machines from the second we looked in the mirror and realized we were self-aware. We're trying to get one over on the man, and that's going to keep going. So don't shame, because that leads to gaming, and you definitely don't want those kinds of things. Now, <laugh> on the topic of gaming, you want to look out for unintended consequences. And this is one of my favorite Dilbert cartoons ever.

00:10:35

Our goal is to write bug-free software, so we're going to have a bug bounty, and of course, third square down: "I'm gonna go write me a minivan." And again, this is human behavior. It is absolutely, perfectly, 100% rational for that guy to go off and do that. You have incented him to do that. You've put that banana in front of the monkey, and the monkey is going to want that banana. That's just how we are. So you really want to focus on metrics that are objective and relevant to outcomes. Things like thousands of lines of code written? Totally stupid metric. If you can do it in one line of code, by God, do it.

00:11:21

That is not something you want to measure things on. But it can get more subtle too. We fell into this trap ourselves a little bit a few years ago at Electric Cloud, where we started measuring how many support tickets were being closed, because somebody noticed that there were a lot of open support tickets. So, being the efficient, gameable machines that we are, the number of closed support tickets went up and the open rate went down. The problem was, the customers didn't really understand or agree with the fact that we were closing some of those tickets. So there was an unintended consequence: hey, look, our number of closed tickets looks great, but we've got some pissed-off customers because we didn't solve their problem. <laugh> So you have to be careful, right?

00:12:08

So what was the outcome we were looking for? We weren't looking for the outcome of "we want more tickets closed." So what? If we have four times as many tickets closed this month as last month, does that mean that we have four times as many bugs, or four times as many customers? Those metrics in themselves don't necessarily mean anything, much less give you something to act on. So you want to focus on things like the quality of the experience the customer had, their satisfaction: are they coming back? Are you retaining them? What's the cost of retaining customers? All of those things are more important metrics than how many tickets we've closed. And then, and this one's really tough, right?

00:12:54

Signal-to-noise ratio. Focus on a small number of metrics. If you can, pick one at a time, or two, or three, or four, or five. Don't pick 40, because there's going to be too much going on for people to even be able to absorb it. And you have the same sort of danger as you do with monitoring systems. Here I'm talking about metrics, and metrics generally on your software pipelines; monitoring comes perhaps a little bit after that, but you have the same problem, because you have the boy who cried wolf and you have the false positives, and you really have to get to the point where that isn't as much of a problem. And there are many ways to do that.

00:13:42

Generally, it's by focusing on smaller things. Look at fewer metrics. Pick one or two or three for the next week, month, quarter, whatever the right timeframe is; decide what the outcome is that you want to achieve, and how that metric measures it; and then act. Then you can look at the metric and start to get better and better and better. Hopefully that metric rises, and other unintended ones don't rise along with it. But the signal-to-noise ratio is a big thing. And then make sure you're communicating the right thing, and that everybody sees the same thing. I.e., what color is the dress? I don't know if you saw that from a few months or years ago. And this comes back to things like objectivity and all of those kinds of things.

00:14:30

But, you know, same thing again: if we have four times as many bugs reported this quarter as last quarter, do we have four times as many customers, or do we have four times as many bugs in the code? So you have to be careful with metrics, because they often have context that they have to be looked at in. Take something as simple, quote unquote, as mean time to recovery. What does that really mean? Does that include time to discovery? How long was it happening before we, or someone, noticed it was happening? And oh, by the way, who did notice that it was happening? Was it a customer? Was it monitoring? Was it ops? Was it dev? Was it the CEO's nephew? Even something as simple as an MTTR comes down to detection time, mitigation time, and figuring-out-why-it-happened-and-making-sure-it-doesn't-happen-again time.

00:15:23

So all of those things are important to think about, because you may have an incident where your MTTR, measured from when we started applying the mitigation to when the mitigation was applied, was five minutes. That's wonderful, right? So you might look at that and say, well, our MTTR was five minutes. Well, bullshit: if that bug was in there for a month, your MTTR was a month and five minutes. And if you then don't put in place processes, or tests, or what have you, to prevent that same problem from happening again, then you're just adding to that. So again, definitions are important, words are important. Unfortunately, that's just how it is. Metrics that identify patterns that predict impending success or doom are pretty useful.
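
To make that MTTR point concrete, here is a tiny worked sketch with hypothetical timestamps; the only point is that the number changes completely depending on which clock you start.

```python
# Sketch: "MTTR" depends entirely on which timestamps you include.
# All timestamps are made up, purely to show the arithmetic.
from datetime import datetime

introduced         = datetime(2018, 9, 1, 9, 30)   # bug shipped to production
detected           = datetime(2018, 10, 1, 9, 0)   # someone (hopefully not a customer) noticed
mitigation_started = datetime(2018, 10, 1, 9, 30)
mitigated          = datetime(2018, 10, 1, 9, 35)  # the "five minute" fix

print(f"Mitigation-only MTTR:     {mitigated - mitigation_started}")  # 5 minutes, looks great
print(f"Detection-inclusive MTTR: {mitigated - detected}")            # 35 minutes
print(f"Exposure-inclusive MTTR:  {mitigated - introduced}")          # a month and five minutes
```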

00:16:15

This is the canary. I don't know if that's actually a canary, but it's yellow. It's not in a coal mine, but it ought to be, because that's the example: the canary in the coal mine. The sacrificial lamb, so to speak, or the sacrificial canary, I guess. But these are metrics that can tell you things are about to get wonky, and that could be all over the place. If it's a backend transactional system and my transaction commit times are slowly creeping up every day this week, what's going on? Are we losing I/O capacity? Do we have more business? Again, that metric doesn't necessarily mean it's a bad thing. It might just mean we're being more successful and we need to plan, add some capacity, and take care of that.
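
A canary like that commit-time example can be as simple as watching the trend rather than any single day's value. A rough sketch, with made-up numbers and an arbitrary threshold:

```python
# Sketch: a "canary" check on daily transaction commit times.
# The data and the 2.0 ms/day threshold are invented; the point is to alert
# on the trend, not on one bad day.
def slope(values):
    """Least-squares slope of values against their index (drift per day)."""
    n = len(values)
    mean_x = (n - 1) / 2
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(values))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

commit_times_ms = [41, 42, 44, 47, 51, 56, 62]  # hypothetical p95 per day, last 7 days

drift = slope(commit_times_ms)
if drift > 2.0:
    print(f"Commit times creeping up ~{drift:.1f} ms/day; plan capacity now")
```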

00:17:01

So look not just for impending doom, but also for impending success, because your impending success, or your happening success, may be your doom if you don't have these kinds of canaries: if you don't have a way to notice that you're starting to hit the limits of what you've specced out for your application, whether that's the architecture of the app itself or the deployment architecture. And these things are going to evolve over time. This is why I think it's okay, and good, to choose a small number of metrics. Pick a pain point. Pick something that you don't like. Pick something that drives you nuts. Pick something that takes too long. Pick something that fails too often. Just pick something, and then figure out a way to measure it and drive a better outcome for that thing.

00:17:51

And then move on to the next thing. You don't need 400 metrics; less is more, for sure, in this kind of thing. And then expect them to change. You might start by looking at, say, "our database transaction times are all over the place, we need to get a handle on that," so for the next month you're going to measure that, because it's been really wonky and you need to make it better. And after that month, once you've applied a bunch of fixes and that line is nice and steady and not growing, you can take it off the front page of whatever display or device or mechanism you're using to radiate that information, and just make it an alarm. Now it doesn't get into everybody's faces unless it starts climbing again.

00:18:33

Then you put something else on the front page, something else that is now the one big hairy audacious goal that we want to solve and get better at. So expect the metrics to change over time. Don't come back six years later, look at the TV screen in the organization, and realize, oh my God, we're still looking at the same numbers, because that probably means you're not even paying attention to that screen, or you just suck at improving your numbers. <laugh> I don't know which. So expect them to evolve and change. Now let's talk a little bit about which metrics. Every coder knows the number of WTFs, or pardon my French, what-the-fucks per line of code, is the only true metric that matters, right?

00:19:21

<laugh> So I'm going to break it down into four buckets: business value, customer value, team culture, and pipeline efficiency examples. You can slice and dice these things in different ways; these are not the only buckets, and the things in the buckets could probably be in other buckets as well, and so on. So there's definitely some fluidity here. But if you're thinking about business value metrics: customer acquisition cost, meaning what does it cost me to get a customer in the door? What kind of revenue am I getting from that customer? What sort of market share do we have? What does it cost to keep a customer once we have them? Those are business value metrics, and they're actionable, right?

00:20:05

If we're paying more to acquire customers than we're bringing in as new revenue, I'm pretty sure that's not going to lead to a good outcome, so we should probably change one of those two numbers and improve it. If our market share is shrinking, why is that? We've got to go figure that out. Similarly, if it's growing and we don't know why, we might want to figure that out too. Then customer value metrics. Customer satisfaction: a pretty big, important one, a little bit nebulous, but, as Justice Potter Stewart put it, you know it when you see it. Satisfied customers. Feature lead time: the time from when I code or design a feature to when it's available for my customers. That's a really important metric, right?

00:20:52

Because that feature could be a bug fix for a very important customer, or it could be a vulnerability that you're patching. So you definitely care about things like lead times. Features delivered: you could do this in points or t-shirt sizes or numbers of features; the units are probably less important than the fact that they're somewhat uniform and objective, which is difficult to do in this case, I realize. And then things like release frequency: how often do we release? Are we monthly, weekly, daily, on demand? Quarterly, yearly? It depends. I'm not sure I want daily firmware releases to the dive computer that I wear on my wrist. I know a couple of bugs I want fixed, but I'm not sure I want it downloading them every night. Don't fix what isn't broken there, for sure.
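
For lead time and release frequency the arithmetic is simple once the timestamps are captured; here is a sketch using hypothetical dates pulled from an issue tracker and a deployment history.

```python
# Sketch: feature lead time and release frequency from hypothetical timestamps.
from datetime import datetime
from statistics import median

features = [
    # (work started, available to customers)
    (datetime(2018, 8, 1), datetime(2018, 8, 20)),
    (datetime(2018, 8, 5), datetime(2018, 9, 2)),
    (datetime(2018, 8, 15), datetime(2018, 8, 30)),
]
lead_times_days = [(done - started).days for started, done in features]
print(f"Median feature lead time: {median(lead_times_days)} days")

releases = [datetime(2018, 8, 20), datetime(2018, 8, 30), datetime(2018, 9, 2)]
span_days = (max(releases) - min(releases)).days or 1  # avoid dividing by zero
print(f"Release frequency: {len(releases) / span_days:.2f} releases per day")
```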

00:21:45

But release frequency is definitely something that a lot of us care about, and will continue to care about. And then team culture metrics: employee satisfaction. If you do a net promoter survey of the employees in the company, anonymously, how many of them would recommend the company to a friend? What's your retention like compared to the industry average or the geographical average? Those kinds of things. Are our teams collaborating with each other, or are we still siloed? Are we still in ticket hell, where to get anything done I have to submit a ticket, and then I have to wait until that person gets back from lunch so they can say they've accepted the ticket, and then they go off and do the work, and eight hours later I get my VM? Those sorts of things.
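
Scoring that kind of anonymous employee net promoter survey is a small calculation if you follow the standard NPS convention (9-10 promoters, 0-6 detractors); the responses below are invented.

```python
# Sketch: employee NPS (eNPS) from hypothetical anonymous survey scores (0-10).
def enps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100 * (promoters - detractors) / len(scores)

responses = [10, 9, 8, 7, 9, 6, 10, 3, 8, 9]  # "Would you recommend working here?"
print(f"eNPS: {enps(responses):+.0f}")
```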

00:22:32

Or are we collaborating across teams to say, hey, look, I can get you that VM in five minutes, I just have to have all the right approvals? Well, what if we set up a self-service catalog with all the pre-approved things in it, and all the security layered on top of it, so that only people who are allowed to do it can do it? Then you get your environment in five minutes instead of five hours, or five days, or in some cases still six weeks, for a VM. Mind you, we're not racking hardware here. So working across teams to figure out how we can deliver things more quickly, while at the same time doing it with governance and all the auditability and things that we want, is really important.

00:23:14

Because we all want to feel like we're solving the problem. We don't want to be the team that always says no, or the team that always gets yelled at for being slow, those kinds of things. Education and growth is really key: are we investing in employees in terms of having them learn new skills, or relearn old ones, or unlearn bad ones? That might be a good one to add there too. And then one slide, like I promised, on team culture metrics, with a reference to Ron Westrum, who did some research on the nature of organizational cultures and classified them into three types: the punitive culture, where the bearer of bad news is executed.

00:24:03

The bureaucratic culture, where, like Robert De Niro in Brazil, the bearer of bad news gets covered in paper until they disappear. Or the generative, learning culture, where the bearer of bad news is supported, and we start an inquiry and figure out what happened and why, and whether we should do something so it doesn't happen again. So some really great questions to ask, in terms of what sort of culture you're in, are: on my team, information is actively sought; failures are learning opportunities; messengers are not punished; responsibilities are shared; cross-functional collaboration is encouraged and rewarded. And I've seen organizations where cross-functional collaboration is not only not rewarded and encouraged, it's forbidden and punished. I don't mean corporal punishment, but no-career-path punishment, right?

00:24:59

So it might sound weird to be reading these off, but there are places where it doesn't work this way. Failures cause inquiry, not finger-pointing, and new ideas are welcomed and implemented if they work, if the data supports them. The more of these you can say yes to, the better, obviously, in terms of the kind of culture you're working in. Five minutes left, so I'm going to do a quick deep dive on metrics. This is a very simple, very linear pipeline I've laid out here, with some examples of what customers and people out in the industry are doing and the kinds of things they're looking at. In the dev/CI phase: things like development lead time, rework required by defects, build breakages, downtime. Basically, time not on task is something that's important to look at for your development leads.

00:25:48

We did a survey a number of years ago (not a scientific survey, but a few hundred people) and found that the average amount of time a developer spent waiting every week for things like builds and test results was 12 hours, and the average amount of time a QA person spent waiting for those same things was 20 hours a week, which is kind of scary. Hopefully they're doing other productive things during that time and not just Facebooking, unless they work at Facebook, in which case I guess that's okay. <laugh> Idle time is important, as are work in progress and technical debt: have we built a bunch of features that we haven't tested? Those kinds of things. What is the cycle time on the QA side of things? Again, idle time: are we sitting around waiting for stuff, or are we actually working?
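
Waiting time like that can be totaled straight from CI job records rather than asking people to estimate it; a sketch with made-up numbers in the spirit of that survey:

```python
# Sketch: weekly time spent blocked on builds and test results, per person,
# from hypothetical CI job records (minutes each person waited for a result).
from collections import defaultdict

jobs = [
    ("dev-a", 25), ("dev-a", 40), ("dev-b", 90),
    ("dev-a", 15), ("dev-b", 30), ("qa-a", 240),
]

waited = defaultdict(int)
for person, minutes in jobs:
    waited[person] += minutes

for person, minutes in sorted(waited.items()):
    print(f"{person}: {minutes / 60:.1f} hours waiting this week")
```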

00:26:31

Are we on task? How many defects were discovered, how many escaped, and what was the impact of those defects? Now, that starts to get really close to metrics that are a little bit scary, but hey, defects are not something we want, so we're going to find a way to look at them. And really the question isn't so much what was discovered, because that's kind of your job, but what escaped and why. And when something escapes, we don't want to figure out that it was Joe who didn't do that testing, he's an idiot, let's fire him. We want to figure out why there isn't somebody who backs up Joe, or why we don't just automate the whole damn thing. Those are the kinds of approaches you want to take there. Mean time to discovery is obviously an important one in terms of testing and QA.
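
The escape rate itself is just a ratio once you count defects found before and after a release; hypothetical counts:

```python
# Sketch: defect escape rate for one release, from hypothetical counts.
found_before_release = 48   # caught by QA and automated tests
escaped_to_customers = 6    # reported from the field after release

escape_rate = escaped_to_customers / (found_before_release + escaped_to_customers)
print(f"Defect escape rate: {escape_rate:.1%}")
```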

00:27:17

And if I can get this thing to go one forward... there we go. Oh, back one, please. Thank you. Then we're thinking about things like deployments, and deployments can be not just to production: they can be to test and QA environments, and even for developers. But again, what's the lead time? From the time that I decide I need these bits deployed on this type of system, or even this specific system, how long until that happens? How often do we deploy? What's the duration of our deployments? Do they take us five minutes, or are they 18-hour marathons where you don't get to go to the bathroom? What's the change success rate, or the flip side of that, the change failure rate? How often do we have to roll back or roll forward, and how long does that take us?
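
Deployment frequency, duration, and change failure rate fall straight out of deployment records once they're captured; a sketch with invented records:

```python
# Sketch: deployment metrics from hypothetical deployment records.
from datetime import datetime
from statistics import mean

deployments = [
    # (started, finished, succeeded)
    (datetime(2018, 9, 3, 10, 0), datetime(2018, 9, 3, 10, 6), True),
    (datetime(2018, 9, 5, 14, 0), datetime(2018, 9, 5, 14, 9), False),  # rolled back
    (datetime(2018, 9, 7, 9, 0),  datetime(2018, 9, 7, 9, 5),  True),
]

durations_min = [(end - start).total_seconds() / 60 for start, end, _ in deployments]
failures = sum(1 for _, _, ok in deployments if not ok)

print(f"Deployments this week: {len(deployments)}")
print(f"Mean deployment duration: {mean(durations_min):.1f} minutes")
print(f"Change failure rate: {failures / len(deployments):.0%}")
```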

00:28:00

And then there's that whole MTTR thing that I talked about a little bit earlier. On the release side, this is a little bit more about efficiency: release frequency, how much of this stuff is automated, what's the time and cost per release, and how predictable are they? Do we hit our targets, whether they be quality targets or feature targets or time targets? And of course you can hit all of them all the time, right, because we always get all three. That was sarcasm, in case anybody didn't catch it. And then operate: again, mean time to recovery, cost and frequency of outages. A culture thing: how often am I on call after business hours? How often do I have to leave my wife and child at the baseball game and hop into the data center to fix things?

00:28:48

That impacts things pretty heavily. And then of course performance, utilization, all of those kinds of things. So what's next? Last slide here, just a tiny little bit of a commercial: if you want to play around with this kind of stuff in ElectricFlow, you can go download the Community Edition. It pulls in all kinds of analytics from any and all of the systems that you connect it to, and from all of the processes, pipelines, kanbans, and releases that you run through it. We have 42 seconds for questions, so I timed that perfectly. <laugh> I'll leave you with that slide up there, which has some resources, but I'll hang around here if there are some questions afterwards.