DevOps & OKRs: From Micromanagement Misery to Finding Flow

Organizations deploying DevOps share the goal of increasing continuous improvement and the flow of value to the customer. However, dated methods of measuring delivery that are not suited to the age of software continue to enforce old behaviors. This creates a problematic mismatch, with DevOps teams constantly fighting against business metrics that do not support fast feedback and learning. The solution to this measurement dysfunction comes from the combination of OKRs and Flow Metrics. Focusing all stakeholders on visualizing and measuring the flow of value, and giving value streams autonomy over how to optimize flow, creates the conditions for accelerating transformation while empowering teams.


In this talk, I will share the past two years of lessons learned from large-scale transformations that leveraged OKRs and Flow Metrics to create an end-to-end, organization-wide feedback loop, as well as pitfalls that snapped some organizations back into waterfall ways of measuring delivery.


Dr. Mik Kersten

Founder and CEO, Tasktop

Transcript

00:00:13

Hello everyone. My name is Mik Kersten. I'm the founder and CEO of Tasktop and the author of Project to Product, and I'm just thrilled to be part of this amazing learning community and to be sharing with you some of the main things I've learned since the April-May timeframe, when I last gave a talk on connecting DevOps metrics around flow to measuring objectives and key results, or OKRs. So I hope to share with you my latest learnings on how we go from micromanagement misery to finding flow for our organizations. I have to say this has been such a hot topic in many of the executive discussions I've been having, as OKRs become a more and more important topic. At the same time, over the past year I've been seeing a lot of failure modes around OKRs, and a lot of organizations struggling to adopt them effectively and struggling to shift away from the ways we've become accustomed to: tracking activities instead of outcomes.

00:01:05

So the goal here is, of course, to show you how you can actually use OKRs to help you move from project to product, from tracking outputs to tracking outcomes, and to help your organization move faster in becoming a digital and product-oriented innovator. Now, for a lot of people, OKRs probably started with reading books like John Doerr's Measure What Matters, which in 2018 defined what objectives and key results are: objectives, which are meant to be both qualitative and inspirational, give us a way to point the direction around driving innovation and driving value to our customers and to the market. And then the key thing is having only two to five KRs, or key results, which in the end are metrics of progress, how we track how we're doing. An interesting coincidence is that this week marks my 10th year of setting OKRs for Tasktop.
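To make that structure concrete, here is a minimal sketch in Python of an objective carrying two to five key results. The class and field names are my own illustration, not the schema of any OKR tool, and the numbers simply echo examples that come up later in this talk.

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    """A measurable outcome, not an activity: a metric with a baseline and a target."""
    metric: str       # e.g. "feature flow time (days)"
    start: float      # baseline at the start of the cycle
    target: float     # aspirational target for the end of the cycle
    current: float    # latest measurement

    def progress(self) -> float:
        """Fraction of the way from baseline to target, clamped to [0, 1]."""
        span = self.target - self.start
        if span == 0:
            return 1.0
        return max(0.0, min(1.0, (self.current - self.start) / span))

@dataclass
class Objective:
    """Qualitative, inspirational direction, scored only through its key results."""
    statement: str
    key_results: list

    def __post_init__(self):
        # The guidance above: only two to five key results per objective.
        if not 2 <= len(self.key_results) <= 5:
            raise ValueError("an objective should have two to five key results")

    def score(self) -> float:
        return sum(kr.progress() for kr in self.key_results) / len(self.key_results)

# Illustrative numbers only; they echo figures from examples later in the talk.
okr = Objective(
    statement="Make it dramatically easier to deliver value to our customers",
    key_results=[
        KeyResult("feature flow time (days)", start=55, target=38, current=47),
        KeyResult("net promoter score", start=31, target=60, current=40),
    ],
)
print(f"objective score: {okr.score():.0%}")  # roughly 39% of the way to the targets
```

The point of the sketch is simply that each key result is a movement in a metric, not a list of activities or dates.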

00:01:51

So we're finalizing ours this particular week, and it's been a learning journey of over a decade to understand how we get better and better at it. We've made some significant changes in this particular cycle, and we're also learning how OKRs get adopted at massive scale, in organizations that have tens of thousands of people relying on alignment around OKRs. So the key thing about OKRs is, again, that what we want to track is outcomes, not activities. And of course we know that projects specify activities, rather than specifying the important outcomes we're delivering to the business, to the customer, to the market, to our partners. This is where the Flow Framework came in. The Flow Framework originated out of my own work within our own organization, understanding how to connect delivery metrics to actual business results, to key results and outcomes.

00:02:37

What the Flow Framework basically says is that we need to understand, first of all, what we're measuring those outcomes for: everyone working on a particular value stream needs to understand what outcomes they're driving and who the customer of those outcomes is. Now, the business key results tend to be well understood already; these are around improving things like net promoter scores, retention, or financial metrics. But the key thing I realized was missing was a way of actually tracking how we get there. How do we know whether we're improving? How do we know whether the DevOps initiatives we have underway are actually making it easier for our teams to deliver those kinds of business outcomes to the customer? So the whole idea of these flow metrics is to have a set of flow key results that tell us whether we're improving, or whether we've got impediments and bottlenecks that are keeping us from improving how easily we're able to deliver those kinds of outcomes.

00:03:27

And to understand where we need to invest: are we lacking some DevOps automation? Are we lacking some important platform components? So the idea, of course, is that we track both these business key results and the flow key results. Really, what I've seen is that, when used effectively, OKRs can help catalyze this shift from project to product for you. The challenge, and I'll speak to some of the pitfalls shortly, is when we snap back into old behaviors while renaming them with OKRs. So here are a few of the bigger pitfall examples I've encountered, of how things can actually go quite wrong with OKRs. One of the main ones I've seen is when we basically bring back the same behaviors. The whole idea around OKRs is to make sure we're empowering teams to set their own targets and to understand the outcomes they own, with those aligned to the business outcomes.

00:04:14

However, OKRs will often get used as a way of micromanaging teams, for example to just accelerate this one feature, and we get to the point where OKRs are dates for things that should actually be tracked on release plans and roadmaps. So we're moving away from how we should be doing agile planning and how we should be tracking those activities, and snapping back into effectively waterfall ways of planning. This is one of the challenges we see: when OKR cascades become a whole bunch of projects and activities that tell teams what to do, we're completely missing the boat. That's exactly what we should be using OKRs, aligned to agile, as a way to steer away from. Another really interesting pitfall that I've noticed across the board is when the only key results being tracked are the business and financial metrics,

00:05:00

in other words, metrics basically about costs. And of course, if we're only ever tracking costs, we again fall back into that cost-center trap, rather than actually tracking innovation and tracking how investments are driving business outcomes and outcomes for customers. So we somehow need to complement those financial metrics, as you'll see in this talk. Another, even more interesting one that I've been seeing as a common pitfall is to take team-level telemetry and metrics, such as uptime, or deploys per day, or user story point velocity, those sorts of things, and to say that those should be organizational key results. Of course, when you do that, you're taking a team's telemetry, which is very important telemetry for that team; we should be measuring each of the things I just mentioned, in addition to all the other team-level metrics, the technical metrics, the service uptime and stability metrics, and the DevOps metrics.

00:05:50

But if you're setting a goal for the entire organization around one of those metrics, you fall into the trap of local optimization of the value stream, which we've all seen: we're basically measuring the value stream with a two-inch ruler. John Willis said this beautifully, that we're measuring a twelve-inch value stream with a two-inch ruler. We need both sets of metrics, and we need to understand what the interplay between them is. Then another very interesting pitfall that I've been seeing even more is when OKRs are introduced into feedback cycles that are much too slow. By design, of course, OKRs came from Silicon Valley, around product-oriented innovation rather than project management as the way we operate and drive digital innovation. When we've got cycles that are now taking 120 days, let's say, to deliver value and provide feedback, you can't even adopt OKRs. This is something that the OKR guru Felipe Castro and I realized in a webinar, and then in a podcast that we did recently: when organizations can't even deliver features to customers within 90 days and you're deploying OKRs, your feedback cycle is just far too slow.

00:06:56

So you can be in a situation where the flow within the organization today doesn't actually support the deployment of OKRs, which is of course something we want to move away from, because OKRs can be such an effective vehicle. The main thing I want to get across to you in this talk today is that in each of these cases, we don't have the three ways of DevOps in place: the flow, feedback, and continual learning needed to adopt OKRs, which in the end are a learning vehicle for the organization. They speed up our decision-making, they help us pivot and make adjustments faster, and they help us experiment and test hypotheses faster. So on the ground, how do we move away from that? I think the fundamental thing here is to actually understand flow, and to understand that when we have key constraints that are not measured, where we can't improve them, we fall into these traps that make it too difficult to adopt this fast-paced, agile planning and goal-setting methodology.

00:07:49

So right now I'll relate some of those experiences to you with some data, and share some of the stories of what we've seen over the past year from organizations who have adopted this shift to value stream thinking. They're starting to shift from project to product, but their starting points are often challenging. The majority of this data set, which we collected through Tasktop, is actually from organizations who are adopting OKRs or, in some cases, very similar methodologies. Here are some of the examples. The common fact we're seeing across this substantial data set is that only 80% of what's planned by agile teams gets delivered. As you can imagine, you can have the best set of OKRs, but if that's the ratio of what's being planned to what's being delivered,

00:08:36

there's just a very big mismatch between the capacity of the organization to deliver on the OKRs and the OKRs that are being set. Somehow these things have become very disconnected in a lot of organizations. Here's another one, and this was a very striking one to me when I saw how significant it was across the customer base: 20% of features are canceled after code has been written. So what's happening is you've got these plans and goals coming in from the business, driven by customers, driven by demand and the like, and scope is changing so frequently that 20% of the capacity that's been started is being thrown away, making the flow load and work-in-progress (WIP) challenges even bigger. So again, there's a really big disconnect between how planning is being done and how we're delivering, how we're actually tracking those activities and able to deliver on those plans.

00:09:29

35% of the product value streams that we're seeing have no capacity for new work. So again, there's a really big disconnect between the capacity of value streams and the way those value streams are being planned, because of course there's an assumption that there is capacity for new work; otherwise, why are we planning those kinds of outcomes? 85% of products under-invest in security and debt. This is another factor that just complicates the deployment of things like OKRs: there hasn't been the investment in the improvement of work needed to actually take on the new work that's being planned. And finally, 95% of value streams don't know what their actual flow efficiency is, and in many cases don't know what their capacity is, and that capacity is not being communicated to the business and to the planning cycle. So this is just some of the ground truth that we're seeing out there.

00:10:16

And of course the whole goal is: how do we improve on these numbers and get to the point where we've got very good alignment between the work that's being planned and the work that's being delivered, and, most importantly, that feedback loop? Really, that's the goal of approaches like OKRs: to establish an effective and fast feedback loop between business planning and strategy, understanding and measuring what we're delivering to the customer base, and what we're delivering through our daily work. The thing that strikes me the most is how often measuring flow has not been part of how we plan and how we work. In most of my presentations to leadership teams, I've actually gone back to this tried and true quote from Gene Kim and The Phoenix Project: improving daily work is even more important than doing daily work.

00:11:08

And really, if I could pull out and state one challenge around the way OKRs are being deployed today, it's that they ignore the improvement of daily work. We're asking our teams to do so much, to change so much; their backlogs tend to be so large. And if we don't add capacity for the improvement of daily work, we will just overload them even more and get even less out, as you saw in some of those statistics. So really the question becomes: how can we use OKRs to drive the improvement of daily work? And to really do that, because OKRs are meant to speak in terms of outcomes, those key results that we are delivering to the customer and to the business, we need to connect improvements in daily work to those business outcomes.

00:11:46

I'll share with you just a few short examples of how I've seen this work successfully in organizations. But first, we do have to get on the same page about how to measure improvement and how to track flow across the technology and business stakeholders, in the way that we measure and the way that we operate. Fundamentally, to improve flow, we need a way of measuring flow that we can all agree on. Here's a really quick recap of the Flow Framework for those who haven't seen it; you can see more at flowframework.org. The question, of course, is how do we measure what flows in software delivery so that we know if we're improving it, we know if we're making the daily work of our teams better, and we know if we're removing burden from them versus adding burden to them.

00:12:24

And again, the whole goal as we're doing that, as we're creating these new approaches like deploying OKRs, is that this supports the improvement of daily work, not just putting more on the teams. So very quickly: what flows in software delivery, according to the Flow Framework, is features, defects, risks, and debts. All work, all issue types in your agile tool, map into one of these, and these things are a zero-sum game; they're mutually exclusive and collectively exhaustive. That means if a value stream is able to take on more feature work, chances are it's able to drive more outcomes. However, if there's too much technical debt, that will take away from the capacity of the value stream. Or if the defect rates go too high, if there are too many incidents because of a lack of investment in stable platforms, that will of course also take away from the capacity of the value stream.
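As a small sketch of that zero-sum game, assuming a hypothetical export of completed work items from an agile tool, each issue type maps to exactly one flow item, and the resulting flow distribution shows how the value stream's finite capacity was actually spent. The issue type names and mapping below are illustrative, not any specific tool's taxonomy.

```python
from collections import Counter

# The four flow items of the Flow Framework; every unit of work maps to exactly one
# (mutually exclusive, collectively exhaustive).
FLOW_ITEMS = ("feature", "defect", "risk", "debt")

# Hypothetical mapping from issue types in an agile tool to flow items.
ISSUE_TYPE_TO_FLOW_ITEM = {
    "user story": "feature",
    "bug": "defect",
    "incident": "defect",
    "security finding": "risk",
    "compliance task": "risk",
    "refactoring": "debt",
    "infrastructure upgrade": "debt",
}

def flow_distribution(completed_items):
    """Share of completed work that went to each flow item over a period.

    Because the distribution sums to 100%, more of one flow item always
    means less capacity for the others: the zero-sum game made visible.
    """
    counts = Counter(ISSUE_TYPE_TO_FLOW_ITEM[item["type"]] for item in completed_items)
    total = sum(counts.values()) or 1
    return {flow_item: counts.get(flow_item, 0) / total for flow_item in FLOW_ITEMS}

# Example: a value stream whose capacity is dominated by defects and debt.
completed = [
    {"type": "user story"}, {"type": "user story"},
    {"type": "bug"}, {"type": "bug"}, {"type": "incident"},
    {"type": "security finding"},
    {"type": "refactoring"}, {"type": "infrastructure upgrade"},
]
print(flow_distribution(completed))
# {'feature': 0.25, 'defect': 0.375, 'risk': 0.125, 'debt': 0.25}
```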

00:13:08

So the point of the flow distribution is basically to expose and make visible the zero-sum game that we have on every value stream, and then to allow us to create organizational objectives around improving that, in addition to objectives around revenue, costs, and customer outcomes. Here's a very quick example of how we can connect those things. First, we need to make sure we can measure these metrics. Just as you've got those four flow items, there are four flow metrics plus the flow distribution. There's flow velocity, basically how much we're able to deliver across a period of time, but the key thing is that it's from the business's and the customer's point of view. This is end to end, not how long it takes my team to do its piece, but all the way from when the work entered the value stream to when it's done. There's flow efficiency: where are the bottlenecks, how much of the time is spent waiting? There's flow time: how long did it take to deliver, end to end, from when the work was committed to be delivered to when we had running software. And there's flow load; that's just the metric of how much work is in progress.
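Here is a minimal sketch of how those four flow metrics could be computed for a reporting period, assuming a hypothetical work-item record that captures when work entered the value stream, when it was delivered as running software, and how much of that elapsed time was active work rather than waiting. It is an illustration of the definitions above, not a specific product's calculation.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class WorkItem:
    flow_item: str             # "feature", "defect", "risk", or "debt"
    entered: date              # when the work entered the value stream, not when a team picked it up
    completed: Optional[date]  # when it was delivered as running software; None if still in flight
    active_days: float = 0.0   # days actively worked, as opposed to waiting in a queue

def flow_metrics(items, period_start: date, period_end: date) -> dict:
    done = [i for i in items if i.completed and period_start <= i.completed <= period_end]
    in_flight = [i for i in items
                 if i.entered <= period_end and (i.completed is None or i.completed > period_end)]

    flow_times = [(i.completed - i.entered).days for i in done]
    total_flow_time = sum(flow_times) or 1

    return {
        # Flow velocity: how many flow items were completed, end to end, in the period.
        "flow_velocity": len(done),
        # Flow time: average days from entering the value stream to running software.
        "flow_time_avg_days": sum(flow_times) / len(flow_times) if flow_times else 0.0,
        # Flow efficiency: active time as a share of elapsed flow time; low values mean wait states.
        "flow_efficiency": sum(i.active_days for i in done) / total_flow_time,
        # Flow load: work in progress (WIP) still in the value stream at the end of the period.
        "flow_load": len(in_flight),
    }

# Example: two delivered features and one defect still in flight.
items = [
    WorkItem("feature", date(2021, 1, 4), date(2021, 1, 29), active_days=5),
    WorkItem("feature", date(2021, 1, 11), date(2021, 2, 12), active_days=8),
    WorkItem("defect", date(2021, 2, 1), None, active_days=2),
]
print(flow_metrics(items, date(2021, 1, 1), date(2021, 2, 28)))
```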

00:13:49

So as an example, we've got a visual of a value stream over here. What we see is a value stream that does have a bottleneck; it looks fairly clogged, and some very badly structured OKRs clogged it. When OKRs are basically about "get me this next feature" or "where's my feature," what happens is too many features get put into this particular value stream, this set of teams that's delivering value for a customer. And when they're being micromanaged, that causes a whole lot of thrashing and canceled work, that 20% of work being canceled that you saw.

00:14:29

That's exactly what's happening: work is canceled, additional work is put in, and everyone is basically struggling to keep up with the work while new work keeps coming in. Really, the cause of this is that planning, that demand, is not connected to the actual flow, to the actual capacity of the value stream, nor is an understanding of where the bottlenecks are part of the planning. That's really the challenge: you keep cramming more into the pipe without improving, in this case, the width of the pipe, and you get these predictable and problematic results. When you're ignoring capacity, you just keep increasing work in progress. This is one of the biggest findings we've seen: overloaded value streams are just endemic across enterprise organizations now. Good OKRs are actually an opportunity to improve that, while bad

00:15:12

OKRs make it worse. But OKRs are a great opportunity to improve it. One of the main things they do, of course, is cascade business outcomes to the value stream, so that everyone can actually understand: what outcome are we trying to drive toward? Are we trying to capture more customers? Are we trying to keep our customers happier? Are we trying to grow our customer base, or make a business partner happy, or make an API more stable because so much of our business builds on it? To do that, we need to measure the flow of value. We need to understand what bottlenecks are happening and how we can actually improve the situation to free up capacity in our work, because fundamentally, most organizations are constrained by the capacity of their value streams.

00:15:50

And then, of course, good OKRs prioritize learning and improvement, so we're always making sure that once we've found one bottleneck, we're learning where the next bottleneck is and then improving there. So I'll give you some very quick examples of how meaningful this approach can be. This is a financial services organization, quite a sizable bank, and in their case, as part of what they strove for in their key results, they wanted to improve time to market; they wanted to reduce flow time. They were able to make that an objective and measure it through how many days flow time for features improved by, and they saw a reduction from 55 days to 38 across the organization. That's a very solid result, achieved of course by removing some bottlenecks, by implementing some DevOps automation and some security scanning automation, and by fixing some upstream bottlenecks that they had.

00:16:40

And this flow key result that was achieved translated to $800 million of revenue pulled forward, because of all the capacity they now had to deliver on those plans for digital experiences, the plans that so many of their business cases and their digital transformation were based on. Another one is reducing the cost of quality. In this case, of course, many of us are working with very large legacy systems, and some of those have frequent outages and incidents. Here, and this is nothing new to say, it's a back-to-the-future kind of scenario that we see all the time, actually investing in testing automation, test harnesses, and platform stability reduced defect resolution time by 70%, which made life much easier for those teams.

00:17:28

And that unlocked $52 million of revenue growth through a much healthier customer base. And then there's the story that we so often see: a large security organization that was able to drive a 40% acceleration of feature delivery by reducing tech debt. This is another really important scenario, where they had this measurement of investment in technical debt, the outcome of that being additional feature delivery, and then the outcome of that additional feature delivery capacity, from effectively the same size of organization, being $140 million of revenue pulled ahead. So again, you get a sense of how we can use these objectives and key results, with flow measured as the key results, as leading indicators of the business outcomes we're all after for the organization. I think the key thing now is to understand the different types of metrics and the different things we can measure as key results to get to the kinds of results we just saw.

00:18:27

First of all, the business metrics. They tend to already be very well understood in every organization, and this is really for the business-facing parts of your portfolio. One of the key things around this shift from project to product, of course, is to provide those business metrics for the platform products, for the shared services, and for the delivery of the CI/CD pipeline itself. Each of those needs some kind of outcome metric: how many developers are on the new pipeline? How many of the business applications are using the new APIs or the new cloud environment that we created? But those do tend to be well understood today. Now, the key thing with the flow metrics is that they provide a leading indicator of how we're doing on those.

00:19:07

So, for example, if we're able to deliver more features, with less toil and burnout, more quickly, we should be driving that customer outcome, we should be driving that business metric. It's really that interplay: the flow metrics are a leading indicator, and the business metrics tend to be a lagging indicator, because driving additional revenue or driving costs down tends to happen over cycles of many months, whereas the flow metrics you can bring into your monthly or quarterly review cycle and understand whether things are moving faster or not. And then the other key thing is the team metrics: do not throw away the team metrics. It's been fascinating to see organizations decide that, well, deploys per day is no longer a representative metric for us, we're now at the point where we can deploy multiple times a day.

00:19:48

But you still need to measure how many times you deploy per day; you very much need that DORA metric. Chances are, though, that you also need bigger-picture metrics to understand whether your bottleneck is upstream or downstream of deployments. So again, these metrics are all meant to work together, and the whole goal of the flow metrics is to be end to end and to be cross-team and cross-value-stream, rather than focused on improving one part of the value stream. Now, of course, when you find that bottleneck, it's all about those particular metrics. If that bottleneck is in the safety and reliability of your deployment pipeline, that's what you're focusing on, and those are the metrics you improve as part of your quarterly OKRs for that part of your portfolio.

00:20:25

Another key thing, of course, that I did want to relate is roadmaps and release plans. In the same way that we should never throw out our team metrics, these remain important: roadmaps, release plans, and OKRs need to work together. Roadmaps and release plans basically define what gets delivered in which order. I briefly flashed that visualization of the Suez Canal, in terms of us needing to think about improving flow and what happens when we have bottlenecks. We can think of the roadmap as the order of the container ships that now need to go through that canal, what needs to be delivered to customers in which order to minimize delays and maximize outcomes. So those are the roadmaps, but roadmaps alone are not enough.

00:21:07

They need to be aligned to our OKRs, but in the end we also need value stream OKRs around accelerating flow: how quickly we can deliver on those roadmaps, and finding those bottlenecks, like the Ever Given stuck in the Suez Canal. So really, think of value stream OKRs for every value stream, and it is important to think about them for each one independently: how do we widen that canal to get more ships through? How do we make it easier to get ships through? How do we make it safer to get ships through that canal? So again, in addition to looking at the roadmap, which defines what will get done, and whose delivery will of course define the customer and business outcomes we get, we're always asking: how do we make it easier to deliver?

00:21:44

How do we make it better to deliver? How do we get to the five ideals from The Unicorn Project in our daily work? Again, just becoming measured around that is where OKRs are a great tool. Then the other key thing is the organizational OKRs. This is really where large-scale transformations, digital transformations, can use OKRs to help track the improvement work, because one of the big challenges we have, of course, is that a lot of large transformations have taken a completely waterfall approach to their agile transformation, ironically, or to their DevOps transformation, or to the digital transformation as a whole. In the end, we want to be able to measure that transformation. In terms of the visual I have here, this is about how we build new canals through these transformations.

00:22:24

How do we build that new canal, that new cloud platform that we move a lot of our business applications and customer-facing applications onto, and then how do we track whether we're getting the right results from it: whether that canal is working, whether we dug it too shallow, whether there's some other unforeseen circumstance that means it's not as productive as we thought it would be? Understanding those organizational KRs, and I'll show you an example of all of this in just a moment, is key to measuring in an iterative way and applying flow, feedback, and continual learning, the principles of DevOps, to the transformation itself. So here's an example from an insurance company, and one of this large enterprise's OKRs was to become the most innovative insurer in their industry.

00:23:06

They had a relatively small market share; there were larger insurers and insurtech companies, and they wanted to grow their market share from 6% to 30%, a really substantial part of the market, over the course of their planning window, which in this case was a year. So it was a really significant stretch. Then they created this beautiful and aspirational KR to reduce the time it took to provision a policy, basically cutting it in half, from 43 to 20 days. This is all about the customer: how can we get the customer their home insurance, their car insurance, whichever part of the business it was, in half the time? Having that cascade down to everyone, to every value stream and every team supporting this, is extremely valuable, because it makes you rethink the technology architectures that you're using, the sign-on, the authentication, the sign-up approach that you're using, in case that's the bottleneck in the customer's journey.
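As a sketch of what that cascade can look like written down, here is the organizational objective from this example expressed as plain data. The field names are hypothetical rather than any tool's schema, and the value-stream figures in the second block are purely illustrative.

```python
# Organizational level: the objective and its business key results (lagging indicators).
organizational_okr = {
    "objective": "Become the most innovative insurer in our industry",
    "key_results": [
        {"metric": "market share (%)",                "baseline": 6,  "target": 30},
        {"metric": "policy provisioning time (days)", "baseline": 43, "target": 20},
    ],
}

# Value stream level: each value stream sets its own flow key results (leading indicators)
# aligned to the organizational objective. Numbers here are purely illustrative.
value_stream_okr = {
    "value_stream": "policy provisioning",
    "aligned_to": organizational_okr["objective"],
    "objective": "Cut the customer's wait for a new policy in half",
    "key_results": [
        {"metric": "feature flow time (days)", "baseline": 20, "target": 10},
        {"metric": "flow efficiency (%)",      "baseline": 25, "target": 50},
    ],
}
```

The important design choice is that the value stream owns its flow key results; the organization only cascades the outcome it needs, not the activities.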

00:23:58

And then finally, the last key result they tracked was to improve flow time by 20%, and this is actually how they captured the need to drive improvement within the value stream itself. It's that last one that's so powerful, because again, it's not just prioritizing work, it's actually helping us improve daily work. So here's how some of the value streams picked this up; I'll give you two examples. The first is the mobile application value stream. What they said was: for us to become the most innovative, customers have to love our mobile experience; our star ratings are not great right now; we need to improve the mobile experience. They translated that into increasing the net promoter score for the mobile applications from 31 to 60. That's a pretty high net promoter score, and they really wanted to make sure they could get there. But to do that, of course, they realized they needed to deliver more features,

00:24:44

the features that delighted customers and made it easier for them to authenticate and to buy insurance products. They needed to get their feature flow time down from over 20 days to under 10 days, so basically cut it in half. Now, 20 days is not bad; this organization is doing quite well, and of course we see flow times measured in months quite frequently. But still, they realized it wasn't quite enough to drive the sort of results they wanted in the timeframe they wanted. Now, from the flow efficiency improvements at the organization level, they already had multiple experiments going to understand where the bottlenecks were, and from those experiments and the way they were measuring them, they realized it was actually the verification of new features by their business partners that was the bottleneck.

00:25:26

The bottleneck was not the delivery teams, the development teams themselves. So they set an aspirational goal, and this all happened within one quarter: we're now going to target zero days of wait states on business verification. What happened as a result is that flow time went from nearly 30 days to well under 10 days on average for the new features they were delivering, and the net promoter score started climbing. But the net promoter score was a lagging indicator of this, because the net promoter score improved as a result of delivering those great features. Now, of course, there's a chance they could have delivered those great features and the net promoter score still didn't improve, and that would indicate another problem: a disconnect between the investment in flow and building those great features,

00:26:10

and what's actually driving customer outcomes, for example if customers had already left and gone to a competitor. In this case, they were able to see that very direct correlation between improving flow and driving those business outcomes. This is exactly the kind of feedback loop we want to put in place, to make sure that the activities we're doing and the way we're improving that work are actually driving those customer and business outcomes, and then feeding back into the planning cycles. As you can imagine, this was a really major success for the organization and taught them about the importance of faster feedback loops. Here's another really quick example, this time from a platform team, so not a customer-facing team, in the same organization. This organization realized that they had done such a fast lift-and-shift move of some of their core platforms and systems to the cloud that they were making very inefficient use of storage services in the cloud, because they were not using the newer managed storage services.

00:27:06

They'd moved a whole bunch of their databases over effectively as-is, just to be hosted in the cloud, and so the per-application hosting costs were prohibitively high, when part of the ROI of cloud was meant to be that we actually improve our cost profile, not make it dramatically worse. Now, what the teams already knew at that point was that this very fast move to the cloud, which was too much of a lift-and-shift, had brought on a whole bunch of technical debt, because they were not leveraging the new storage services that were actually available from their hosting provider. So they basically spent a release cycle where their flow distribution was dedicated to tech debt reduction, and they tracked both that flow metric and, on the left here, their application hosting costs as a KR, a key result.

00:27:53

And they saw that as they took down that tech debt and moved to the new storage services, the hosting bill was dramatically reduced, and they were in a much better position for scaling and bringing more of the business onto this new cloud platform. So again, whether we're focused on the top line and customer results, or in this case just on the cost structure, the cost structure that suffers when we're not keeping up with technical debt and not leveraging new technology platforms as we had intended, this again gives you that feedback loop connecting investment in flow, in tech debt reduction, to actual business outcomes. So in summary, I think the key point here is to make sure that if you're leveraging OKRs or similar planning systems, because of course a lot of us have similar planning systems that aren't quite the same,

00:28:39

you can use those to catalyze a shift from project to product. And really, the main part of that is to make sure that, as you focus on daily work, as you focus on improving your roadmaps for every product value stream, as you give your internal products their own first-class roadmaps and your platforms their own roadmaps, you prioritize and measure both daily work and the improvement of daily work, so that every value stream always has a flow metric as a key result. Those flow metrics can really help you prioritize and track that improvement, and as you saw in some of the examples here, I've actually seen a very fast payback loop on that improvement. So empower the value streams to set their own flow key results, and then of course connect those to the business outcomes.

00:29:24

And the key thing I've also learned is to celebrate those successes: when you've established that feedback loop and you're driving those outcomes, let the rest of the organization know. In terms of help I'm looking for, if you're seeing this, I'd love for you to share it out as well, on the DevOps Enterprise channels or on flowframework.org. To learn more, check out projecttoproduct.org, or check out the Project to Product book, which speaks all about this, and remember that all of the proceeds go to supporting charitable programs for women and minorities in technology. So with that, thank you.