DevOps And Modernization 2.0

Don't be legacy. Be heritage. Modernize. In our past talks we covered the radical people and process changes at CSG. This started with our Lean/Agile Transformation in 2012, followed by our DevOps Transformation in 2016, and the integration of Product Management into DevOps thinking in 2018. This year we will cover the great modernization work we have done at CSG.


As a company with a strong culture of engineering excellence, we saw it as vital not to act like a legacy company. As part of this, we knew we had to invest in modernization and remove technical debt. In 2010 we set out to completely modernize and transform our technology and application stack. This included Foundational Modernization such as: E2E Version Control, Automated Testing, Telemetry and Infrastructure Modernization.


We then focused on a multiyear effort to modernize our application stack by moving to commodity and OSS. We have also made great strides in modernizing our mainframe technology.

SP

Scott Prugh

Chief Architect & SVP Software Engineering, CSG

Transcript

00:00:09

Yesterday, we heard Erica Morrison's amazing story about how they dealt with a major outage. Up next is someone else from CSG, Scott Prugh, who is their Chief Architect and SVP of Software Engineering. I so much admire the work that Scott and team have done, which is why Scott Prugh is the only person who has spoken at all seven years of the DevOps Enterprise Summit in the US. In previous years, he has spoken about their journey transforming software delivery and operations at CSG. This year, I asked him to talk about their amazing engineering journey and architectural transformation, which is one of the most breathtaking I've ever seen, involving mainframe code over 40 years old and migrating off of hostile vendors whose business models might be in conflict with those of their customers. You're going to love this presentation. Please welcome Scott Prugh.

00:00:59

Yeah, thanks for that great introduction. I'm glad to be here. Very quickly, just a little bit about CSG. CSG is the largest provider of customer care, billing, order management and digital monetization software in the world. In the US we have some 65 million subscribers on our platform and we support about 70% of the US cable market. We have customers like Comcast, Charter, and DISH Network on this platform, and we support getting bills out to many of the subscribers in the United States. The story today is really about the transformation of some hundred-plus global DevOps teams that support our applications across 50 apps and 20-plus tech stacks.

00:01:49

So, I want to say I'm really glad to be here, and I want to thank Gene for this opportunity. I just want to be clear that I get to be the storyteller here, and that really the thanks goes to all the incredible engineering teams, platform teams, and product management teams at CSG who have given us the opportunity to continue to improve and evolve our platform. I also want to say that it's been a very difficult time, and I want to recognize all the people at CSG, people in the IT industry, and people in other industries who have shown incredible resilience during this time. And finally, I want to say that at CSG, two of our core values are being authentic and being a good person.

00:02:41

This means that we treat all people fairly and equally, regardless of race, sex, color, or origin. So be a good person, shape a better world, use your platform for good. The final two things: we have a lot of people here virtually from CSG. Reach out to them on Slack, congratulate them for all the great work they've done, and also engage them — ask them what it's like to work in a transformative DevOps organization that continues to improve the way it works. I'll give a little history of where we were in DOES14 through DOES18. We talked a lot about the people, process, and cultural transformations. As mentioned, today we're going to talk about our technology transformation. That transformation was really always underneath this, but we just never went into the details before of what we did and exactly how we did it.

00:03:37

Also note that this is the second time I've given this presentation, and I've updated some of the slides with a "version 2.0" tag for new discoveries and new metrics and statistics that we found along our journey. So back to DOES14: I presented about our agile transformation in 2012, where we basically removed silos and collapsed several development silos into cross-functional teams that design, build and test their software. This resulted in an 83% improvement to our releases. In 2016, we went through a DevOps transformation where we brought development and operations together onto cross-functional teams that both build and run. This resulted in a 74% reduction in incidents per month. During that time we were also able to grow subscribers from 48 million to 62 million, and TPS on the platform grew from 750 to 4,000, or 433%.

00:04:43

We then spent the next few years spreading DevOps across the organization. You'll note that in 2018, I presented with Brian Clark, our head of product management, about lean portfolio leadership and product management with DevOps. With that transformation, we were able to improve impact minutes some 58%, increase something called release on demand (which we'll see more about later) by 460%, and improve our employee NPS over 400%. Looking back at 2010, we couldn't tell the future, but there were two things we knew we needed to do: we needed to grow and lower costs, and we needed to go faster and be stable. These were things that our executives, our leaders, and our customers were all asking us to do. So if we fast forward, you'll see that that is actually what happened.

00:05:43

We've been able to grow our subscriber base to 65 million today; that's a 33% increase, and TPS on the platform grew some 800% over that time period. That's a result of the cable industry going digital and self-service, and mobile really driving API calls across our platform. But with that growth, we could not increase our costs by many hundreds of percent. So this story really takes you through the things we did to maintain, and actually reduce, costs over that time. The problem with legacy systems is: how do you maintain, patch, and secure them? How do you combat external threats and market forces — forces like regulations such as GDPR, or market events like COVID? How do you increase stability and safety?

00:06:35

How do you go faster and deliver features, and how do you support growth without a massive increase in costs? And finally, how do you minimize exposure from hostile vendors? My definition of hostile vendors is vendors where you spend more time in audit, compliance, and legal than you do in engineering; vendors that surprise you on contract renewals with 500% increases; and vendors who create Byzantine compliance and audit scenarios that make it impossible to leverage basic core infrastructure and run them effectively. One of the sayings we have at CSG is that we honor the past, but we also inspire the future. We decided a while ago that we weren't going to act like a legacy company. We were going to be heritage, and we were going to modernize. Great engineering companies allocate time for modernization.

00:07:26

It's something that they do. Modernization fuels DevOps. It creates safety in environments, reduces technical debt, and improves things like productivity, quality, lead time, and recovery. It also reduces risk: the risk of legacy technology, of hostile vendors, and also workforce risk — whether engineers can continue to maintain the applications. Modernization is never done. Once you finish one modernization, you'll find you need to do another. So it's something you have to constantly integrate into your practices, and you have to keep driving that modernization path. So here are the realities of modernization, and I tell this story through four sub-stories. The first story is about our API platform, the second and third are about our mainframe, and the fourth is about our composition platform. These applications are foundational; they support key pieces of CSG's business. Now, if you were going to do this to your house, you'd move out.

00:08:25

We don't have that luxury. We have some 65 million subscribers and a billion transactions a day. We process $87 billion a year in customer revenue, and we produce over 75 million bills per month. So we basically had to go through this transformation, supporting all of this, without impact on the platform. So how do you approach a massive application modernization? The answer is very carefully, with great perseverance and engineering excellence. But there are key capabilities that you can master to actually get you there. The first set of four key capabilities will be very familiar to folks who know DevOps; they are things like automated testing, CI, telemetry, and infrastructure. But where I'll spend most of my time today is on the capabilities above in gray. These capabilities — feature switches, code porting, incremental rollout, and strangulation — are key to modernizing applications. I also want to go into a little bit of what I call the foundational pitfalls.

00:09:22

There are really three. The first is the bi-modal trap. The bi-modal trap is thinking in terms of old applications and new applications, and then applying these foundational key capabilities and modernization only to the new platforms. That gets you into a trap where your old platforms can no longer be evolved, and you have emerging threats — regulations, security, or even hostile vendors — that will catch up with you in that scenario. The next is the rewrite trap. It's the trap that says: you know what, that old application is on crufty technology, I'll just rewrite it on this really cool new technology. That road is really long, and you will also have an extremely hard time if you don't have testing that exposes the behavior you're actually trying to get to.

00:10:14

It doesn't mean you shouldn't at some point rewrite it, but you really need these foundational things underneath to mitigate the risk. The final pitfall is tech debt and giving away your pivot. Tech debt slows you down, and the road is going to be very long on these modernizations. By having these key capabilities in place, you can pivot: you can pivot away from vendors and pivot away from infrastructure. You may change your approach, but if you can rebuild, test, continually integrate, and introspect the system, you can avoid this pitfall of giving away your ability to pivot quickly. All right. So I'm going to tell the first story, which is about our SOA bus API, a Unix-heavy enterprise service bus. And I really start with what I call golf course software.

00:11:01

My definition of golf course software is heavyweight software that gets selected higher up in the organization and then forced onto the development and operations teams to implement. So look at this sales executive and CIO: they're really happy to be doing this deal, and they're saying things like "low code, I don't need developers, just map your data," "it's really easy to operate," and "it's integrated with everything in your enterprise." It's really great. Please don't do this to your people; let them pick their own tools. A little bit about the problem we had with this: this heavy platform had poor developer aesthetics — high build and test times. It had poor operational aesthetics, low TPS density, and unsustainable costs to support the business growth. So our approach was: move to commodity, port some 300 transactions leveraged by 1,200 integrations, strangle the old platform off by using feature flags and canary releases, and apply the foundational modernization capabilities — things like testing, CI, telemetry, and infrastructure.

00:12:05

So a little bit more about how we did this. We had a software load balancer from previous implementations, with a route table that we were able to leverage. In that route table we added a flag, called the route flag, where we could direct traffic to the old or the new application. As we went through the port, we would take a smaller client and a lower-risk transaction, flip that route flag, and send it to the new services written in the new technology. We were then able to go through it a client and a transaction at a time, and effectively test and make sure that we could actually roll it out. Contrast this with large-batch, go-dark modernizations.
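To make the route-flag mechanics concrete, here is a minimal sketch in Java of what a per-client, per-transaction route flag could look like. The class and method names (RouteTable, Router, callLegacyEsb, callPortedService) are illustrative assumptions for this sketch, not CSG's actual implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch: route a given client + transaction either to the
 *  legacy ESB or to the new, ported service, based on a flag in a route table. */
enum RouteFlag { OLD, NEW }

class RouteTable {
    // keyed by "clientId:transactionType", defaulting to the legacy path
    private final Map<String, RouteFlag> routes = new ConcurrentHashMap<>();

    void setRoute(String clientId, String txnType, RouteFlag flag) {
        routes.put(clientId + ":" + txnType, flag);
    }

    RouteFlag lookup(String clientId, String txnType) {
        return routes.getOrDefault(clientId + ":" + txnType, RouteFlag.OLD);
    }
}

class Router {
    private final RouteTable table = new RouteTable();

    /** Flip one low-risk client/transaction at a time to the ported service;
     *  flipping the flag back is the rollback. */
    String dispatch(String clientId, String txnType, String payload) {
        return table.lookup(clientId, txnType) == RouteFlag.NEW
                ? callPortedService(payload)   // new commodity/OSS implementation
                : callLegacyEsb(payload);      // existing heavyweight ESB
    }

    private String callPortedService(String payload) { return "new:" + payload; } // stubbed
    private String callLegacyEsb(String payload)     { return "old:" + payload; } // stubbed
}
```

Because the flag is scoped to a client and a transaction, a rollout can start with one small, low-risk client and widen gradually rather than cutting everything over at once.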

00:12:51

This is a much safer way to go about it, because you will find unknowns along the way — different usage patterns that you need to be able to adjust for. And then finally you get through it, you roll through all the route flags, and everything is pointed at the new technology. Then you strangle the old technology off and take it out. Another part of the foundational modernization that was really important here is automated testing. Before, we had heavy proprietary test tools that were only used by testers, which created silos, and the cost was high and increasing — it turns out this was a hostile vendor who kept increasing our costs. There was still a high manual test effort, because these were really test-case-driven tools rather than automation tools. So we moved all of this to Gherkin, we put the tests in version control as code, and then the testers and developers could collaborate on those tests. With this, we were able to build up a full functional test suite for the application. As part of this, we grew these automated tests from almost zero in 2012 to close to 14,000 tests a day, and we documented that journey in a previous DevOps Enterprise Forum paper that you can leverage.
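As a rough illustration of "tests as code that testers and developers share," here is a hedged sketch of a Cucumber-JVM step definition bound to a Gherkin scenario. The scenario wording and the BillingClient helper are hypothetical, not CSG's actual suite.

```java
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertEquals;

/*
 * Hypothetical Gherkin scenario kept in version control alongside the code:
 *
 *   Scenario: Query account balance
 *     Given subscriber "12345" has an open account
 *     When the balance-inquiry transaction is submitted
 *     Then the response code is "00"
 */
public class BalanceInquirySteps {

    private final BillingClient client = new BillingClient(); // hypothetical test helper
    private BillingClient.Response response;

    @Given("subscriber {string} has an open account")
    public void subscriberHasOpenAccount(String subscriberId) {
        client.ensureTestAccount(subscriberId);
    }

    @When("the balance-inquiry transaction is submitted")
    public void submitBalanceInquiry() {
        response = client.submit("BALANCE_INQUIRY");
    }

    @Then("the response code is {string}")
    public void verifyResponse(String expectedCode) {
        assertEquals(expectedCode, response.code());
    }
}

class BillingClient {
    record Response(String code, String balance) {}
    void ensureTestAccount(String id) { /* seed test data in a lower environment */ }
    Response submit(String txnType) { return new Response("00", "42.50"); } // stubbed for the sketch
}
```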

00:14:05

A little bit more on how we did the testing here: we also ran the tests to confirm that the old platform and the new platform worked the same. We did that by injecting a route flag into the transactions, running the tests through the application, and recording the old results with the flag pointed at the old application. Then we flipped the route flag to new and recorded the new results, and then we could compare the two. We would run those every night through a series of tests, so we knew that the new infrastructure produced the same results as the old infrastructure. Applying automated testing to this modernization in that way mitigated a ton of risk for us.
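Here is a minimal sketch of what such a nightly old-versus-new compare could look like, assuming the route flag can be injected per call. All names are illustrative, and a real harness would diff full response payloads rather than simple strings.

```java
import java.util.ArrayList;
import java.util.List;

/** Illustrative nightly compare: send every test transaction down the old path
 *  and the new path (by injecting the route flag), then diff the results. */
class ParallelCompareRun {

    record TestTxn(String name, String payload) {}
    record Mismatch(String test, String oldResult, String newResult) {}

    List<Mismatch> run(List<TestTxn> suite) {
        List<Mismatch> mismatches = new ArrayList<>();
        for (TestTxn txn : suite) {
            String oldResult = send(txn.payload(), "route=old"); // legacy ESB path
            String newResult = send(txn.payload(), "route=new"); // ported service path
            if (!oldResult.equals(newResult)) {
                mismatches.add(new Mismatch(txn.name(), oldResult, newResult));
            }
        }
        return mismatches; // anything here gets investigated before widening the rollout
    }

    // Stand-in for the real call that carries the injected route flag.
    private String send(String payload, String routeFlag) {
        return payload; // stubbed; a real harness would invoke the platform with the flag
    }
}
```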

00:14:52

So, a little bit of statistics. One of the things we really struggled with on the old platform was that TPS density was really low, so we increased that from around 200 to over 1,300. The change in dollars per TPS was just dramatic — basically a two-orders-of-magnitude cost reduction per TPS — which let us redirect a lot more of the team's effort to feature development. Build and deploy dropped from hours to minutes, server recycles from tens of minutes to a few minutes, and really driving the automated testing was another key result of this. So I have to recognize and thank a bunch of people. The picture on the left is the SOA bus team, who took us on this five-plus-year journey; I have to thank and recognize them. Because of the modernization that was done here, we were able to extend and grow that team to our Bangalore office — now, with commodity code, it's a lot easier to train and upskill people to support this.

00:15:48

There are a couple of people I want to thank individually: Mike Battle Lugo, the director who manages that team, who showed incredible resilience in continuing to help manage this effort; and my colleague Jeremy van Heron, a great visionary who really helped lead the teams and get us through this. Finally, I want to zoom in on a plaque that you'll see everyone holding there, and that I've also got at my desk. It says: the best time to plant a tree was 20 years ago; the second best time is today. Modernization is something you really want to start — even if you haven't yet, it's not too late. The second thing here is one of my favorite quotes from one of my favorite movies, from Edna Mode: luck favors the prepared. These are the types of things we've been able to do here by being prepared: by doing this migration, we can spin up new environments.

00:16:38

We can segment customers off onto their own instances. These are the types of things we could never do with the old infrastructure, because of cost and the inability to manage it effectively. The second piece of this picture is the celebration. I'll tell you that Wes did say that in The Unicorn Project, but I did not say that actual quote — what I actually said could not be repeated or printed. Additionally, that is not an 8U server; it's a 2U server. There were 8U servers in the environment, but they were actually too heavy for us to get into the parking lot, so we just used the 2U servers. So the next story is about our mainframe and DB2. The problem we had there was a lack of commodity data access.

00:17:25

We could only get to the data through CICS. Maintainability was at risk, and unsustainable cost increases kept jeopardizing us. So we approached it by converting VSAM to DB2, creating an incremental rollout with a feature-switching approach, strangling off the VSAM data store, and offloading read transactions from CICS to direct DB2 reads. I've updated this slide because I did miss one thing, but what we had here is a feature-switching mechanism for the data store transition. Porting a data store is harder than porting code, and the pattern goes like this. First, the old data store is the primary — that's where you start. Then, in the second step, the old data store is still the primary, but the new data store becomes a replica, and you need to backfill that new data store.

00:18:12

You can do the backfill in batch or online — there are several different techniques for that — and then you compare the two stores to catch any issues. The third step is making the new data store the primary and the old data store the replica; you keep comparing, and you do it that way so that you can roll back. Then, finally, you make the new data store the only one. You can use this pattern with any type of data store — document databases, relational databases; the industry has seen this pattern in many places — and we used it for our mainframe data migration.
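Here is a hedged sketch of that general progression, assuming a simple repository facade whose phase is flipped by a feature switch. The phase names and types are illustrative; the concrete VSAM/DB2 switch sequence CSG actually used follows below.

```java
/** Illustrative cut-over phases: old primary -> dual-write -> new primary -> new only. */
enum MigrationPhase { OLD_ONLY, OLD_PRIMARY_NEW_REPLICA, NEW_PRIMARY_OLD_REPLICA, NEW_ONLY }

interface AccountStore {
    String read(String key);
    void write(String key, String value);
}

class MigratingAccountStore implements AccountStore {
    private final AccountStore oldStore;   // legacy store
    private final AccountStore newStore;   // target store
    private volatile MigrationPhase phase; // flipped by a feature switch, no redeploy

    MigratingAccountStore(AccountStore oldStore, AccountStore newStore, MigrationPhase phase) {
        this.oldStore = oldStore;
        this.newStore = newStore;
        this.phase = phase;
    }

    void setPhase(MigrationPhase phase) { this.phase = phase; } // rollback = step back a phase

    @Override
    public String read(String key) {
        return switch (phase) {
            case OLD_ONLY, OLD_PRIMARY_NEW_REPLICA -> oldStore.read(key);
            case NEW_PRIMARY_OLD_REPLICA, NEW_ONLY -> newStore.read(key);
        };
    }

    @Override
    public void write(String key, String value) {
        switch (phase) {
            case OLD_ONLY -> oldStore.write(key, value);
            case OLD_PRIMARY_NEW_REPLICA -> { oldStore.write(key, value); newStore.write(key, value); }
            case NEW_PRIMARY_OLD_REPLICA -> { newStore.write(key, value); oldStore.write(key, value); }
            case NEW_ONLY -> newStore.write(key, value);
        }
        // a separate backfill job plus nightly compares reconcile the two stores
    }
}
```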

00:18:57

Here's a little more on how that worked. At the mainframe level we had a set of switches, and we carried them through a sequence to switch a region over. The first switch, switch one, is VSAM-only: we just read and write to VSAM, and DB2 is not active. The second switch, switch two, is where we read and write from VSAM and write through to DB2. We also backfill DB2, so the data is in there for these transactions. The next switch, switch three, is where we read and write from DB2 and also write to VSAM. At this point we can roll back to either switch one or switch two, and this is key, because when you find issues — and you will find issues, because there's going to be data that's either different or corrupt that you need to clean up — you're able to roll back very quickly, and then use those nightly batch compares, which is how you actually find a lot of those issues.

00:19:45

The final switch, switch four, is DB2-only: we read and write to DB2, and VSAM becomes inert. Some other things we did here to facilitate this through the API platform build on top of the route flag you saw previously. We added a flag called the read flag, and with it we can direct reads either to CICS, which is the old path, or directly to DB2. So we went through that same set of steps to phase through the VSAM-to-DB2 conversion via the API platform. So, some metrics here: data access was VSAM via CICS before, and now we can go direct to DB2. Read transactions used to be 100% CICS.

00:20:33

We now send 62% of them direct to DB2, so we have much higher data accessibility, and surprisingly our average response time dropped. And we did all this with near-zero customer impact, by being able to flip through these flags and roll during the day. So the next story is about our mainframe Java evolution. The problem here was that we had 3.7 million lines of high-level assembler. Maintainability was at risk — really the productivity, the agility, the workforce sustainability — and we kept getting this unsustainable set of cost increases. The approach here was to build custom cross-compiler tooling with specialty companies that actually do that, and to target the more complex update logic. We wanted to take all this converted code and get it into the foundational capabilities — things like CI and test coverage. And then we wanted the code to run off-board, and we needed to build an incremental rollout strategy, again with feature switches, and do deploys during the day.

00:21:34

A bit more detail on the steps here. It really starts with that foundational test coverage. We were able to leverage a lot of the automated tests we had previously built to cover the entire legacy code base, so we get the breadth and the depth of that testing first, and that really helps us understand how it works today. Then, code analysis. We used code analysis to understand the dependencies — these are actual pictures of dependencies in the system — and this allowed us to choose key modules to pull out, because the dependencies are sometimes very deep, so you can start with modules that are easier, on the edges. Then, cross-compilation. We leveraged cross-compilation to do mass migration with codified patterns, and then additionally refactored in our learnings.

00:22:19

Remember, we had 3.7 million lines of code. One path is you just start typing. The problem with that is you'll really never finish, and your cycle time to make adjustments is really, really long. The other is to build a cross compiler, and that's actually what we did. We built a cross compiler that takes the high-level assembly, runs it through this compiler technology, and produces Java on the other side. Just a quick look at what this looks like: on the left we've got high-level assembly, and on the right we have the Java that's output. For folks who read assembly and Java, you'll note that they are more or less isomorphic. But this is what we're able to do, and now we've got this new Java code that we can actually test on the other side.

00:23:03

With that cross-compiling, we basically take that code and we get foundational version control, CI, and unit tests across it. Then we refactor it. With that, we can do things that would be very hard to do by hand: we can feed patterns into the compiler. This is a very simple one where we have logical names that we want to assign to the symbolic names; we can recognize that domain-specific pattern and insert it into the compiler, and now we can increase the maintainability of the target code base on the other side. And the final thing is running that functional test coverage again on the target code, so that we verify we are getting congruent behavior across both the original code and the ported code.
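To illustrate the "codified pattern" idea, here is a purely hypothetical before/after (not CSG's compiler output): raw cross-compiled code tends to keep opaque, register-style names and bare literals, and a rename/constant pattern fed into the compiler can restore domain terms in the generated Java.

```java
/** Hypothetical shape of raw cross-compiled output: correct, but opaque. */
class AccountStatusRaw {
    static boolean check(int r3) {
        // literal carried straight over from the assembler source
        return r3 == 4;
    }
}

/** Same logic after a codified rename/constant pattern is fed to the compiler. */
class AccountStatus {
    static final int STATUS_SUSPENDED = 4; // domain name supplied as a compiler pattern

    static boolean isSuspended(int statusCode) {
        return statusCode == STATUS_SUSPENDED;
    }
}
```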

00:23:48

The final piece is the feature switching, the telemetry, and auto-rollback. We leveraged feature switching to roll out and roll back, and we integrate that heavily with production telemetry, so when we detect issues we can trigger things like automatic rollback on failure. Here's a bit of what that looks like: we have more flags that we've added to these route tables, and with those flags we can direct the traffic to CICS, or we can direct it to a Linux Java cluster where the ported application code is running. One of the things we did here is write functionality to auto-detect failures, and when we detect those failures we can fall back immediately to CICS. That gets our recovery time close to zero, so that if there is something wrong with the code on the new cluster, we still have the ability to fall back.
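Here is a minimal sketch of that auto-fallback idea, assuming a simple error-count threshold wired to the feature switch. The threshold, reset behavior, and names are assumptions for illustration rather than CSG's implementation.

```java
import java.util.concurrent.atomic.AtomicInteger;

/** Illustrative circuit-breaker-style fallback: route to the ported Java cluster,
 *  but trip back to the legacy CICS path if errors exceed a threshold. */
class AutoFallbackRouter {
    private static final int ERROR_THRESHOLD = 5;   // assumed; tuned from telemetry in practice

    private final AtomicInteger recentErrors = new AtomicInteger();
    private volatile boolean useNewPath = true;      // the feature switch

    String handle(String txn) {
        if (!useNewPath) {
            return callCics(txn);                    // legacy path, known good
        }
        try {
            String result = callJavaCluster(txn);    // ported code on the Linux/Java cluster
            recentErrors.set(0);                     // healthy call resets the counter
            return result;
        } catch (RuntimeException e) {
            if (recentErrors.incrementAndGet() >= ERROR_THRESHOLD) {
                useNewPath = false;                  // auto rollback: recovery time near zero
                // telemetry/alerting hook would fire here
            }
            return callCics(txn);                    // serve this transaction on the old path
        }
    }

    private String callJavaCluster(String txn) { return "java:" + txn; } // stubbed
    private String callCics(String txn)        { return "cics:" + txn; } // stubbed
}
```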

00:24:36

We can do that very quickly, with almost zero impact, and we're able to do it during the day. So, finally, some of the results: right now it's 100% high-level assembly, and we're looking to get 85% of it to Java. This project is still in process; it's not complete. We want to get a lot of the code going direct to DB2. The maintainability is much better — it's much more supportable, with much higher productivity, because we can use a lot of shared tools across groups. And we can use open telemetry across this, where before we had a lot of isolated telemetry that wasn't shared across teams; now we really share those practices and tools. And again, with near-zero customer impact. With this, there are just so many people to thank. I do want to thank the mainframe teams that were involved in this.

00:25:25

It's just an incredible journey, both the DB2 and the Java work; it's just amazing. I also want to call out one individual in particular, Gary Gallagher. Gary just hit 36 years at CSG, and that is pretty amazing. One of the things about Gary that I am so amazed by is his ability to keep learning, and to keep taking on the challenge of evolving this platform and modernizing it. Thank you, Gary. All right. So the fourth story is about our composition platform. Our composition platform takes output from the billing system and turns it into electronic statements, or even printed statements that go into the US mail system. On this platform we had about 4 million lines of proprietary COBOL that had been around for 25-plus years. There was no version control, no CI; it was proprietary Unix, and proprietary and hostile vendors.

00:26:15

We didn't have unit and functional tests, there wasn't great telemetry, and it didn't scale horizontally — there was just unaffordable vertical scale, where you had to get bigger and bigger iron to make it work. We approached it by adding foundational CI, COBOL unit testing and functional testing, adding feature flags to support trunk-based development and incremental rollout, converting the proprietary Unix and COBOL to Linux and GnuCOBOL, and moving off difficult and hostile vendors. So what I'm going to do is have Steve Barr tell this story for us, by going through the celebration day when they finally completed it. For folks who are familiar with transformational leadership, either from Steve Mayner's work or from Nicole Forsgren's work in Accelerate, I want you to look for a couple of things in this presentation: how Steve exhibits the behaviors of transformational leadership — things like vision, things like recognition. These are the things that inspire people, and this video is a really great example of that.

00:27:19

For the first time in 25 years, a big part of our application is under version control, and that's a big deal. There's more: we also have continuous integration. What that means is that as a developer makes a change to the software, their change is automatically compiled and automatically built as part of the build process, so we know right away whether something is working as soon as we check it in. That's great. There's more: we also have COBOL unit testing. COBOL is considered a legacy technology, but we've put some modern techniques in place so that when a developer or a tester checks in code, it's automatically tested — there are unit tests now around the COBOL. We've really built a learning culture. We learned that we have to change things, that we have to improve things; we just learned a ton through this effort, and that's what I'm probably the most proud of. There are a lot of things I'm proud of in what we've done to create what I would consider a safer environment. All right, guys, I'll kick it off. You ready? All right.

00:28:39

All right. That was great. So for folks who enjoyed that, please get on Slack, give Steve a congrats, and give the team a hip hip hooray, because it was really a great and successful project. We're 95% complete; we're really on the last leg here, and I'll give a little commentary on that. We've converted 4 million lines of proprietary COBOL to GnuCOBOL. GnuCOBOL is very interesting because it compiles to C, so you can actually integrate all kinds of other third-party C libraries. We've got really great telemetry around the system, and we're on a commodity solution: Linux and GnuCOBOL. One of the reasons this has taken longer than we had hoped is hostile vendor licensing. Some of these vendors do not allow you to dual-license across multiple platforms, even though you're staying within the same capacity.

00:29:29

So we actually have to remove those vendors and bring in either open source or friendlier vendors who will allow us the flexibility to migrate over time. A couple of things at the end here, really to give a progress update, and some things that modernization also allows you to do. Modernization improves lead time, and it improves things like deploy frequency, your change failure rate, and your mean time to recovery. We have something at CSG called release on demand, abbreviated ROD, and it means releasing value when it's ready and decoupling it from large releases. ROD ends up being a forcing function — a behavior-shaping constraint — for high-performance behaviors and foundational modernization capabilities. So if you look at our ROD progress, back in 2017 we were around 5%, and we kicked off a program to ask: how can we release features faster?

00:30:20

This was really our mechanism to do it; this was our mechanism to move to a batch size of one. Over time we really progressed, and as of our last release, 20.1, we hit 69%. What that means is that 69% of all of the features were put into production before the release day, and we're getting really close to no longer having a release day and just having a continuous flow of features through the system. You couldn't do this without practice changes and cultural changes, but you also can't do it without having great modernization in place. So, refreshing the metrics: this is what I presented last time, and I've got an update for you on what things look like today. Release impact improved — we had one of our best releases, a 97% improvement.

00:31:11

And when you think about release on demand, the releases are getting smaller, so there's less impact. Our incidents per month have really held steady at that 80% improvement. Subscribers have grown — actually, even as a result of COVID, we saw that subscribers on cable systems have gone up — so we've grown to 65 million. TPS continues to grow, to over 6,800; that's 812%. Impact minutes were at a 74% improvement last year. We just talked about release on demand, which is at a 69% improvement. We've got new eNPS scores from our employees, which have gone up again, so we're at a total 500% improvement in our employee net promoter score with all of these changes. And a new metric here: feature cycle time.

00:32:01

We measure our cycle time from when we actually start development to when we install. At the beginning of this journey it was close to 250 days; we've dropped that to 56 days. That's a 77% improvement. And finally, I want to say that if you haven't read the fantastic book Accelerate, I highly recommend it. I tied the Accelerate metrics — change failure percentage, mean time to recover, lead time, and deploy frequency — to our metrics. You'll see how change failure percentage ties to release impact, incidents per month, and impact minutes; mean time to recover ties to impact minutes; lead time and deploy frequency tie to release on demand; and lead time and deploy frequency also tie to feature cycle time. So, just fantastic correlations between what the research in Accelerate predicts and what we were able to show with this transformation along the way.

00:32:56

So what did we learn? You can modernize your legacy applications. Be heritage; don't act like a legacy company. Modernization is vital, and it requires engineering excellence. Leverage CI, automated testing, telemetry, and infrastructure. Don't fall into the pitfalls — the bi-modal trap, the rewrite trap, the pivot giveaway; really leverage those foundational capabilities. Optimize for developer and operations aesthetics. Use feature switches, code porting, incremental rollout, and strangulation on your modernization journey. And finally, fuel DevOps: create safety, reduce that technical debt, and improve things like productivity, quality, lead time, and recovery. Reduce your risk — the risk of legacy tech, the risk of hostile vendors, and your workforce risks. You can modernize and fuel DevOps with these techniques.

00:33:50

So finally, help that I'm looking for: capacity forecasting and estimation. Wishful thinking continues to be the most difficult problem in computer science, I'm convinced — trying to forecast your capacity and get good estimates. Next, portfolio-level WIP constraints: really stopping this technique of "feature Tetris," of trying to squeeze in one last feature, which just keeps overloading your teams. Then improving intake lead time with traditional IT mindsets — having to have a fully scoped estimate for every piece of work. And finally, creating capacity for backlog swarming: how do you create the capacity for that next set of lower-impact incidents, and swarm and resolve those? Thank you.