Las Vegas 2019

DevOps & Modernization - An Engineering Excellence Story

In our past talks we covered both the radical people and process changes at CSG. This started with our Lean/Agile Transformation in 2012, followed by our DevOps Transformation in 2016 and integrating Product Management into DevOps thinking in 2018.

This year we will cover the great modernization work we have done at CSG. As a company with a strong culture of engineering excellence, we saw it as vital not to act like a legacy company. As part of this, we knew we had to invest in modernization and remove technical debt. In 2010 we set out to completely modernize and transform our technology and application stack. This included Foundational Modernization such as E2E version control, automated testing, telemetry, and infrastructure modernization.


We then focused on a multiyear effort to modernize our application stack by moving to commodity and OSS. We also have made great strides to modernize our mainframe technology.


Scott Prugh, Chief Architect & VP Software Development. Scott supports the North America Development teams that deliver CSG's hosted Billing & Customer Care Platform. Scott has broad experience across development and operations functions from startups to large enterprises. Scott is a Lean enthusiast and his mission is to help others learn and improve their environment to maximize value delivery to customers. Previously, Scott was CTO of Telution and built the core runtime and billing architecture for the COMx product suite. Scott lives in Chicago with his wife and 3 kids. In his spare time, he perfects pizza, enjoys wine and code.


Scott Prugh

Chief Architect & SVP Software Engineering, CSG

Transcript

00:00:02

The next speaker is Scott Prugh, who we know very, very well; he's presented every year here at the DevOps Enterprise Summit, and his platform prints over 70 million bills monthly. Over the years, he's presented with Erica Morrison, executive director of software development operations. Last year, he presented with Brian Clark, the VP of product. But this year I asked him to present something that has amazed me from the very beginning, when I first met him: specifically, how CSG has re-engineered some of the largest portions of its technology stack, including parts of the application that were written over 40 years ago at First Data Corporation, before CSG was spun out. It is one of the most heroic engineering stories I've ever seen. So please welcome again, Scott Prugh.

00:00:54

Gene, thank you for that wonderful introduction. I'm glad to be here. This is a really great opportunity to tell a fantastic story about CSG, but I want to be clear: I'm just the storyteller. The credit really goes to the great engineering teams and the product management teams at CSG that allow us to keep improving our environment, and to the great infrastructure teams. And so I have two asks. We have a good number of people from CSG here today, more than ever, actually. So when you see a CSG-er, congratulate them on the fantastic work they've done, not just what we're going to talk about today, but over the last few years, transforming the way they work. The second ask is to engage them; ask them what it's like to work in a high-performing DevOps organization that has really transformed from legacy ways of working to modern ways of working. So I have to recognize someone: Erica Morrison. She's probably up there; I saw her sitting up there. She's been my partner in these presentations the last couple of years, and she has done just some fantastic work. Erica, I want to recognize you and everything great that you've done, not just for CSG, but in contributing to the community and how we transform. Please give Erica a round of applause.

00:02:11

So, folks who follow me on Twitter know that I practice fridge kanban. This is my fridge, and my wife loves it when I lay things out in the fridge and plan out what I'm going to do. Over the last few years, at DOES14 through DOES18, we talked about the people, the process, and the cultural components and changes we've gone through. This year, as Gene mentioned, we're going to talk about our technology journey, which really underlaid a lot of that, but we never really went into the details of what happened there in this transformation. There's some pretty fantastic stuff. So I'm going to recap the last couple of years really quickly. At DOES14, we talked about our Agile transformation, which kicked off in 2012, when we put in cross-functional Agile teams that now designed, built, and tested their own software.

00:03:01

And that yielded some reasonable improvements in what we called release impact: when we put a release in, we improved that by about 83%. Then we went through a true DevOps transformation. In 2016, we collapsed our product operations teams into our development teams so they would own the entire life cycle of building and running the software. We saw some fantastic results with that, including incidents per month dropping. We were able to grow our subscriber base some 27% through 2018, and TPS on the platform grew, through 2018, to about 4,000. A lot of that was enabled by the technology changes, but also by changing the way that we worked. Then in '17, we really focused on spreading DevOps to the rest of the organization: to our platform teams, to how we manage projects (moving more to a product mindset), and also engaging deeply with product management on those DevOps principles.

00:03:59

I presented last year with Brian Clark on lean portfolio leadership and product management meets DevOps. In that talk we covered how we look at the portfolio and how we integrate the other capabilities of DevOps into how we look at the product. We reduced something called impact minutes some 58% through '18. Release on demand, where we basically start getting rid of releases, improved some 460%. And finally, our employee NPS (eNPS) improved quite a bit, some 400% over that timeframe. So, looking at the problems we'll lay out today: back in 2010, we were looking forward and we didn't quite know what the future looked like, but we did know we needed to grow, we needed to lower our costs, we needed to go faster, and we needed to be more stable. And these were all things both our executives and our customers were asking us to do.

00:04:50

So fast forward, and you'll see that's actually what happened. We were able to grow our subscriber base, now through '19, to some 63 million subs. TPS on the platform continues to dramatically increase because of digitization, mobile, and the cable industry's shift toward self-service; it grew to about 6,000 TPS, which is a 700% increase. But we couldn't increase costs 700%. We really had to reduce technical debt and change the way our infrastructure worked, and that's what we're talking about today. So here's the problem with legacy systems: How do you patch them? How do you keep them secure? How do you increase the stability and safety of those systems? How do you go faster and deliver features, which are really what your customers want? And how do you support this growth without a massive increase in costs? As we add more subscribers and more transactions, we can't have costs go up proportionally.

00:05:43

And then, how do you minimize exposure to dangerous vendors? I'm going to pause on that and give you my definition of dangerous vendors. Dangerous vendors are vendors you spend more time with in audit and compliance than you do in engineering. They're vendors that surprise you at contract term and increase your costs by 500%, vendors that put Byzantine working agreements in place about how you can provision: you can't use virtualization, you have to isolate infrastructure. I think folks in the industry are familiar with these experiences. These are existential threats that can turn the P&L of a product line upside down as a surprise. Now, a lot of what I'm saying may sound like a victim mentality. Don't take it that way. You have to attack this problem and you have to engineer your way out of it.

00:06:27

If you wait, it will become a big problem for you. So one of the things we say at CSG is that we have a great heritage, and we honor that past, but we also inspire the future. Our view was: don't be legacy; be heritage, and continue to modernize. Great engineering companies continue to allocate time for this modernization. Modernization fuels DevOps. It creates safety, it reduces technical debt, and it improves things like productivity, quality, lead time, and recovery. It reduces risk: legacy technology, dangerous vendors, and also workforce sustainability. And finally, you're never done; once you do one modernization, you realize you have to do another, and you keep moving across your infrastructure. So here's the picture of the reality of modernization for us, and it's laid out in four stories.

00:07:19

The first story is about our API platform. Stories two and three are about our mainframe. And the fourth story is about our composition platform. All of these platforms are foundational; they support a massive business for us. If you were going to do this to your house, you'd move out, right? Well, unfortunately we can't do that. We have 63 million subs and over a billion transactions a day. We process $87 billion a year in revenue for our customers and produce 75 million bills a month. All of them expect that to work. Additionally, large-batch, go-dark transformations don't work very well. So we were intent on doing this without impacting our customers, and on continuing to transform while we actually had subscribers on the platform. So how do you approach this? Well, the easy answer is: very carefully, with great perseverance and engineering excellence. But there's much more to it, and I've got some capabilities which I'll talk about.

00:08:10

If you can master these, you can transform. The bottom floor in blue are the ones you're familiar with: automated testing, CI, telemetry, infrastructure. I'll spend very little time on those; they're fairly well known in DevOps. The ones at the top I'll spend a bit more time on, talking specifically about how we leverage things like feature switches, code porting, incremental rollout, and strangulation. And I have left a good bit of detail in the slides, so you can take this away, leverage it, and use it as a reference point in your implementations. So, story one: our API platform was running on a heavy Unix-based enterprise service bus, and I'll go into how we made it through that transformation. I'll start with this problem. This is what I call golf course software. This is software that's green-lighted at some higher point in the organization; see how happy that sales guy is with the CIO as they're making this deal. And they're saying stuff like this: it's a low-code environment, you don't need developers, you can just map your data, it's really easy to operate, it's already integrated with everything you already work with. Right? Don't do this.

00:09:22

And the other thing about golf courses is there isn't a developer or operations person in sight. So please don't force these heavy platforms on your development and operations teams. Let them pick the tools that they want to run. So that's really what we started with. We had this platform with horrible developer aesthetics: lots of windows you had to click into to map stuff and override the mappings, high build-and-test effort, 14-hour builds. You heard Gene talk about the Symbian problem at Nokia; I mean, that was it. It took forever to build and weeks to test. Deploys to production were massive, with 45 minutes to recycle every server. There was no observability or telemetry. TPS density was incredibly low, and there was an unsustainable cost to support the business growth. So, we've talked about dangerous vendors: this was one.

00:10:07

It was incredibly costly to actually support this platform. So our approach was to move to a commodity stack: port some 300 transactions, leveraged by 1,200 downstream integrations, to native code. We strangled off that old platform with feature flags and canary rollouts (I'll show some diagrams of that), and we applied, of course, all the foundational stuff: the testing, the CI, the telemetry, and the infrastructure. So we start with this: we've got this old service bus, and we've got all these transactions going into it. And we were fortunate: we had what's called a software load balancer that was written many years ago, and we actually had some configuration flags in that load balancer. So we leveraged it, and really leveraged this concept of feature switching, incremental rollout, and strangulation to convert over. As we started to port the code, we would take a low-risk client, a lower-volume client that had fewer use cases, and flip its feature switch.
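As a quick aside on the mechanics, here is a minimal sketch of what that kind of per-client, per-transaction feature switch in a routing layer can look like. It is purely illustrative; the class and method names are hypothetical, and this is not CSG's actual software load balancer.

```java
// Illustrative only: a tiny stand-in for per-client, per-transaction route flags.
// All names here (RouteTable, enableClient, etc.) are hypothetical.
import java.util.HashSet;
import java.util.Set;

public class RouteTable {

    // Clients and transaction types that have been switched to the ported (new) stack.
    private final Set<String> portedClients = new HashSet<>();
    private final Set<String> portedTransactions = new HashSet<>();

    // Flip a feature switch for a low-risk client first, then riskier ones.
    public void enableClient(String clientId) { portedClients.add(clientId); }

    // Flip a switch for a low-risk transaction type, then riskier ones.
    public void enableTransaction(String txnType) { portedTransactions.add(txnType); }

    // Route to the new native stack only when both flags are on;
    // everything else keeps flowing to the legacy enterprise service bus.
    public String route(String clientId, String txnType) {
        boolean usePorted = portedClients.contains(clientId)
                && portedTransactions.contains(txnType);
        return usePorted ? "new-native-stack" : "legacy-esb";
    }

    public static void main(String[] args) {
        RouteTable routes = new RouteTable();
        routes.enableClient("low-volume-client");
        routes.enableTransaction("queryAccount");

        System.out.println(routes.route("low-volume-client", "queryAccount")); // new-native-stack
        System.out.println(routes.route("big-client", "queryAccount"));        // legacy-esb
    }
}
```

The value of switching on two dimensions is that you can canary by client and by transaction independently, starting with the lowest-risk combinations and widening from there.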

00:10:57

We would make sure it worked; we would do that during the day and look at those transactions to make sure they were working. Then we went to riskier transactions, and then to riskier clients. Over time you phase through this and keep the entire thing in flight, and then we could actually remove that legacy enterprise service bus from the environment. I will dip into one piece of foundational modernization, automated testing, because it hits on two things: one, how important automated testing is (you heard that earlier in previous talks), and two, the dangerous vendor problem again. We had a set of legacy test-case tools that were incredibly heavy and incredibly expensive. We only bought them for testers because they were so expensive, so it created the perfect silo of not really getting testers and developers to work together.

00:11:49

The cost was going up about 25% a year on those tools, and we still had a high manual test effort. We commoditized everything to Gherkin and put the tests in version control, and now we had developers collaborating on test suites. The great thing is Gherkin is free, but the better thing is that it puts everything in code, in version control with everything else, and you really get those two roles to collaborate. By doing this, we were able to grow from almost zero automated tests at the time to almost 14,000 tests a day. We collaborated on a paper several years ago on this research, and it's available for you; I highly recommend you leverage it if you want to look at testing in legacy systems. I also want to look at how we leveraged that testing, because one of the things we needed to do was make sure every transaction was exactly the same.

00:12:35

The tests confirm a bunch of that, but we also used routing flags: basically running the test suites with an old route flag, and then with a new route flag, getting those results, and comparing them. That would catch subtle differences in the XML, spacing, things like that, which unfortunately could actually break downstream parsers. So we were able to get a lot more coverage by leveraging that test automation and those techniques. So here are the results. Today we just hit about 6,000 TPS on the platform. The TPS per node went up substantially; one of the problems before was that we needed a ton of nodes to run this. We would have needed about 28, but we only need about 4.5 to support that entire volume (we do run more nodes than that to provide isolation). And the cost per TPS dropped by over two orders of magnitude.
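To step back to that old-route/new-route comparison for a second, here is a toy illustration of the idea. The names are hypothetical and this is not CSG's test harness; the real suites ran recorded transactions through both routes and diffed the responses.

```java
// Illustrative sketch of "run it both ways and diff the responses."
// sendViaRoute() is a hypothetical stand-in for calling the platform with the
// old or new route flag set; it is not a real CSG API.
import java.util.List;

public class DualRouteCompare {

    enum Route { LEGACY_ESB, NEW_NATIVE }

    // Hypothetical: send the same request down a specific route and capture the raw XML.
    static String sendViaRoute(Route route, String requestXml) {
        return "<response/>"; // placeholder for the real call
    }

    public static void main(String[] args) {
        List<String> recordedRequests = List.of("<queryAccount acct=\"123\"/>");

        for (String request : recordedRequests) {
            String oldXml = sendViaRoute(Route.LEGACY_ESB, request);
            String newXml = sendViaRoute(Route.NEW_NATIVE, request);

            // Compare the raw payloads exactly: even whitespace differences matter,
            // because downstream parsers may depend on them.
            if (!oldXml.equals(newXml)) {
                System.out.println("MISMATCH for request: " + request);
            }
        }
    }
}
```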

00:13:22

You can see that cost difference on the results slide. Feature development increased, and things like recycling servers now only take two minutes, as opposed to 45 minutes per server before. It used to be incredibly dangerous; it's a lot safer now. As far as recognition, I want to recognize the SL BOSS team on the left. That's the team that spent over five-plus years on this. We've been able to grow that team to Bangalore because we now have a commodity platform; one of the key things is that we can train other people on it now. And in the lower right-hand corner, I want to recognize two people: Mike Battle, the manager and director of that team, and one of my colleagues, Jeremy van Haren, who did a fantastic job leading this. One thing we did notice when we took this picture is that we're a lot grayer than five years ago.

00:14:05

So one of the side effects of modernization is gray hair; that's just something to be warned about. All right. So this plaque is what I've got on my desk now: the best time to plant a tree was 20 years ago; the second-best time is today. So don't wait, start modernizing, and if you haven't yet, it's not too late. And my second favorite quote from this project is from my favorite superhero: luck favors the prepared. That's Edna Mode from The Incredibles, and the number of things we've been able to do by being prepared and having this platform ported is just fantastic. We can now spin up new environments and roll out new customers; we couldn't do that stuff before. So the final thing from the celebration was actually this. It turns out Wes did say that in The Unicorn Project. I didn't say the top thing;

00:14:50

what I said can't really be printed. And secondly, that's a 2U server, not an 8U server; the 8U servers were too heavy to lift, so we just used the 2U ones. All right. The second story gets into our mainframe, and mainframe DB2. We had a problem where we were one hundred percent VSAM, so we had a lack of commodity data access; access was only via CICS. Our maintainability was at risk, and we had this unsustainable cost increase again, which really jeopardized our viability as a platform. So our approach was to convert those VSAM files into 500-plus DB2 tables. We implemented incremental rollout here in a variety of ways, strangled off those VSAM accesses, and then started moving the read transactions off of CICS onto our commodity infrastructure to access that data store.

00:15:36

So here's the data store migration pattern, and you can use this with any pair of data stores. You go through four steps. Step one: your old data store is primary, which is where you are today. Step two: you make the new data store a replica, and you compare the results. Step three: you make the new data store the primary and the old data store the replica; you compare those, and you do it this way so that you can fall back. Step four: you move to the new data store only. The way this worked for us is that we had a set of switches at the mainframe level that let us walk through these states. Switch one: VSAM is read and write. Switch two: VSAM is still read and write, and we write through to DB2.
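To make those switch states concrete, here is a minimal sketch of a repository walking through the four steps with dual writes. The Store interface and the class names are hypothetical stand-ins, not the real VSAM/CICS or DB2 access paths.

```java
// Illustrative sketch of the four migration states described above.
public class AccountRepository {

    enum MigrationState { OLD_ONLY, NEW_REPLICA, NEW_PRIMARY, NEW_ONLY }

    // Hypothetical abstraction over the legacy (VSAM) and new (DB2) stores.
    interface Store { String read(String key); void write(String key, String value); }

    private final Store vsam;   // legacy store
    private final Store db2;    // new store
    private volatile MigrationState state = MigrationState.OLD_ONLY;

    AccountRepository(Store vsam, Store db2) { this.vsam = vsam; this.db2 = db2; }

    // Walking this switch forward (and, for the middle states, backward) is what
    // makes it safe to find issues and roll back without impact.
    void setState(MigrationState s) { this.state = s; }

    String read(String key) {
        // Reads always come from whichever store is currently primary.
        return (state == MigrationState.NEW_PRIMARY || state == MigrationState.NEW_ONLY)
                ? db2.read(key) : vsam.read(key);
    }

    void write(String key, String value) {
        switch (state) {
            case OLD_ONLY    -> vsam.write(key, value);                            // step 1
            case NEW_REPLICA -> { vsam.write(key, value); db2.write(key, value); } // step 2: write through
            case NEW_PRIMARY -> { db2.write(key, value); vsam.write(key, value); } // step 3: write back
            case NEW_ONLY    -> db2.write(key, value);                             // step 4: old store is inert
        }
    }
}
```

A nightly compare job between the two stores would sit alongside something like this during steps two and three, surfacing records that disagree so the logic and the data can be cleaned up before the final switch.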

00:16:17

Then the next step is that DB2 becomes read and write, and we write back to VSAM to keep it in sync. Note that at both switch two and switch three we can roll back, and that's important, because when you find an issue, and you will find an issue, you need to be able to roll back without impact. Then you do these nightly compares, and they find data that's been around for 30-plus years that may actually be broken; you need to fix up the logic and clean up the data. The final step is the switch to DB2 only, at which point VSAM is inert and there's no going back. So how are we doing the switching? If you remember that earlier picture of the load balancer, we had those route flags.

00:16:59

We added a read flag in there, so these incoming transactions could switch and start going to DB2, and we went through the same process to start switching everything over to read directly from DB2. So here's where we landed. Before, we were VSAM only, via CICS; now we've got DB2 direct. Our read transactions were one hundred percent CICS; now 62% go direct to DB2, and we have really high data accessibility. And our response time, surprisingly, is actually even better than it was before. Doing all this with near-zero customer impact is pretty impressive. So the next story is mainframe Java, and this one looks at a problem we've got: close to 4 million lines of high-level assembler. If you've ever seen assembly language, it's hard to maintain,

00:17:46

it's hard to understand, so our maintainability was at risk, and we continued to have this unsustainable cost increase across everything. The approach we took was to write some cross-compiler tooling (you'll see that in a minute) and start targeting the update logic. We also wanted to run this code off-board of the mainframe, and again carry through the incremental rollout to minimize impact. One thing that helped: you saw that functional test coverage earlier, and the good thing is we'd already covered a good portion of the platform, so when we had to augment it for that legacy code base, the high-level assembler, a lot of the work was already done. We got leverage and built on top of it. We also used code analysis to look at all the code; those are pictures of the module dependencies in the system,

00:18:28

so we could start unwinding the sweater, right? The big hairball: pull off the lower-risk, less impactful transactions first and work through the system. Now, cross compilation. Here's the deal: you've got 3.7 million lines of code, and you've got two options. One, you can start typing, which you would just never finish, and the other problem with that is really, really long cycle times to actually get code converted. Or two, you can create a cross compiler, which is what we did. We invested with some research companies that specialize in this to cross compile the high-level assembler into higher-level languages. What you see here is code that really came out of the cross compiler, and for folks who read assembler, you'll see that it's isomorphic, really the same.

00:19:14

On the left is the assembly code; on the right is what gets output in Java. Now, the key thing is that after this comes out, we put in all of those foundational constructs: CI, unit tests. And the really important thing is that we continue to refactor the cross compilation to get the code quality to higher levels. This lets us find domain-specific patterns and increase the maintainability of the target code base. In this example, we have naming overrides that get loaded into the cross compiler, which take symbolic names and turn them into real variable names later (a toy illustration of that follows below). And again, we leverage that functional test coverage across the entire converted code base to make sure the behavior is the same. Then, once it's in production, we again use that feature switching, and we integrate all of the code with our telemetry.
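Here is that toy illustration of a naming-override pass. The symbols, the mapping, and the class are entirely made up for illustration; they are not the actual cross-compiler or its output.

```java
// Purely illustrative: a toy "naming override" pass that maps symbolic names from
// the assembler source to meaningful Java identifiers on the next cross-compile.
import java.util.Map;

public class NamingOverrides {

    // In practice a table like this would live in a file under version control
    // alongside the generated code; it is hard-coded here for the sketch.
    private static final Map<String, String> OVERRIDES = Map.of(
            "R7",       "accountBalance",
            "WRKAREA",  "memoWorkArea",
            "FLD00042", "statementDate");

    // Apply an override to an identifier emitted by the cross compiler.
    static String rename(String emittedSymbol) {
        return OVERRIDES.getOrDefault(emittedSymbol, emittedSymbol);
    }

    public static void main(String[] args) {
        // Before: isomorphic output keeps the assembler symbol; after: a readable name.
        System.out.println("R7 -> " + rename("R7"));             // accountBalance
        System.out.println("UNKNOWN -> " + rename("UNKNOWN"));   // unchanged
    }
}
```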

00:20:00

For telemetry, we use Elasticsearch and push all of our telemetry into it, and we also have auto-rollback: if we detect errors, we can automatically fall back, which gets our recovery time close to zero. Again, here's what this looks like. We add another flag, the S flag, and with it, update memo still goes to CICS, but update memo for client two now goes to Java, running on a Linux cluster off-board. What we're also able to do is auto-detect failures: if we start getting failures from the Java code we're calling, we automatically revert back to CICS after we see a certain number of failures, again driving our recovery time very low (a rough sketch of that auto-revert idea follows below). So here's kind of where we are; this project is not complete yet.
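Here is that rough sketch of the auto-revert idea: a simple consecutive-failure counter in front of two back ends. The Backend interface, the names, and the threshold handling are hypothetical stand-ins, not CSG's implementation.

```java
// Illustrative sketch: route updates to the new Java path until a failure
// threshold is hit, then automatically fall back to the legacy CICS path.
import java.util.concurrent.atomic.AtomicInteger;

public class UpdateRouter {

    interface Backend { String process(String updateTxn) throws Exception; }

    private final Backend javaPath;   // new off-board Java/Linux path
    private final Backend cicsPath;   // legacy CICS path
    private final int failureThreshold;
    private final AtomicInteger consecutiveFailures = new AtomicInteger();
    private volatile boolean revertedToCics = false;

    UpdateRouter(Backend javaPath, Backend cicsPath, int failureThreshold) {
        this.javaPath = javaPath;
        this.cicsPath = cicsPath;
        this.failureThreshold = failureThreshold;
    }

    String process(String updateTxn) throws Exception {
        if (revertedToCics) {
            return cicsPath.process(updateTxn);   // stay on the legacy path after auto-revert
        }
        try {
            String result = javaPath.process(updateTxn);
            consecutiveFailures.set(0);           // success resets the counter
            return result;
        } catch (Exception e) {
            // After N consecutive failures, auto-revert so recovery time stays near zero.
            if (consecutiveFailures.incrementAndGet() >= failureThreshold) {
                revertedToCics = true;
            }
            return cicsPath.process(updateTxn);   // this request still completes on CICS
        }
    }
}
```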

00:20:41

We're in progress; that's why I have a star on the slide. Before, we were 100% high-level assembler. Our goal is to get 85% of it, the update transactions, moved off to Java. We're getting toward our target of about 40% going direct to DB2. Maintainability and productivity: great improvements there. Telemetry: we really didn't have great telemetry on this code, integrated with everything else; now we use our open, common platform. And then there are the shared practices and tools: the CI, the code refactoring tools, all those tools that you really enjoy on other platforms but that we never really got on the mainframe; now we can leverage them. And again, our goal in this is near-zero customer impact as we do these rollouts, and because of that auto-rollback technique, we can make these changes during the day. All right, some thanks here. There were so many people involved in this project, close to 200 people; we even had Shrek involved (that's a picture from a Halloween party), and it was really an effort across everyone to do both the DB2 and the Java work. I do want to recognize one individual, Gary Gallagher. He just hit 36 years at CSG. So let's give Gary a round of applause.

00:21:48

So Gary is one of our distinguished architects, and what I'm really proud of is his ability to continue to adapt and learn and take this platform to the next level. It's really been fantastic. But thanks to all the mainframe teams and all the other teams that were involved in this; it was really a great accomplishment. So the final thing is our composition platform. Our composition platform is where we produce those 70-plus million statements per month. Think of it as a platform that takes the layout from the billing system and puts it on paper, or on PDF, or presents it online. If you look at this, we had a lot of the same problems: four million lines of proprietary COBOL, over 25 years old, with no version control or CI, running on a proprietary Unix.

00:22:32

We didn't have any automated or functional tests, there was no telemetry, and we couldn't horizontally scale it; it was unaffordable to scale vertically. The vendor just wants you to buy bigger and bigger iron that's more and more expensive, and you have to license all this expensive software on it. We had a crazy number of impacting incidents per day. It really was very tough: it wasn't a safe environment for the employees and it wasn't a safe environment for our customers. So I'm not going to tell this story; I'm going to let Steve Barr tell it through a video. There are a couple of things I want you to note here. For folks who follow Steve Mayner, you'll understand the concept of transformational leadership: setting a vision, inspirational communication, recognition. Those are all things you'll see in this. At CSG we talk about something called the shadow of leadership; think about the shadow you leave as a leader. It's very important. And I'll let Steve talk about this now.

00:23:29

For the first time in 25 years, a big part of our application is under version control, and that's a big deal. There's more: we also have continuous integration. What that means is that as a developer makes a change to the software, the changes are automatically compiled and automatically built as part of the build process, so we know right away whether something is working, as soon as we check it in. That's great. There's more: we also have COBOL unit testing. COBOL is considered a legacy technology, but we've put some modern techniques in place so that when a developer checks in code, it's automatically tested; so there are unit tests now. Around the globe, we've really built a learning culture. We learn when we have to change things, when we have to improve things; we learned a ton through this effort, and that's what I'm probably the most proud of. There are a lot of things that I'm proud of, but what we've been able to do is create what I would consider a safer environment. All right, guys, I'll kick it off. You ready? All right.

00:24:48

So let's give Steve and the composition team a round of applause, and if you see them later, give them a little hip-hip-hooray, because it was an incredible project. As we covered there, about 90% of this project is completed: 4 million lines of proprietary COBOL converted to GnuCOBOL, which is an interesting project because it compiles to C, so you can integrate other third-party C libraries very easily. Almost zero impacting tickets, we've got telemetry, and we're moving to a commodity solution: all Linux and this GnuCOBOL. We're also pretty proud of this: you'll see release on demand mentioned in a minute, and our lead time for features used to be months to get things through the system; we now put things in in days. We're putting features in as they're done, with all that testing, version control, and automation. You also saw in that picture that there was Jenkins running up on that board; that was Jenkins running COBOL builds and COBOL automated testing.

00:25:40

All right, so I'll give you a little process update in the last couple of minutes here, and how it relates to modernization. One of the processes we kicked off in 2018 was release on demand, where we wanted to start getting rid of large releases. Both Damon Edwards and John Willis had challenged me and said, okay, you reduced batch size once; reduce it again. And I struggled with that for a few years. So we came up with this: make batch size one. One feature goes in when it's done. And look at the incredible improvements here: we're now at about 62% of all features going in when they're done. We were at 5% in '17, and now we're at 62%. We couldn't have done this without a bunch of changes, but one of the key changes was modernization: being able to shorten the cycle times and improve quality is what allows us to actually do this, including rolling things back if there is an issue.
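A minimal sketch of what batch size one can look like in code: a feature ships dark behind a flag, is enabled on its own when it's done, and can be disabled instantly if something goes wrong. The in-memory FeatureFlags class below is a hypothetical stand-in for a real flag store, not CSG's implementation.

```java
// Illustrative only: features go in dark behind individual flags and are
// released (or rolled back) one at a time, independent of any big release.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class FeatureFlags {

    private final Map<String, Boolean> flags = new ConcurrentHashMap<>();

    public boolean isEnabled(String feature) { return flags.getOrDefault(feature, false); }

    public void enable(String feature)  { flags.put(feature, true);  }  // release on demand
    public void disable(String feature) { flags.put(feature, false); }  // instant rollback

    public static void main(String[] args) {
        FeatureFlags flags = new FeatureFlags();

        // A hypothetical new statement-layout feature is merged and deployed, but off by default.
        System.out.println(flags.isEnabled("new-statement-layout") ? "new layout" : "old layout");

        flags.enable("new-statement-layout");   // the feature goes in the day it's done
        System.out.println(flags.isEnabled("new-statement-layout") ? "new layout" : "old layout");
    }
}
```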

00:26:29

The other thing is how modernization improves CAB. I always ask this question: how many people love CAB? That's what I thought. Oh, I see one hand; I'm sorry. For folks who follow me on Twitter, you'll know that CSG actually got rid of CAB, and it created quite a flurry. My mailbox on Monday morning was flooded, and this is what you want it flooded with: cancellations for all of those CAB meetings. I was talking to Dominica last night about releasing all of those time thieves, across hundreds of people, from meetings they no longer have to go to that were incredibly frustrating. I was fortunate enough to collaborate on a paper around this with a bunch of other folks in the industry, and I highly recommend you read it.

00:27:16

One of the sections is about architecting for safety, which is really about modernizing how your software works to minimize the impact of change. And I highly recommend Nicole's recent research on how a clear change process positively impacts software delivery performance. It's true, it does; but heavyweight change processes do not. So there's some great research there, and some great research in the paper, that I highly recommend. All right, the final thing is to refresh all of these metrics for folks. I took you through the 2018 numbers, and I'll take you through where we're trending now. Our release impact, because we're getting rid of releases, has obviously dropped significantly; it's down 94%. Incidents per month continue to go down. Subscribers:

00:28:03

we've added more; as you saw on the previous slide, we're at about 63 million now, a total 29% increase since we started. TPS has just exploded again; there's just this insatiable desire to consume our APIs, up 700%. Impact minutes continue to go down; we're projecting about 6,637 by the end of the year, which is a 71% reduction. Release on demand skyrockets; our goal by 19.4, which is our next release, is to be at 70% of everything going in when it's done. We don't have a new reading on eNPS, but we hold steady at that 400% right now. So the final thing is what we learned. You can modernize your legacy applications. I encourage you to be heritage; don't act like a legacy company. It's vital, because there are existential risks out there, but it does require engineering excellence. Leverage automated testing, CI, telemetry, and infrastructure automation, all those things you hear about in DevOps. Optimize for developer and operations aesthetics. Leverage things like feature switches, code-porting techniques, incremental rollout, and strangulation. And finally, fuel DevOps:

00:29:07

you can create safety and reduce technical debt, you can improve things like productivity, quality, lead time, and recovery, and you can reduce risks: legacy technology and its technical debt, the risk from dangerous vendors, and workforce sustainability. And then the final thing is the help I'm looking for. Right now: capacity forecasting, estimation, and the wishful thinking around them. I'm convinced this is the hardest problem in computer science: how do you communicate capacity, figure out how to do estimates effectively, and combat the wishful thinking that everything should take less time and get to production faster? That's what we want, but it's really hard to battle. Then there's the CapEx-to-OpEx cloud hurdle costing. Improving intake lead time with traditional IT mindsets, like everything needs an SOW, so you have to write everything down; but how can you be agile when you have to write everything down? And then creating capacity for what's called backlog swarming (if you follow Jon Hall, those are all those threes and fours): how do we tackle those and reduce the debt around the incidents that remain out there? And that's it. Thank you very much for your time.