Las Vegas 2019

Your Shift-Left Initiatives Are Failing - Here's How to Fix Them

Today's development teams are becoming more agile, building code more quickly and accurately to meet the needs of their customers and employees. But faster development is now putting pressure on teams to test code more quickly, often within shorter sprints. Shifting quality left has never been more important.


Join us for this session as we share how to stop falling behind and how to win at shifting left.


Rich Jordan

Test Engineering Manager, Nationwide Building Society

Transcript

00:00:02

Thanks for coming along. My name is Rich Jordan. As you can see on the slide, I'm not Steve Feloney and I don't work for Broadcom, despite what you might have read on the sign out the front. I'm actually a customer of Broadcom, and Porcha motivated me to come along and take on this subject: your shift-left initiatives aren't working, here's how to fix them. I'm a test engineering manager within a test COE at Nationwide Building Society, which is, as you can probably tell from my accent, a UK-based business. So, the agenda I'm going to walk through today: a bit about what Nationwide Building Society is. I appreciate you guys are from the US, so you're probably thinking we're the insurance people, but we're not. I'll explain what we are and what a building society does, and a bit about Nationwide and where we've come from.

00:00:52

What our IT estate looks like, a bit of commonality around the challenges that we face, the changing world of banking, and the need for change and transformation. There are a lot of common themes with the agenda of the event coming out through these things. And then we're going to tackle the question: everybody is shifting left, but what do we mean by that? What are the common industry buzzwords around shifting left and testing that tend to come out? Sometimes they work; a lot of the time they don't. So we'll explore what they are. And then we're going to talk about what we're actually doing within Nationwide around test engineering, and really making some of those shift-left initiatives work for us. We'll go into a bit of detail about what that really means. So, come on, clicker.

00:01:44

What is Nationwide Building Society? We are a UK-based financial services organization. We're a bit like a bank, but the difference between being a building society and a bank is that we don't actually have shareholders. We have members; we're owned by our customers, essentially. Our roots trace back to 1846, so we're quite an old company. Really, the main takeaway of this slide is the third bullet point: we've grown to be the world's largest building society through a series of mergers and acquisitions over the last 30 to 40 years, and especially the last one in 2008, during the banking crisis, when we amalgamated with a number of smaller building societies within the UK. And as you can probably tell, if you merge companies together, you need to merge those IT systems together, and that creates challenges. A bit about our customer base:

00:02:42

We have a relationship with one in four households in the UK. If we've got any UK-based people here, you probably know exactly what Nationwide is; you probably bank with us, or you have a mortgage with us, or something like that. We have about 17,000 employees. We also have a large strategic partner base out in India. And our channels include everything you would expect from a large financial services organization. So we do current accounts, we do everything under the sun across our channels: mobile, tablet, smartwatches, you name it, we'll do it. We also have a big bricks-and-mortar presence in the UK, with around 700 branches.

00:03:20

Okay, so some stats about what we do in terms of IT, to give an idea of scale. In 2018 we made 25,000 production changes, and year on year we raise the bar in terms of our resilience and operational delivery. Our IT development is broken down into projects and squads; we're currently going through a transformation exercise where we're moving from a project-based organization to a product-based organization. We're not unique in that. There are about 130 change initiatives going on within IT at any point in time, and the majority of those go through one of our 21 squads. A squad, for us, breaks down by business unit: we do current accounts, and therefore we've got a squad for current accounts; we do mortgages, we've got a squad for that; we do a mobile app.

00:04:13

We've got a squad for that. You get the idea about how we align our squads. Our technologies include a large array of newer technology. We're implementing Kafka in an event-driven architecture at the moment; that's the newer technology, along with APIs and microservices. But because of our legacy, we've also got some older technology. One of the ways we explain it is that we used to sponsor the England football team, so what we did is a mapping between England football players, when they made their England debut, and when we introduced systems into Nationwide. I appreciate context is everything here; football to me is something very different to you. So if I asked you to name, or sorry, guess when Trevor Brooking made his England debut, you wouldn't have a clue who he was.

00:04:58

So I've switched the context, and I think it will work: NFL drafts. Our oldest system was introduced when the first draft pick was a chap called Ed "Too Tall" Jones. Anyone want to have a stab at when he was picked in the first round? Someone said '74. There you go, 1974. So you get the idea: yes, we've got some new stuff, but we've got challenges around legacy, where we tend to bolt onto a highly coupled estate and still carry a legacy dependency on those old technologies. A bit about me: I traditionally come from a test engineering background and what we historically called a test center of excellence. So anything technical test sits with a team that I run, amongst a few other portfolios, and that's how we operate.

00:06:03

We essentially deliver capabilities as those teams and, as you can see there, that team is called DAVE OPS. There are two reasons for that. The first reason is slightly provocative: it's kind of two fingers up to people in our organization who are saying the right words, using the right terminology, bringing in a lot of the tools, but working in exactly the same way as they used to work. The other one is far more PC, and it stands for Data, Automation, Virtualized Environments, Operations, Performance and Security. That's what we do. They are testing capabilities that historically we ran as a center of excellence, with a kind of pull effect in terms of people coming to us and using those capabilities. We figured out that that didn't work. We created our own little ivory tower and we evangelized from the top how everybody should be doing it.

00:06:51

And they went off and did their own thing, and it didn't really work. So what we started to do is turn ourselves into a center of enablement, where we're essentially indoctrinating, educating and evangelizing, creating and incubating capabilities that we then federate out into those areas. Much more about enabling, rather than trying to hold the expertise internally. So that's a bit about me, and a bit about Nationwide. Why does Nationwide need to change? Anybody involved in financial services is probably very familiar with these bullet points. The fintechs are coming: the Monzos of the world, the Starlings of the world. They are, to a certain extent, greenfield; they're born in the cloud; they have an ability to be agile with a little 'a' that is very scary to us. They can offer new products far quicker than we ever can.

00:07:45

And they're only going to grow in popularity; we've seen that over the last couple of years. Coupled with that is something in the UK called open banking. Open banking is a regulatory demand on the banking sector to essentially open it up to competition. It's an API-based regulation where we need to make it as easy as possible for our customers, our members, to switch banks, or to understand the best offering they can get from any of our competitors. Which leads on to the next point, customer experience: if we're not giving the best customer experience to these people, they're just going to use open banking and go off somewhere else. Then there's cost of IT change; we're always under pressure to reduce costs.

00:08:32

That's not a new one; everybody has that. Resilience: we're a regulated organization, and not only do we want to be resilient, we need to demonstrate to our regulators that we are resilient. You'll be familiar with a couple of high-profile banking outages in the UK quite recently; because of those, the regulator is very keen to see that the peers of those organizations can demonstrate they are resilient in the way they operate. So that's a very important thing for us. And we've also got industry trends. We're very aware that our stakeholders are sold certain dreams in terms of the way these fintechs are working: a new way of working that can generate far greater outcomes. There's a slight bit of cynicism in the way I've articulated that, in terms of buzzwords, but you get the idea from the way we call ourselves DAVE OPS, and the fact that we disrupt the people who are using the right terminology and the right tools but working in the old ways.

00:09:38

Within Nationwide, we are delivering a much wider, broader transformation agenda under the banner #WeAreChange. This is an organizational change delivered from the top down, and you can see the OKRs, if you like, around what that change capability is delivering. Really, the top two sit firmly within the strategy of what we're doing in terms of test engineering: improve pace and predictability. Predictability is all about test, and we want to do it fast. So that's a bit about Nationwide, a bit about what I do, and a bit about what the broader agenda is doing. Let's try to tackle the question: shift left. What is shift left? By way of research for this, I Googled it very quickly, and it's quite interesting that the first ten hits all say shift left is shift-left testing. So why is it all about testing? It's an interesting one. I went to images very quickly, and that was the picture that came up. And it's quite odd: look at the scale in terms of attention to quality. If you look at the old way of working, does it suggest that no one paid attention to quality until test? Which is kind of worrying, isn't it, and inaccurate.

00:11:10

And so what the model is suggesting is that the attention to quality has essentially moved to the left-hand side. But it does beg the question: does that suggest that quality then drops off after development and build? Again, I'll leave that open to interpretation. What we've then got is a number of transformation initiatives that have become very unique to test. For anybody who works with consultancies, these will be very familiar as the things pitched to your organization as what will transform your test capability, increase velocity, and enable you to save on your testing costs. And they're not wrong. There are some slight misinterpretations about certain things, the obvious ones being BDD and TDD treated as a test-only activity; that's quite a common thing within a lot of the engagements we see. But they are outcomes of something that is missing a foundation.

00:12:23

Okay. And really this is where our journey starts, in terms of our test engineering journey within Nationwide. So what are we doing, and how are we responding to the transformational strategy that Nationwide is asking for? What this is is a test manifesto, and you can see from the tagline at the bottom that this is an adaptation of John Ferguson Smart's take on BDD, behaviour-driven development. Now, we're not actually trying to do BDD within Nationwide; we're trying to do model-driven development. A slight adaptation, but the picture and the statements fit well for us. So cast your mind back to the challenge we face: we carry a complex, highly coupled IT estate, we're going to carry a bit of technical debt, and we start to build models to understand where we carry that technical debt.

00:13:24

And we acknowledge the fact that we actually carry that debt, and that we don't know certain things about our IT estate. The only way to address that is to first acknowledge it, and then to start conversations about chipping away at how this thing should be working. They're the first two points on that slide. And this is the crux of what John Ferguson Smart talks about: in BDD, for example, it's not about using Gherkin or Cucumber, it's about the collaboration between the three amigos. It's not lost on me, by the way, that there are four people at the top, but the designer and the business are the same person in that model. If you've got that, you have a stable base for predictability. We know what we're doing; we know what we should be doing.

00:14:11

We have an articulation of a common understanding of quality. Quality is subjective, it's in the eye of the beholder; we put it in the eye of the collective, and therefore for the first time we understand what a quality benchmark means and we can go for it. We have predictable outcomes: we know what we're delivering in terms of development, design and test, and we can go for it. If we know those things, we have a very good grounding for automation. If something isn't predictable, you can't automate it. You can't. So when we've got those things, we can automate. If we're automating, we can be faster; we're generating lots of feedback (we're on number four) and we can execute things in our CI/CD pipeline. We get lots of metrics, and we can start to understand where inefficiency sits, where we're winning in certain aspects, where more technical debt is carried, where we've got uncertainty.

00:15:09

We're becoming very driven by analytics. And where these things are happening, we want to evangelize, we want to celebrate the fact that they're working, that this is the best way to attack these things and we've got to stop doing certain others, and we want to share that best practice with the community. This is all about iterating, about continual improvement. You get the idea. So this is a 65,000-foot view of what we're aiming to do. The next few slides are going to deep-dive into each one of them to give you a bit more substance around what I'm really talking about here. It's easy saying these things; actually putting them into practice is the hard part, and that's possibly why the shift-left initiatives are failing. Okay, come on.

00:15:55

So, the problem with test: ice creams versus volcanoes. Testing hasn't really changed in the way that it's delivered in the past 20 to 30 years. We've delivered through ice cream cones: lots and lots of functional, UI-based testing, probably by ex-business users executing happy-path functional tests. We've made attempts to do automated GUI tests, but automated GUI tests are very flaky; not always having predictable results means they fall over a lot. So we've put a lot of time and effort into creating these automated packs, they just don't work, and we revert back to doing manual testing. Integration tests kind of look the same as the functional tests, big end-to-end functional tests; we're not really doing proper integration testing, or integration in the small. And unit tests are a bit in the eye of the beholder: the way the developer built

00:17:00

it is the way they tested it as well, and therefore the negative tests, the structured testing, kind of don't exist. And underpinning this is a kind of knowledge-transfer problem: although we've been on certain courses and got certain certifications about the way we do test, we don't apply structured testing techniques to the way we work, and that's a problem. It's kind of subjective. So what we need to do is move from the ice cream to the volcano. We love a triangle in test, and there were a few presentations that, although they didn't call out test specifically, had triangles in them in terms of the way we're attacking this thing. What we're talking about is deconstructing the topology of the system and attacking the functionality where the functionality sits.

00:17:50

It doesn't all sit in the UI in the context of the business. So we're doing far more service-based tests, far more ETL-based tests, far more API-based tests. We are being far more structured in the way we attack these things, because they are smaller components of a bigger piece that we've deconstructed, essentially modeling it from a testing perspective. We're pushing the rigor down into unit testing, because if we're testing much smaller components, why wouldn't we be doing that earlier? And if we're doing these things, and we are strangling, or constricting if you like, the amount of UI-based testing we need to do, we've got a much smaller footprint to create in terms of UI-based testing. So we've got highly automated API-based tests, highly automated service-based tests, automated ETL-based tests, and automated GUI-based tests.
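As a rough illustration of the API-level checks described above (rather than anything Nationwide actually runs), here is a minimal pytest sketch against a hypothetical get-customer-details endpoint; the base URL, fields and status codes are assumptions for illustration only.

```python
# Minimal sketch of an API-level test below the UI; the endpoint,
# fields and behaviours are hypothetical, not Nationwide's real API.
import requests

BASE_URL = "https://api.example.test"  # assumed test-environment URL


def test_get_customer_details_happy_path():
    # Happy path: a known customer returns the expected core fields.
    resp = requests.get(f"{BASE_URL}/customers/12345", timeout=5)
    assert resp.status_code == 200
    body = resp.json()
    assert body["customerId"] == "12345"
    assert "accounts" in body


def test_get_customer_details_unknown_customer():
    # Negative path: an unknown customer is rejected cleanly, which is
    # slow and flaky to prove through a GUI but trivial at the API layer.
    resp = requests.get(f"{BASE_URL}/customers/does-not-exist", timeout=5)
    assert resp.status_code == 404
```

Checks like these sit in the wide middle of the volcano: fast, predictable and cheap to run in a pipeline, leaving the thin UI layer for genuine user-experience concerns.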

00:18:46

And we're still going to do a bit of exploratory testing at the end, because we're only testing because we're changing something, and therefore there is a certain degree of unknown about what we're changing. It's going to be inefficient to automate that, so we're going to do some manual testing; let's be realistic. So saying the volcano means "let's do away with all UI-based testing" is kind of not getting it. What it's getting at is that we attack the estate, or the functionality, at the level where the functionality sits. And therefore, if we concentrate our effort in terms of where we're doing the testing, we can pay far more attention, for example, to user experience at the UI level. We don't really want to be doing end-to-end functional testing at the UI, because that's not what the user experience is about, and we don't want to be proving the integrations at the UI stage; let's do that at the integration layer.

00:19:51

Easiest to demonstrate with an example. This is a few years old now, and it's an architecture diagram of one of the changes that came through the Nationwide estate. A lot of systems, a lot of interactions, highly coupled, a few legacy systems in there. It's a challenge. Where do you cut the scope of testing? Where do you start? Where do you stop? We risk never delivering anything if we try to do everything, and therefore you just can't escape a risk-based approach. That's a testing question: where do you understand the impact of change? Because let's not treat this as a testing problem; let's treat it as a change impact problem. Now, what we start to do, and I like the way Martin Fowler, who talks about technical debt and technical debt quadrants, goes about it, is come at it from a design perspective.

00:20:44

He cuts technical debt into four quadrants, and Nationwide has historically sat very much in the orange and the red quadrants, sometimes for absolutely the right reasons. I mentioned the mergers and acquisitions: for the right business reasons we cut a few corners to make sure we were hitting certain dates for the regulators, but we created a bit of debt for ourselves. And sometimes we've done it for no good reason, for naive reasons, and that's where the layering comes in; we don't even know where we're doing it. A good example of that is this picture, in terms of where you cut the impact of change. It's like saying: if I chuck a little pebble into this architecture diagram, where does the ripple stop? If I keep chucking pebbles into it, sometimes I'm going to miss where the ripples stop, and then inadvertently I've layered debt: my documentation for all of these interconnected things has become out of date, or they don't quite fit together any more. And that's a problem.

00:21:58

So

00:22:04

Going a bit old school here: when we do testing qualifications, one of the first things they teach us is verification and validation. Verification: do we understand what this thing is supposed to do up front? Validation is testing. Now, I'm going to suggest that as an IT organization, as a testing function, we do far too much validation and nowhere near enough verification. To go back to the shift-left picture, it's kind of accepted that requirements are not very SMART and designs will be slightly out of date. That's not good enough. And why do we not spend more time doing verification? Because prevention is better than cure. If I miss something in design, I'm going to build that thing, and hopefully I'll find it in test; if I don't find it in test, I'm going to find it in production, which is an even bigger problem.

00:23:04

Michael Bolton is quite a prominent voice within the test engineering industry, and what he's saying is true, right? If this was any other industry, any other engineering function, we wouldn't let them get away with it, whoever they are. And really what we're talking about here, and this is where we get on to Richard Bradshaw's FART model, is a realization that IT systems are complex. They can do many things, and therefore we can never know everything about what a thing will do. But we need to keep pushing the bounds of what we understand, keep pushing the bounds of our knowledge. And importantly, when we're doing that, we need a way of capturing that knowledge and a way of reusing it, because if we become an ever-learning organization, we can keep pushing the bar in terms of what we know and what quality looks like to us.

00:24:13

So we model. Our models are living articulations, living specifications, for us. And the important thing for us is that those specifications live and they tell us the impact of change. This is the hierarchy of topology that we actually model: if we make an API function call change and we change our models, they will tell us the ripple effect into the system layer and into the UI layer. We're spanning from the business function into the technology implementation of that business function. And that's a really important differentiator in terms of implementing things like automation, things like data, things like virtualization: we're creating the foundations for how this thing is supposed to work. Because if we know how it's supposed to work, we can validate it, and we can automate the validation. It's easy, right?
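At its simplest, a living model that reports the ripple effect of a change is a reachability question over a dependency graph. The sketch below is a hypothetical illustration of that idea with made-up component names, not the modeling tooling the talk refers to.

```python
# Hypothetical sketch: treat the modelled estate as a directed graph of
# "who calls whom", invert it, and walk upstream to find everything a
# change could ripple into. Component names are illustrative only.
from collections import defaultdict, deque

calls = {
    "mobile_ui": ["customer_api"],
    "branch_ui": ["customer_api"],
    "customer_api": ["customer_service"],
    "customer_service": ["core_banking", "customer_topic"],
}

# Invert to "component -> components that depend on it".
dependents = defaultdict(set)
for caller, callees in calls.items():
    for callee in callees:
        dependents[callee].add(caller)


def impacted_by(changed):
    """Breadth-first walk upstream: everything that could feel the ripple."""
    seen, queue = set(), deque([changed])
    while queue:
        for dep in dependents[queue.popleft()]:
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen


# A change to an API function ripples into the system and UI layers.
print(impacted_by("customer_service"))
# {'customer_api', 'mobile_ui', 'branch_ui'}
```

The point is not the traversal itself but that the model is executable: when the model changes, the impacted scope falls out automatically instead of being rediscovered by hand.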

00:25:12

And these models that we create are not just testing models; they are models for design. This is where we're doing model-driven design, not model-driven testing. We recognize that change will happen, it's inevitable, and this is why we need living specifications: if we're using Word documentation, if we're using Visio diagrams, static documentation, they're only going to go out of date. People are not going to update them; we will miss some stuff. Those living specifications give us insight far above what we would ever know from a manual analysis. So what we've got here is an output of a collection of models we've created. I mentioned we're doing an event-driven architecture implementation; this is Kafka, from an implementation around streaming customer data.

00:26:07

And what we've done, as you can see, is break up the topology of our infrastructure, and we can see the functional path through, essentially, the topology of "get customer details". Now, you could probably do it in a traditional, unstructured way, and if I posed it to you and said, "guess what kind of customer-details transactions you would create," you could probably have a guess and be 80%, 90% there. But we go at it from a very structured, collaborative, conversational point of view, where we thrash out not only happy paths but negative paths, data quality paths, all those different things, and iteration after iteration we come up with new scenarios around how this thing could work, how it could go wrong, and what technical debt we carry.

00:26:59

How do we address that technical debt? And you can see from there, without knowing anything about our estate, where the complexity sits. You can see that BDT is where the complexity sits for us. And what we do, and this is where we use modeling tooling, ARD being a Broadcom tool, is apply test optimization techniques, such as pairwise, which you may have heard of, to make sure that we refactor the design of the test pack so that over time our test pack doesn't layer. It's the same problem as with designs: if I create a regression pack for a typical system, and six months later I iterate it, and six months after that I iterate it again, chances are it's proportionately grown, or multiplied out, with duplication in it.

00:27:57

Because I'm not going to do that analysis manually. But these tools allow me to do it: they allow me to refactor those test packs at runtime, to keep them slim, to keep them lean. So you can see there that I've got the functional analysis around what this journey is, and you can see what the optimization gives from a lean testing perspective. Although I could run 6,675 tests, I don't really gain a lot over and above running just the 256. So although I could automate all of those paths, and we have automated a lot of them, because they're API-based and very quick, I'd be creating a lot of analysis for myself for not a lot of gain. I can run 256 and get essentially the same output. Now, when change comes along a few weeks later, without this capability we'd probably go into rooms and do a lot of archeology, a lot of analysis around where change is happening.
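The gap between 6,675 candidate tests and the 256 that give roughly the same value is what pairwise-style optimization buys. As a toy sketch of why the numbers shrink (with invented parameters, and nothing like the real tooling), the code below greedily builds a small suite that still covers every pair of parameter values.

```python
# Toy greedy pairwise selection over hypothetical parameters, only to show
# why exhaustive combinations collapse to a much smaller optimized pack.
from itertools import combinations, product

params = {
    "channel": ["mobile", "branch", "web"],
    "account": ["current", "mortgage", "savings"],
    "customer": ["new", "existing"],
    "locale": ["en-GB", "cy-GB"],
}
names = list(params)

# Every value pair (across two different parameters) that must be exercised.
required = {
    frozenset({(a, va), (b, vb)})
    for a, b in combinations(names, 2)
    for va in params[a]
    for vb in params[b]
}


def pairs_of(case):
    return {frozenset({(a, case[a]), (b, case[b])}) for a, b in combinations(names, 2)}


all_cases = [dict(zip(names, values)) for values in product(*params.values())]

suite, uncovered = [], set(required)
while uncovered:
    # Pick the candidate that covers the most still-uncovered pairs.
    best = max(all_cases, key=lambda c: len(pairs_of(c) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

print(f"exhaustive: {len(all_cases)} cases, pairwise: {len(suite)} cases")
# prints something like: exhaustive: 36 cases, pairwise: 9 or 10 cases
```

Scale the same idea up to a modelled end-to-end journey and the exhaustive figure explodes while the optimized pack grows slowly, which is where numbers like 6,675 versus 256 come from.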

00:29:03

How do we essentially curtail the scope of what the change or complexity is? How do we curtail the scope of test? You name it. We'd probably go and talk to some SMEs who know the subject, and they'd tell us exactly what the scope is, but we know that's flawed, right? And therefore what we do is use our living specifications to essentially give us the numbers. So you can see here where we've made a change: CDT jumped from 25 paths to 31, and you can see the ripple effect as we go down the estate. So not only can we have an end-to-end scenario, we can also treat layers in isolation: if there is no change further down the stack, we can isolate the testing. Essentially, we have a lot of variables that we can now use to understand the scope of change, and where we want to put our effort in terms of test and everything else as well. That's really powerful in terms of understanding change, understanding risk, understanding estimates, and keeping our velocity going.

00:30:08

So this is a slice of a CI pipeline. For us, in the context of DevOps, we're doing continuous testing, the idea being that we need to test continually. This is one such pipeline and a few of the tools we use in it, but the tools are not really that important. It's the methods, what we need to achieve in terms of continuous testing, and how those tools enable those methods to be true; that's what's really important. And it's especially true when we get on to things like test execution tools, which are more logistics than anything else; they kind of do the same thing. It's more about the interaction with the technology you're trying to test than the tooling, we've found. So we measure a lot, and we capture a lot of measures as a by-product of what we do.

00:31:01

You can see there it's about a two-week snapshot of execution of a typical pipeline. Nobody is actually executing any tests; it's all been orchestrated through Jenkins, using SoapUI. It's fast, it's repeatable. I guess the interesting point is that over that two-week period, we are in test execution for a whole nine minutes. And the interesting point for me is: just because it can execute 24/7 doesn't mean I should. I'd just be creating lots of analysis for myself. If I'm achieving the same goal in nine minutes, that's wicked: I can spend the rest of my time exploring what this thing should be doing. I have a lot of debt that I still need to address, right? Why do I want to waste my time executing tests for no value?

00:31:48

So we get into continuous improvement and continuous feedback. What we've started to do, if you're familiar with Gartner Magic Quadrants or Forrester Waves, is create a bit of a league table of the teams in our organization that are employing model-driven development, automation, ice creams versus volcanoes, the testing heuristics which we know to be true. We need to measure where these teams are, we need to make examples of the people who are doing it really well, and we need to understand why those who aren't doing it aren't able to. If you've read on in the bullets, you'll see there's a little bug here: "Usain Bolt is not a marathon runner." The initial slide said you need to be more like a sprinter. You don't; you need to be more like Eliud Kipchoge.

00:32:40

He's the marathon runner who recently ran just under two hours for the first time. And why not Usain Bolt? Why do you need to be this chap instead? Because Bolt doesn't do it continually. I think the interesting stat is that although Usain Bolt ran about nine and a half seconds, he did it once, whereas this chap did 26 miles of 17-second hundred metres, which is the kind of continuous element we need to be striving for. It's all about eliminating waste for us. Again, just because I can execute that many, why do I want to? What value does each test have? And the important thing there is the optimization: because we use test optimization, because we use pairwise testing, we eliminate redundant paths within our models. We are constantly refactoring to understand whether we've got the right coverage. If you were in Nicole Green's session,

00:33:39

one of the big challenges for the teams that dropped off is their ability to articulate coverage. Whereas we can do it really, really quickly; it lives and breathes for us. And if we do those things, we can free up our testers to spend far more time doing manual, exploratory testing, which is where the real value is, not just mundane instruction-following. Be predictable, then you can be fast, again and again and again; that's the key thing there. I'm sure we're all avid readers of the World Quality Report. Year after year, it calls out that the biggest blockers for QA are data and environments. You will have known from the DAVE OPS acronym that I run a data team and an automation and virtualized environments team. We really started doing data seven or eight years ago, not in the middle of GDPR; we did it under the Data Protection Act.

00:34:42

And we went at it the wrong way initially. We took it as a technology challenge, when it never was; it was a data challenge. And it's a lot of the same problems I've talked about earlier on, in that we really didn't understand the data; we carried a lot of data debt. If we went and talked to our testers about what test data they needed to satisfy their test cases, they couldn't tell us. If we went and talked to the architect to say, "Can I have a data model?", he couldn't tell us. We went and talked to the business designers to say, "What data triggers that business functionality?" They couldn't tell us. But they wanted data, and they would moan if it was too slow. And that's a real challenge and a real problem.

00:35:22

And we didn't actually acknowledge that until we got past the "early man" stage, where we tackled it from a technology perspective. We went into some really heavy lifting on old systems of record and understood how to mask them. Brilliant from a technology perspective, really interesting, but totally useless, because we knew how to mask it and nobody knew how to use it. It was still taking a long time to provision test data. We centralized that to make it a bit cheaper, because we'd worked out how to do it, but we really didn't start cracking on until we realized the problem wasn't necessarily a test data problem. What we were trying to do is face into the Matrix; everybody's seen the film? The basic premise is humans stuck in a computer world who don't acknowledge it, and the few who do are trying to get out.
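The speaker's point is that masking was a data problem before it was a technology problem, but for readers who have not seen it, the technology half is roughly this: deterministic pseudonymization, where the same real value always masks to the same fake value, so masked records still join up across systems of record. The field names and salt handling below are assumptions, not Nationwide's implementation.

```python
# Hypothetical sketch of deterministic masking: hash PII consistently so
# masked data keeps its referential integrity without exposing real values.
import hashlib
import hmac

SECRET_SALT = b"rotate-me-and-keep-me-out-of-source"  # assumed to be managed securely


def pseudonymize(value: str, prefix: str = "") -> str:
    """Same input always yields the same masked output, so keys stay consistent."""
    digest = hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256)
    return f"{prefix}{digest.hexdigest()[:12]}"


def mask_customer(record: dict) -> dict:
    masked = dict(record)
    masked["name"] = pseudonymize(record["name"], prefix="cust-")
    masked["account_number"] = pseudonymize(record["account_number"], prefix="acct-")
    return masked


print(mask_customer({"name": "Jane Doe", "account_number": "12345678", "balance": 100}))
```

Knowing how to run something like this is the easy part; knowing which fields matter, which combinations trigger business functionality, and who will consume the masked data is the part the talk says was missing.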

00:36:12

And that's kind of where we're coming from. I don't know whether you've seen this thing on Twitter, in the top right-hand corner; it's quite an interesting one. Classical programming: rules plus data gives you answers. The bottom one is a really interesting one for me: data plus answers, and the person who posted it said machine learning gives you the rules. The big problem for me is: how do you train the machine to know what it should be knowing? You can't. And the Matrix draw for me is that we're stuck in a world of technology when we should be working in a world of IT. When does the information in information technology come in, more so than the technology? Because we can't defer to machine learning until we can teach the machine what it should be doing. And the bottom one is very much the hierarchy of: we take data to build information to gain knowledge. It's very much on a par with what we're trying to do in terms of modeling, chipping away at design knowledge and the unknowns that sit in our estate. They're not really technology unknowns for us; they manifest themselves through technology, but really they're information.

00:37:26

Just a few stats. In 2019 we did 3.75 billion masked and synthetic data records and 1.2 billion interface calls through stubs. Take stubs, for example: if you take the picture of that very complex architecture estate, that's breaking up the estate for us, breaking the monolith into understanding, or isolating, change so that we can release quicker. So there are some pretty big gains that we're making, and we're making them quite quickly as well. And we're not doing them as test; we're doing them as model-driven development. Our three amigos are in this together, the cells are chipping away, and they're acknowledging that the previous ways of working might not have been that efficient. So, to consolidate all of the theory we've just gone through, what we've got here is a continuous testing strategy on a page.
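On the stub figure above: service virtualization is what lets a tightly coupled estate be tested in isolated slices. As a minimal, assumption-heavy sketch of the idea (not the virtualization tooling Nationwide uses), a stub can be as small as a canned HTTP responder standing in for a downstream system.

```python
# Hypothetical minimal stub: a canned responder that stands in for a
# downstream system so the component under test can run in isolation.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CANNED_RESPONSES = {
    "/customers/12345": {"customerId": "12345", "name": "cust-abc123", "accounts": ["acct-1"]},
}


class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = CANNED_RESPONSES.get(self.path)
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body or {"error": "not stubbed"}).encode("utf-8"))


if __name__ == "__main__":
    # Point the system under test at this address instead of the real dependency.
    HTTPServer(("localhost", 8080), StubHandler).serve_forever()
```

Multiply something like this across hundreds of interfaces, driven from the same models, and you get to figures like 1.2 billion stubbed calls without ever touching the fragile end-to-end estate.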

00:38:18

Okay. So I mentioned I'm from the center of competence, or center of enablement, and what you've got at the top there are the enabling factors: a data team, automation, best practice, guardrails, patterns, all of those things that we used to hold as a center of excellence but didn't federate out. We're doing that now. And this is where we've got the squad model, the cell model, where they actually put these things into place. So we've got the right testing strategy, ice creams versus volcanoes. We are test engineering; we're modeling the hell out of this thing; we're creating living specifications. The slide is a bit old: it says we're doing TDD and BDD rather than model-driven development; apologies for that one. Once we've got predictability in what we do, and we've designed our tests really well, then we're into the execution phase.

00:39:05

We check: we automate the hell out of what we're checking, the stuff we know isn't changing. That's the stuff that's really fast; that's our CI/CD pipeline, and we're winning on that one. And we apply structured testing, or exploratory testing I should say, to the stuff we don't know. That's not just functional testing; it gets into the non-functional world as well. And again, there's a lot of technical debt around non-functionals that can only be unearthed by understanding what the actual requirement was in the first place. It's a conversation. And then we bank that: we create models for it. We do those things, and we want to create a lot of metrics. We want to learn where we're still making mistakes, where we're still inefficient, and we capture a lot of that through a central repository and a lot of dashboards.

00:39:55

What we don't do yet is exploit AI, partly because we can't find the technology to do it. We know we've got a lot of structured data that we believe in, because everything we've talked about has built trust in what we're doing, but we can't find a source of information, or actually a technology, that we can exploit to give us new insight we don't already have. And that really brings us on to testing volcanoes 2.0 for us, which is where we've got our engineering element, we're doing the right structured testing at the right level of the topology, we're being fast enough, our levels of automation are through the roof, stakeholders are happy, we've got that buzzword going, and CI/CD pipeline-wise, testing is not a blocker.

00:40:47

What we need to work out is how to exploit AI to accelerate addressing technical debt, or to make sure we don't inadvertently build it up somewhere else. That's really where we're looking in terms of the future, so if anybody's got any ideas, it would be brilliant to talk to you. So the last point, for us anyway: don't just shift left; learn. Go right, go left, go up, go down, go in the middle, but one change at a time. If you have an estate like ours, no one is going to pay to address everything at once; that's stupid. Chip away at it as change comes along. Eventually you get momentum, eventually reuse comes along, and what seems too hard the first time isn't quite so hard the second time. But you've got to bank that learning, and you've got to continually raise the bar in terms of your understanding of quality. So that's the end of my talk. Christine from Broadcom wanted to make a quick announcement.

00:41:58

We have [inaudible]. We are launching [inaudible] in room number 902, so please come by.

00:42:12

Any questions? Happy to take any questions. Thank you.

00:42:27

How did you guys first start? What was your first?

00:42:31

So we started in the wrong order. We were trying to do automation for ages and it kept going wrong. We did 14 months of UI-based automation using some older tools, and we had islands of automation: we would say to the testers, the checkers, hand over your test pack; we'd automate it, we'd give it back to them, and what do you know, they'd moved on. So we knew that didn't work, and we started to understand why. It really came down to predictability; that's what we were set up wrong on in terms of automation. But the fundamental problem for us was that what we were trying to test wasn't actually articulated. There were no expected results. It was kind of a bit human: "I'll run a few instructions, and if it comes out with this report, I'll go and check the answer." You can't automate that; it's not predictable and it's not repeatable. And that's the real big learning point for us, which is why we've really started from modeling.

00:43:37

Your ratio?

00:43:45

The ratio: there is no set number, necessarily. We have certain teams that work at certain levels of the stack, and if you work across the full topology, looking for a fixed ratio is artificial, but you should have less UI than you have API. Essentially it comes through collaboration, and part of it, again, is unearthing your technical debt. We had an interesting conversation about exposing services to the testers so we could create service-based tests, and the fact that we didn't have designs for them meant that we couldn't do it. And therefore we raised the conversation, we raised the debate, to say: actually, if you want to implement a lot of these practices, we've got to address a bit of this debt first, otherwise we're a bit screwed.

00:44:35

Has your defect escape rate decreased, or the...?

00:44:38

Sorry, yes. So the interesting point for us, in terms of modeling: we found very quickly that our defect prevention, in terms of conversations, or eliminating defects in design, increased by about 30 to 40% quite quickly. So we were quite infantile in terms of doing modeling and we were still reducing defects at the design level by 30 to 40%. From an automation perspective, I think the interesting thing from an automated execution point of view is that we are far quicker at finding configuration issues and exposing configuration inadequacies that we've got in certain teams. Is that it? All right. Cool. Thank you very much, everyone.