Modern Test Automation, From Cloud to Mainframe

As enterprises adopt test automation frameworks to drive Continuous Testing, the complexities of heterogeneous environments - cloud, mobile, distributed, mainframe - can be a challenge.


Join this session to learn how to adopt test automation holistically, addressing these complexities while maintaining enterprise standards. Help your organization deliver better quality code, wherever it runs.


This session is presented by Broadcom.

KP

Keith Puzey

DevOps Architect, Broadcom Enterprise Software

SS

Sujay Solomon

DevOps Lead, Broadcom Mainframe Software

Transcript

00:00:12

Okay, and welcome to this session. We're going to talk about modern test automation, from cloud to mainframe. Presenting today are Sujay Solomon from the Broadcom mainframe DevOps team and myself, Keith Puzey, a DevOps architect. In this session we're going to start with the testing challenges, the challenges that you're all aware of. We'll drill into the current trends around test automation, and then the solutions that can enable you to improve your test automation story. We'll then move on to some of the complexities that come with automating in an enterprise and how addressing them will benefit you, and we'll end with getting started: how you can get involved with our platform.

00:01:05

When we look at digital transformations, testing is a very large impediment to these transformation programs these days. A lot of testing time is spent gathering requirements, understanding what's been developed, and then working out how to test it efficiently. Testing takes time, and that affects the time it takes to deliver applications. What we've found is that, still today, 70% of testing is manual. Enterprises still have testing Centers of Excellence (CoEs), and those CoEs tend to be quite risk averse: their goal is to ensure that good quality software is released. But we also have this pressure to speed up, so we're seeing enterprises moving towards shift-left, moving some of this testing to the development teams, and that creates a balancing act with our agile teams.

00:02:11

The agile teams' goal is to deploy quickly. Developers tend to use their own tools, which tend to be things they've used in the past, so you find that open source frameworks are used a lot in agile teams. This is very efficient for the teams, letting them develop tests quickly, but we find those tools aren't built for enterprise-scale, full-scale testing: they're designed to be used by developers in their own environments, while they're developing on their own machines. The CoE, on the other hand, tends to have the enterprise-scale tools for its testing, and it tends to be very risk averse. Its goal is quality: making sure that whatever goes out the door is the best quality that can be delivered.

00:03:03

So we have this balancing act: we need to deliver quickly, but we always need to bear in mind the risk, because nothing is gained by very quickly deploying bad software. We need to balance velocity and risk, and there are other test automation considerations we need to look at as well. The first of these challenges is testing bottlenecks. Traditional testing is a time-boxed event: with waterfall, we'd get to a testing phase and then do our testing, and we'd spend a lot of time in that phase. We need to improve how we do testing, speeding up that process while keeping the quality. What we also see is delays in getting access to environments.

00:03:54

Enterprise applications are complex. Generally they're made up of lots of different systems, those systems expose different endpoints and APIs, and you need data to test them. For a team to be effective, it needs all of this on demand: quick access to environments and quick access to test data. Then there's this move towards engineering shifting left: how can we enable engineering teams? When we say enable, we mean giving them the tools they need to do this efficiently, tools the engineers are comfortable using, but also tools the CoE can enable them with. And the last point is that we need to get away from just doing performance testing and move on to performance engineering: designing products early on with consideration for how they will perform in production, and testing early, so that very early in the cycle we're doing performance testing, not just functional testing. All of these impediments affect teams today.

00:05:05

Now, if we look at some of the trends we're seeing in test automation, the first one is this blending of agile and DevOps. What we mean by this is that agile teams take the requirements, convert them into stories, and drive those through to a delivered product, but they also need to consider the ops side: the implementation, the maintenance, the ongoing support. We're now seeing this blending where DevOps teams are focused not just on delivering the functionality the agile teams are working on, but also on how it will be maintained and supported, across the whole framework from engineering all the way through to production.

00:05:51

That leads into what I was saying before about the provisioning of test environments and test data. We're seeing more organizations looking at how they can speed up the delivery of these environments and this data to teams on demand, so agile teams aren't waiting for a required environment to become available and slowing down their delivery cycles. The third bullet point here is that we need to be testing the APIs and the microservices. Modern architectures are made up of many services within an application, so instead of just testing the UIs, we need to test those APIs, ensuring not only that they work but that they perform, so that as the application scales they'll keep working correctly; a sketch of such a check follows below. And we need to automate that. The final point, which we mentioned on the previous slide, is this shift towards performance engineering: looking at the product from the ground up, understanding where bottlenecks may be introduced, and making sure we cater for those performance needs as we go through the development process.
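
As a concrete illustration of that kind of API check (my sketch, not an example from the session), a functional test can carry a latency assertion alongside its correctness assertions. A minimal pytest-style sketch in Python, with the endpoint, payload, and the 200 ms budget as placeholders:

```python
# Hypothetical API check: assert correctness and a latency budget together.
# The endpoint URL, expected payload, and 200 ms budget are illustrative.
import requests

def test_order_api_function_and_latency():
    resp = requests.get("https://api.example.com/orders/42", timeout=5)

    # Functional assertions: the service answers, and with the right data.
    assert resp.status_code == 200
    assert resp.json()["id"] == 42

    # Performance assertion: the call stays within its latency budget,
    # so scaling regressions surface in the same automated run.
    assert resp.elapsed.total_seconds() < 0.2
```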

00:07:07

Here we have a typical enterprise architecture that we'll all recognize. Enterprises are complex: you've got the network infrastructure, you've got the internal server infrastructure, most organizations have some form of cloud infrastructure, and then you have the application layer. As part of testing, you may need access to any one of these components, and maybe all of them. So there's a complexity here that is difficult to manage and difficult to test against. How do we unify these agile teams and the CoE, and manage this balancing act we talked about? How do we make sure our agile teams are not waiting for environments and test data, while the tools the CoE uses can also work with our developers: a platform that our CoEs can work with and that enables both our testers and our developers. And everything we do has to be integrated within our development pipelines. This is how we achieve quality with speed.

00:08:16

So what we're talking about today is the BlazeMeter platform. We said that developers like open source tools, while CoEs obviously want tools that are enterprise scale; BlazeMeter does both. As you can see here, we support all these open source tools, things developers are comfortable using and building their tests with, but on a platform that gives them those enterprise-grade functions: the security, the reporting, the support, all the integration points. On the right-hand side you'll see Taurus. Taurus is our open source initiative, sponsored by Broadcom, and it works with all these different executors, JMeter, Selenium, Gatling, JUnit, letting you do your load generation locally on your machine and in the cloud. Again, this is a way we can enable developers to use the tools they know and like, but in a way that fits better into the enterprise.
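
For context (my sketch, not material from the session), Taurus wraps an existing script from one of those executors in a declarative YAML file, which is how the same test runs locally or in the cloud. The script name and load figures here are placeholders:

```yaml
# Minimal Taurus config: wrap an existing JMeter plan and describe the load.
# Running `bzt load.yml` executes it locally; Taurus can also provision
# the same run through BlazeMeter's cloud.
execution:
- executor: jmeter        # could equally be selenium, gatling, junit, ...
  concurrency: 50         # virtual users
  ramp-up: 2m
  hold-for: 10m
  scenario: checkout

scenarios:
  checkout:
    script: checkout.jmx  # the existing JMeter test plan
```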

00:09:20

We've talked a lot about functional testing and performance testing. What BlazeMeter also does, as well as functional and performance testing, is provide built-in test data: we can generate data, we can use your existing data, and we can build that data into the platform so that the CoE and the development teams can both leverage it. We also have built-in mock services, which allow us to provision endpoints that might not be readily available, whether third party or internal (a generic sketch of the idea follows below), and a full monitoring and testing platform for APIs, allowing us to test APIs across your complex environments, all those different APIs you may have inside your enterprise. With that, I'd like to hand over to Sujay to talk about some of the cross-platform challenges.
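
To make the mock-services idea concrete in general terms (this is a generic sketch, not BlazeMeter's implementation), a stand-in for an unavailable dependency can be as simple as a small HTTP stub that returns canned responses; the route and payload below are invented for illustration:

```python
# Toy mock of an unavailable third-party payments API (hypothetical route
# and payload), so dependent tests can run before the real service is
# reachable. This illustrates the concept, not BlazeMeter's feature.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/payments", methods=["POST"])
def authorize_payment():
    # Return a canned "approved" response for any well-formed request.
    body = request.get_json(silent=True) or {}
    return jsonify({"status": "approved", "amount": body.get("amount", 0)})

if __name__ == "__main__":
    # Point the system under test at http://localhost:8080 instead of
    # the real endpoint while that environment is unavailable.
    app.run(port=8080)
```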

00:10:13

Hey, thank you, Keith. So, as Keith said, applications today are really multilayered, with a very large tech stack, as shown on the screen here. There are front ends like mobile and browser UIs, as well as open APIs, and oftentimes these are powered by APIs that are routed through gateways, servers, meshes, and so on. Keith talked about how to test those. But when you dig deeper, the APIs are typically backed either by cloud or containerized services, or by legacy services that sit behind an enterprise service bus and are still very productive doing their work. Now, the cloud and containerized services were born in a DevOps-ready world, and when it comes to testing they're in relatively good shape. But when we look at legacy services, like CICS or batch services running on mainframe, midrange, and other platforms, there really aren't many DevOps or automation practices employed, especially in testing. And this has to change, because these legacy services are so critical to most enterprises and are unlikely to go away, given their current throughput. Without implementing DevOps and automating the testing of these legacy services, your full-stack application teams' cycle times are really going to suffer, and they're not going to be able to get code from dev to prod as quickly as they could. One of the barriers to this happening is culture, so let's take a look at the culture at present.

00:12:08

If we look at the SDLC here, most often the API and consumer layers are system tested through a solid set of automated tests, including functional, load, and stress tests. So that's not too big a problem; there are definitely improvements to be made, but they're in decent shape. But if the application change actually needs a legacy code change or a database schema change, those teams are very siloed from the API and consumer layer teams, and they just haven't had the chance to adopt the DevOps culture of automation that's so prevalent today. We see high levels of manual testing, from early-stage dev unit testing all the way to integration testing done by manual testers at the QA level of the SDLC. With legacy services on the mainframe especially, that QA-level integration testing eats up multiple sprints of your team's cycle time. Or even worse, it can cause some of these teams to be stuck in a waterfall state, where they finish their coding and just throw the code over the wall to testers, which isn't good from either a quality perspective or a cycle-time and velocity perspective.

00:13:41

So, moving to the next slide, where we're trying to move some of our customers is to change this. The simple answer is really to empower your development teams to own the responsibility of creating tests for all levels of the SDLC, and ideally to do that in an automated fashion. Because if you think about agile and agile teams, anyone with the right skills should be able to contribute to the team's cause. So whether you're a developer writing legacy COBOL code, or a tester who today follows regression test plans and does manual testing, you can put your hand up and say you're willing to take ownership of quality within the team. With that responsibility comes ownership, and once you've got ownership, you start to build inherent quality into the code these teams produce.

00:14:46

So what are some tools we can look at to actually make this happen? Keith, if you could move us to the next slide: we're going to talk a little bit here about Test4z, which is an open-first mainframe test automation approach that Broadcom is bringing to this. When teams start shifting left, we don't need to reinvent the wheel to automate the testing of these legacy applications. Broadcom contributes heavily to the Zowe framework, which is hosted by the Open Mainframe Project within the Linux Foundation, and is ultimately a modern bridge between the mainframe and open tooling and frameworks. With Test4z, we're applying the same open principles established by Zowe, namely REST APIs and SDKs, to make it easy for developers to use existing test frameworks and languages, like JMeter, Mocha, and Python, to automate mainframe testing.

00:15:54

In terms of how the experience looks for a developer or tester writing these tests: with Test4z and this open approach in place, test developers have their choice of framework and language to create scripts that turn all of those manual steps in a test plan into functional test scenarios. They use a combination of Zowe, to submit jobs on the mainframe for example, and Test4z's APIs to perform deep data assertions, with services like handling large volumes of data, pattern-based search and compare, and snapshotting test data before making any modifications to it. And you can do all of this on platform, on the mainframe, without the need to move any of that data off, because there's always sensitivity involved in moving such data off platform.
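
As a rough sketch of that flow (my illustration, not an official Test4z sample), a test might submit a job through the Zowe CLI and assert on the outcome. The dataset name is a placeholder, and the CLI flags and JSON output shape are my assumptions about current Zowe CLI behavior, worth verifying against your installed version:

```python
# Hypothetical sketch: submit a mainframe job via the Zowe CLI and assert
# on its return code. Dataset name is a placeholder; verify flag names and
# JSON output shape against your Zowe CLI version.
import json
import subprocess

def submit_job(dataset_member: str) -> dict:
    # --rfj asks the Zowe CLI for JSON output that we can parse.
    result = subprocess.run(
        ["zowe", "zos-jobs", "submit", "data-set", dataset_member,
         "--wait-for-output", "--rfj"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["data"]

def test_nightly_batch_job_completes():
    job = submit_job("HLQ.TEST.JCL(NIGHTLY)")
    # Deep assertions on the output datasets would follow here, e.g. via
    # Test4z's search/compare and snapshot services.
    assert job["retcode"] == "CC 0000"
```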

00:16:57

Now, once you've got those scripts, it's quite straightforward to manage those test cases using test platforms like BlazeMeter or Jira Xray, or to execute them from CI/CD tools like Jenkins or GitHub Actions. This really makes mainframe and legacy testing, from a user experience perspective, not that different from testing applications on other platforms. The Test4z services we're talking about are great for programmatic testing of interfaces on the mainframe, but today there are still quite a few legacy terminal-based UIs that folks have to test. So I'll hand it over to Keith to talk us through how to test some of those terminal-based UIs that we still have in these enterprise applications.

00:17:52

Thanks, Sujay. So we've looked at how we can test the mainframe; we can also include that in our BlazeMeter testing. We talked about how BlazeMeter integrates these open source tools, and we created a plugin for JMeter that allows us to do RTE (remote terminal emulation) tests, both functional and performance. This top screen shows a functional test where we're walking through an application on the screen. We can use that same test to apply load, running it with tens, hundreds, thousands of users against your platform for performance testing. So let's look at how this actually works. If I go to my BlazeMeter console, you can see along the top here we've got our different tabs for functional testing, performance testing, mock services, and API monitoring.

00:18:47

I'm going to start with a functional test, and you can see we can create one by uploading a script, taking existing scripts and loading those. But we're going to switch and actually create what we call a scriptless test, building the test using this palette. We could use a recording, with the built-in Chrome recorder, to record these types of tests, but we're going to do this manually. Imagine we want to test something like the Google search screen. The first thing we need is to go to that URL, so I drag over 'Go' and type in the address we need to go to: google.com. Once we're there, we want to type into the search box to search for some text, so we use the 'Type' action to say we want to type, and we need to know where on the screen to type.

00:19:36

We have an object library that holds the locations of objects in a webpage, and we've already captured these objects here. We could use a picker to manually add them, but we'll just click: we want to click on this entry panel, and we need some text to type, so let's type 'devops'. Then we need to click the search button, so let's find the 'Click' action, and now we need the object that says we want the search button, which will run our search. The last thing our test needs is, of course, an assert to check the test worked, so let's do an assert on the title. I know the title of the search window should be the search value we typed plus 'Google Search'.
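
For reference, the same steps map onto a short plain Selenium script; a sketch in Python (my translation of the demo, not BlazeMeter output), assuming a local Chrome driver and Google's current 'q' name for the search box:

```python
# Sketch of the demoed scriptless test as plain Selenium. Assumes Chrome
# plus a matching chromedriver; the "q" element name may change over time.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Chrome()
try:
    driver.get("https://www.google.com")      # step 1: Go
    box = driver.find_element(By.NAME, "q")   # object: search entry panel
    box.send_keys("devops")                   # step 2: Type
    box.send_keys(Keys.RETURN)                # step 3: Click/submit search
    # Step 4: assert on the window title, as in the demo.
    WebDriverWait(driver, 10).until(EC.title_contains("Google Search"))
    assert "devops" in driver.title
finally:
    driver.quit()
```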

00:20:28

Now I can run the test: I can put this in debug and run it locally on my machine. What we'll see in a second is the test start. We'll run the test locally, like I said, and if I just scale this so we can see it, the steps will change color as we move through them. You can see we've gone to google.com, we've typed in devops, we've clicked the search button, and if you look at the title at the top, you'll see it contains 'Google Search', so our assert matched and the test passed. We can now run this test on the platform, so we can choose the location, deciding whether to run this test inside our firewall or outside on our public agents, and then which browser we want to use.

00:21:19

If I add a few browsers here, I can say I want to run Firefox, Chrome, and Edge, and pick the versions. At this point, just clicking run would stand up the relevant infrastructure at that location, launch those browsers, and run the test. Now, the test we've got uses static data. Instead of putting 'devops' in here, let's create some test data. We go to our test data panel, where we can import data from CSVs, take data from your on-premise systems, or create our own synthetic data. We have some examples here we're going to use: typical examples like addresses (we have an example here for UK data), driving licenses, and date examples that are parametric, like five minutes in the future, five minutes in the past, next week, next day. And there's one called 'registration form example', a typical registration form. If I just add this, we've now got dates of birth and names, and if I set my iterations, I can say I want 10 rows of data.

00:22:33

We've now got 10 rows of data: first names, last names, addresses, social security numbers, and email addresses consistent with the names. So we now have something that looks like valid data we can use in the test. To actually use it, all we do is take a parameter; perhaps we take the address. We just copy the address parameter and change our test, so that instead of that hard-coded value, we take the data from our test data, and that's what will get typed in. We also need to change our assert to use the same parameter. If we put it in debug, the test will now run using that test data; if I mouse over it, we'll see which value we're using, the address. The test then runs exactly as before, but instead of searching for 'devops', we enter the address we generated and have Google do the search.
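
Outside the platform, the same synthetic-data idea is often done in code; a sketch with Python's faker library (illustrative field choices, and not how BlazeMeter generates data internally):

```python
# Illustrative stand-in for the demo's "registration form" data model:
# 10 rows of realistic-looking synthetic data, with emails derived from
# the generated names so the fields stay consistent, as in the demo.
from faker import Faker

fake = Faker("en_GB")  # UK-flavoured data, like the demo's address example

rows = []
for _ in range(10):    # the "iterations" setting from the demo
    first, last = fake.first_name(), fake.last_name()
    rows.append({
        "first_name": first,
        "last_name": last,
        "address": fake.address().replace("\n", ", "),
        "date_of_birth": fake.date_of_birth(minimum_age=18).isoformat(),
        "email": f"{first}.{last}@example.com".lower(),
    })

for row in rows:
    print(row)
```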

00:23:25

We make sure the search result comes back as asserted, and we now have a data-driven test. Now let's look at some previous examples of these tests that ran before. If I go to the history and look at an example here, this is a run of a test we did checking out a banking application. What we get is all the steps of the test. We also get a video that shows us the test logging in and everything that happened, and we have a waterfall showing the performance of all the components of that test.

00:24:04

And if we choose a test with multiple browsers and look at the summary here, we can see we tested multiple browser types. Each of these lines shows the results: a tick means the run passed, and the iteration shows the data we used, the test data we looked at before. Now for the mainframe testing. If I go and look at an example of a mainframe test just here, it's the same thing. If I expand this mainframe test, you can see we did a connect, just like the 'Go' in the web browser test: we connect to the server, and this is the screen that comes back, with further steps to enter our username and password and click on the start monitor, and this is the screen from the mainframe. So we can run the same functional tests on the mainframe as we do with our web browsers. If I now switch to a performance test, we can look at some other examples.

00:25:14

When we talk about a performance test, in this case again a mainframe test, what we have is the number of users and a summary of the load at the top here, and in the timeline report we can see each of those steps we just looked at: the connect, the disconnect, entering the username. We have all the response times, so we can ask what the average response time was for the start monitor, or for the connect. We have all that data and much more, like latency times, and we can drill into these transactions. And like I said, we can then run a full load test against the mainframe. Creating a performance test is just like we saw with the functional test: we'd come into the UI and upload our test here.

00:25:59

I just click on upload script and point to our JMeter file; these are our different JMeter files here. I upload it and the JMeter file loads natively into the platform, and at this point we can run the test. Now, we can see that this load test actually has some test data. We can choose to upload the CSV file that comes with it, or we can say which data model to use; a data model is something within the platform that defines an example of the data we need. I'm just going to pick this data model here, and like we saw in the functional test, we now have our test data. Then all we do is specify the amount of load: how many users, and how long the test should run for.

00:26:38

We can then choose the locations, so perhaps we want to run this in different parts of the world, on different platforms like Google Cloud, AWS, and Azure; again, we can also run this inside your firewall. If we look at the on-premise private locations, we can have a mixture of load generated inside the firewall and from public locations. We can also link this to what we call mock services. We mentioned how environments are important: we can specify which mock services are needed, and not just the endpoint but also the transactions in that mock service. So we can say we want a poorly performing Visa payment system, because we want to test how that API responds when we've got this load running. We then define our failure criteria, and as you see here, we can integrate with lots of APM tools to gather backend performance metrics as well as the front-end performance metrics.
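
For reference (my sketch, not from the session), the same kind of failure criteria can be expressed declaratively when the test is driven through Taurus, via its pass/fail reporting module; the thresholds here are placeholders:

```yaml
# Illustrative Taurus pass/fail criteria, analogous to failure criteria
# defined in the BlazeMeter UI. Thresholds are placeholders.
reporting:
- module: passfail
  criteria:
  - avg-rt>500ms for 30s, stop as failed      # average response-time budget
  - failures>5% for 60s, continue as failed   # error-rate budget
```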

00:27:29

We can then run this test. If we briefly look at an output from a test like this, what we see in the results screen is the test runs. We can look at a summary of the performance of those tests, we can baseline these tests and compare runs against the baseline, and we can also look at trends, the trend for this test over time. And, as we discussed earlier in the session, we can build mock services to provide those endpoints that might not be readily available, and we can combine those endpoints with test data, supplying an endpoint with the data we need. We can also do API monitoring, allowing us to monitor whether an endpoint is actually working, how it's responding, what its performance is, and whether it's meeting our criteria. So we've covered a lot in the last few minutes, and I hope you see the value of BlazeMeter. I'll pass back to Sujay to talk about how you can get started with our platform.

00:28:30

So, we talked about the dichotomy between agile teams looking to shift left and CoEs being risk averse, and Keith highlighted the keys to achieving the right balance there. We also talked about the full enterprise stack, where you've got legacy teams that are looking to shift left, and from a culture perspective we want them to take ownership of testing at all levels of the SDLC, including automating it. So if you're looking to get started with addressing some of this in a structured and proven way, we have a few resources here that I'll highlight. One is BlazeMeter University, where we've got a rich set of self-paced courses that can help you level up your knowledge of testing practices, especially at enterprise scale. With the variety of courses available there, you can identify exactly where your gaps are and cherry-pick the courses that are of interest to you.

00:29:39

We've also got our blog site on blazemeter.com; we welcome you to check that out. Then, when it comes to the mainframe side of test automation, we've got a GitHub repo: github.com/BroadcomMFD/test4z. It contains a few use cases that we've come up with working with our customers. Most of the samples on there are JavaScript test scripts, but you're welcome to just clone them; the license allows you to use them and get started. Another resource I'll point out is our Modern Mainframe blog site on Medium, which you can check out to learn about mainframe DevOps generally; if you're specifically looking for test automation blogs, I'd urge you to search for Test4z on that site. Thank you so much, folks, for attending our session on modern test automation from cloud to mainframe, and for all of your enterprise-grade testing needs, I welcome you to visit blazemeter.com.