Connect More: How To De-Risk Your Enterprise App Integrations (Europe 2021)

Modern enterprise applications such as SAP and Salesforce make it easier than ever to connect applications, data, and processes across on-premises, cloud, and hybrid environments. These integrations let you optimize operations across lines of business and drive delightful customer experiences. But with great power comes great responsibility. When your application landscape is being continuously updated, upgraded, and customized, a defect in any one release could lead to disaster for your business. How do you begin to push through the noise and concentrate on what really matters when innovating? Join us for an insightful investigation into how continuous end-to-end test automation can de-risk your enterprise integrations, letting you accelerate innovation while ensuring your business keeps running. We'll share how having a common language and a risk-based, AI-powered approach to testing can help you focus resources, innovate faster, and do it dependably. This session is presented by Tricentis.


Chris Trueman

SVP LiveCompare, Tricentis



Hello everybody. My name is Chris Trueman and I'm part of the products team here at Tricentis. Today what I'd like to talk about is how we de-risk our enterprise app integrations. I'd like to cover three topics with everyone today. First, to reinforce why we believe that testing is an incredibly powerful tool to help us deliver enterprise application innovation. Then, because quality is a function of both process and data, and given our ever-increasing reliance on the data that our integrated enterprise applications produce, we really need to be mindful of the integrity of that data, so I'd like to spend a few moments talking about that. Finally, I'll do my level best to predict the future and get it right, by considering new sources of innovation, how we can future-proof our enterprise applications to accommodate them, and what that will mean for the future of testing.


And we need to frame all of that within the context of the digital transformations that are underway in all of our organizations. What we observe is that there are four key challenges that CIOs and their teams are faced with. The first: how do we increase release velocity while at the same time trying to reduce the cost of delivery? And since we're talking about a suite of enterprise applications that we've worked so hard to integrate, we need to be mindful of how we de-risk changes in those environments, always working to deliver high-quality outcomes in production.


Now, the good news is that customers are achieving this today. Here's one example drawn from our consumer products customers who are using Tricentis LiveCompare. They've been able to increase their release velocity by eight times, going from 11 to an incredible 88 releases per year. If we think about innovation for a moment, simplistically, it's about taking ideas and bringing them to market. If we get a little more specific and think about our enterprise applications, we need some way to organize all of our efforts in terms of how we transform those ideas into working software running in production. Throughout the history of software development there have been lots of different ways for us to organize our efforts, but today the established best practice is some form of agile DevOps. My particular focus is on the role of testing to support that.


And I do truly believe that testing is a very powerful tool to help us achieve our aims with our enterprise apps. It's not just me saying this: drawing from the most recent World Quality Report, just looking at some of the words used to describe the role of testing reinforces this idea of how it supports innovation. Things like contributing to business growth and business outcomes, or ensuring end-user satisfaction and customer experience. And then, moving to the top three, we see where people have reflected on the existential events of last year: things like quality at speed, speeding up the software delivery process with good quality. That ties back to those key challenges that CIOs and their teams are faced with, especially the first of these, which is about supporting everybody in the team to achieve higher quality.


Software quality testing, for me, is about answering three important questions. First, what to test? Then, does it work? And finally, does it scale? When we think about those questions, it's worth digging into how enterprise applications are tested today. From our own observations, what we see is that the majority of customers rely on their key users to test the changes that they're making. Now, this has one very clear advantage, namely that our key users understand how those systems support our business processes better than anyone else. However, there are some disadvantages. First, and I know I'm stating the blindingly obvious: it's not their day job. As a consequence, what we find is that a risk mitigation process has to be put in place, because we have to deal with defects that show up in production. That risk mitigation process is known as hypercare. With hypercare in software, just as with its medical equivalent, it's now a life-or-death situation.


And so we bring together our most talented resources from key users, development, and operations, and their job is to fix the defects as quickly as possible. There's not just a high cost associated with this because we're relying on our most talented resources; there can also be implications for our reputation with our business partners and customers. On an individual level, if you've ever been part of a hypercare team, you'll know that sustaining that level of intensity can be very hard to do. And of course, while all of our great resources are tied up on hypercare, the backlog continues to grow. So what actually happens is that we build this incredibly powerful engine to deliver innovation in our enterprise applications, and then we stick an enormous limiter on the engine. Some of the case studies that I've read online talk about how a one-month project extends into three months because of two months of hypercare. Now our release cadence means we can do only four releases a year. That's a drop from 11, and certainly a long way from the 88 that our customers are achieving.


So what can we do to eliminate hypercare, to basically remove that limiter from this incredibly powerful engine? What Tricentis offers is a way to integrate dev and ops for optimized testing, and we do that through a set of capabilities that we call change impact analysis. With change impact analysis, we ingest all of the changes that have happened in development, and we combine that with all of the data that we've extracted from production, which tells us what's actually being used to support our business processes. Then, through an analysis of that data, we answer the question of what to test automatically: we identify the most at-risk capabilities that we must test in order to assure the quality of our business process support. For us, risk is largely a function of the frequency of use, which represents the dependency that our business processes have on these integrated enterprise applications, combined with an assessment of the damage that we're causing through the changes we're making in development.
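To make that idea concrete, here is a minimal sketch of risk-based test prioritization in the spirit just described: risk combines how frequently a capability is used in production with how heavily the development changes impact it. The names, fields, and weighting are illustrative assumptions, not LiveCompare's actual algorithm.

```python
# Hypothetical risk scoring: risk = normalized production usage x change impact.
# All names and weights are illustrative, not LiveCompare's real model.
from dataclasses import dataclass

@dataclass
class Capability:
    name: str
    usage_per_month: int   # frequency of use, extracted from production
    change_impact: float   # 0.0-1.0: how much of it the dev changes touch

def risk_score(cap: Capability, max_usage: int) -> float:
    """Combine normalized usage with change impact into one score."""
    usage_weight = cap.usage_per_month / max_usage if max_usage else 0.0
    return usage_weight * cap.change_impact

def most_at_risk(capabilities, top_n=3):
    """Rank capabilities by risk and return the top-N names to test."""
    max_usage = max(c.usage_per_month for c in capabilities)
    ranked = sorted(capabilities,
                    key=lambda c: risk_score(c, max_usage),
                    reverse=True)
    return [c.name for c in ranked[:top_n]]

caps = [
    Capability("Create Sales Order", 12000, 0.6),
    Capability("Post Goods Issue", 8000, 0.1),
    Capability("Archive Old Orders", 50, 0.9),
]
print(most_at_risk(caps, top_n=2))  # heavily used + heavily changed wins
```

Note how the rarely used "Archive Old Orders" ranks low despite being heavily changed: the frequency term captures how much the business actually depends on each capability.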


Now, I don't wish to scare everyone by putting a frightening diagram up on the chart here, but this is actually a live SAP system that we've analyzed with change impact analysis. You're probably thinking, "I can't make out anything in that diagram," and that's actually the point. Each of the components that are unreadable represents something that makes up the implementation of SAP. In fact, this isn't even a complete SAP application; I've chosen just one transaction amongst the many that are used day in, day out by customers to support their business processes. And this is why it's unrealistic to expect our key users to be able to do the complete testing job for the changes that we make: the environment they're operating in is so sophisticated. But using Tricentis LiveCompare, we can turn that into a set of most at-risk capabilities to test, and through our integration with Tosca, we can even find all of the test cases that we have available.


And what about gaps? Well, with gaps, we actually know who those subject matter experts are in our key user community, so we can go back to them and have them help us close the gaps most efficiently and most effectively. In fact, we have tooling available today which can record their daily interaction with their SAP applications and essentially codify all of that great expertise that they have. Let me show you how that actually works in practice with a demonstration. What can we do to ensure that the changes we make in development don't create unpleasant waves in our packaged apps? Let's bring the benefits of CI/CD pipelines to our packaged apps, especially automated actions like code quality analysis and running unit tests. To this we'll add unique impact analysis that will tell us if the changes we're making cause ripples or waves. Let's shift left and focus on supporting our developers.


Tricentis LiveCompare will monitor our SAP development systems and analyze every commit. LiveCompare examines each commit from four perspectives. It identifies the impact of every change: is this a ripple or a wave? It runs all the available unit tests, reporting failures directly to each developer. It identifies code quality issues when it's cheapest to fix them. And yes, we can see how things have changed, side by side, across all commits. At the end of each sprint, LiveCompare analyzes all of the changes to identify the most at-risk to test, and integrates with Tricentis Tosca to identify test hits and gaps. Putting all of the pieces together, we now have LiveCompare working hand in hand with Tosca to run all of our functional, API, and integration tests as part of one seamless CI/CD pipeline. Speaking of running tests, it's worth highlighting here Tosca's support for the latest SAP GUI Quartz theme.
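A per-commit gate like the one described above can be sketched as follows: classify each commit's impact as a "ripple" or a "wave," run the available unit tests, and block the pipeline stage on any failure. The thresholds, helper names, and return shape are assumptions made up for illustration, not part of any Tricentis API.

```python
# Hypothetical per-commit gate: impact classification + unit-test check.
# Thresholds and names are illustrative assumptions only.

def classify_impact(objects_changed: int, used_in_production: int) -> str:
    """Call a commit a 'wave' when it touches many production-used objects."""
    return "wave" if used_in_production >= 5 or objects_changed >= 20 else "ripple"

def gate_commit(objects_changed, used_in_production, unit_tests):
    """Run the gate for one commit; any unit-test failure blocks it."""
    impact = classify_impact(objects_changed, used_in_production)
    failures = [name for name, passed in unit_tests if not passed]
    return {"impact": impact, "failures": failures, "ok": not failures}

result = gate_commit(
    objects_changed=25,
    used_in_production=7,
    unit_tests=[("test_pricing", True), ("test_tax_calc", False)],
)
print(result)  # a wave with a failing test: report straight to the developer
```

The point of gating at commit time, as the talk notes, is that this is when issues are cheapest to fix.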


Now, as much as I enjoy using the new Quartz theme, running tests one by one isn't very efficient. So LiveCompare and Tosca will use the distributed execution service to leverage all of the available compute and run our tests in parallel. As testing progresses, LiveCompare gives us the complete view of what's passed, failed, and still to be run, from the perspective of the most at-risk to test. We can shift right, too, using LiveCompare to monitor our production SAP systems, tracking issues, categorizing them, looking for the root cause, and providing feedback to developers. In short, Tricentis LiveCompare and Tosca combine to eliminate unpleasant waves in our packaged apps. So we've seen the demonstration of all of that in action, but what does it mean in concrete terms across a range of customers? Well, I've surveyed a number of our SAP LiveCompare customers, and what I can see is the benefit of using change impact analysis.
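As a rough sketch of the parallel execution idea, the following runs a test suite across a worker pool and collects a live pass/fail view. It uses a thread pool on one machine and a stand-in test runner; the real distributed execution service spreads tests across machines, so treat every name here as an assumption.

```python
# Illustrative parallel test execution with a local worker pool.
# The stand-in run_test and the test names are assumptions for the sketch.
from concurrent.futures import ThreadPoolExecutor, as_completed

def run_test(name: str) -> tuple[str, str]:
    """Stand-in for executing one automated test; returns (name, status)."""
    # A real runner would drive the application under test here.
    status = "passed" if not name.startswith("broken") else "failed"
    return name, status

def run_suite(tests, workers=4):
    """Run all tests in parallel, collecting results as they complete."""
    results = {"passed": [], "failed": []}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(run_test, t) for t in tests]
        for fut in as_completed(futures):
            name, status = fut.result()
            results[status].append(name)  # live view: passed / failed so far
    return results

suite = ["order_to_cash_01", "order_to_cash_02", "broken_pricing_check"]
print(run_suite(suite))
```

Because `as_completed` yields results as workers finish, the pass/fail view updates continuously rather than only at the end of the run, which mirrors the "what's passed, failed, and still to be run" view described above.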


This first column represents the traditional response to changing SAP, which would be, "oh, we have to test everything." The second column represents the results of the analysis, identifying the impacted capabilities. Now, this is good; it's certainly a reduction. We're addressing the challenges that our CIOs and their teams are facing in two ways: we're going to spend less time testing, so we can increase release velocity, and we're also going to focus our testing on those damaged capabilities we depend on, so we're minimizing risk and improving quality. The challenge, though, is that there's still a lot here to test. But by using LiveCompare, we can reduce this to the most at-risk capabilities to test, and it's this that delivers an average 85% reduction in our test scope. Using this, we can eliminate the need for hypercare, and so remove that limiter from this incredibly powerful engine that we've developed to deliver changes in our enterprise applications.


Let me focus now on data. I said at the beginning that quality is largely a function of process and data, and today our organizations depend on data like never before. So it becomes very important for us to consider the integrity of that data, especially in the context of the distributed and integrated enterprise applications that we run. Here is a very simple example of a business process, in this case documented as supported by SAP; this represents order to cash. We look at diagrams like this and we think our business processes are quite straightforward: a whole sequence of activities, one after the other. The reality can be quite different. Business processes are incredibly variable and complex. This is an actual example from one of our pharmaceutical customers, and this company happens to be responsible for delivering the majority of pharmaceutical drugs throughout the United States.


And this is their distribution process. We can see from the chart that they depend to a significant extent on different SAP applications, but it's not just SAP: I can see Salesforce listed, and I can see custom applications. The key takeaway is that all of these applications have to be integrated in order to support that business process. So when it comes to de-risking change in an integrated enterprise application environment, we need to pay special attention to these end-to-end business processes and how they decompose into individual applications. It won't be enough simply to be able to automate the testing of SAP; we really need to be able to support not just all of those packaged enterprise applications, but also all the different technologies that we're using to build our custom apps. That's testing the end-to-end process, but we then also need to pay special attention to the data as it flows through all of these integration points and is acted upon by these different systems.


Now, traditionally, paying attention to data integrity was a very expensive endeavor, because it depended on an extremely specialized set of skills: people who understood at a very detailed level how each of the different systems persisted and operated on that data. We were always having to write very low-level SQL scripts to try to extract that data and then combine it with data from other sources. So it was a very expensive, high-cost endeavor, and thus difficult to achieve. But we need to pay attention to it, because data flows through this incredible pipeline from the original transactional systems, where maybe we don't retain the data, so we have data loss. We may have duplication of data, given the different systems that are storing and processing it. We may have developed integrations using ETL tools, so we need to understand any transformations that are taking place and be mindful of missing constraints and inconsistencies. The point is, as we go from left to right, the cost to fix data integrity issues just grows, to the point where the information reports that are served to our business users, who make informed decisions based on them, can be extremely expensive to rectify. It's a classic example of garbage in, garbage out.
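The kinds of issues just listed (data loss, duplication, unexpected transformations) come down to reconciling a source system against a target system after an integration step. Here is a minimal sketch of such a reconciliation; the table shape, key field, and issue categories are hypothetical, and tools like Tosca Data Integrity automate comparisons of this kind at far greater scale.

```python
# Illustrative source-vs-target reconciliation after an ETL step.
# Field names and the "order_id" key are assumptions for the sketch.

def reconcile(source_rows, target_rows, key="order_id"):
    """Return keys missing from the target, duplicated, or mismatched."""
    issues = {"missing": [], "duplicates": [], "mismatched": []}
    target_index = {}
    for row in target_rows:
        k = row[key]
        if k in target_index:
            issues["duplicates"].append(k)     # same key loaded twice
        target_index[k] = row
    for row in source_rows:
        k = row[key]
        if k not in target_index:
            issues["missing"].append(k)        # data loss in the pipeline
        elif target_index[k] != row:
            issues["mismatched"].append(k)     # unexpected transformation
    return issues

source = [{"order_id": 1, "amount": 100}, {"order_id": 2, "amount": 250}]
target = [{"order_id": 1, "amount": 100}, {"order_id": 1, "amount": 100},
          {"order_id": 2, "amount": 200}]
print(reconcile(source, target))  # finds the duplicate and the mismatch
```

Checking every record this way, rather than sampling a few here and there, is the "forensic" approach discussed next.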


Now, here's an example of a customer that's achieved outstanding success through their use of Tosca Data Integrity, a specialized set of components designed to address this problem directly. At Worldpay, an enormous amount of money was invested in applications designed to help their business leaders make informed, data-based decisions. If there are any errors in that data, or in the pipeline that delivers those information reports, it can have significant effects. But Worldpay were able to achieve extreme success using Tricentis Tosca Data Integrity. They've been able to reduce their time to market (I think of that as increasing release velocity), and they've been able to reduce costs by an astonishing 90%. One of the most remarkable things about this is that, rather than using traditional methods, which maybe sample a few data records here or there to judge whether we have integrity, Worldpay can take a forensic approach and examine everything, because of the incredible performance that Tosca Data Integrity provides.


Coming now to the third and final part of the session, what I'd like to reflect on is some of the new sources of innovation that we see, and how we can future-proof this incredibly powerful engine that we've developed to support change in our enterprise applications. If we consider the enterprise applications that are available to us today, some level of capability is shipped in the box: we install the software and we have some level of capability. But that's not going to be enough to support our business processes, so all of the leading enterprise applications provide some level of configuration control. These are things where we go in and, without needing to be a programmer, effect change; but we do need a combination of domain expertise in the software application and subject matter expertise in our business operations.


Still, this is a larger pool of resources that we can draw on to support this work. But at some point in all of these enterprise applications, we reach a hard limit, and that's the barrier that separates configuration from code. Now, I think it was Grady Booch who said that the history of software engineering is the rise in levels of abstraction, and we're always striving to produce tools that are far more expressive in helping us design solutions to the problems we face. What we now see with the rise of low-code/no-code tooling is that the notion of who is and who is not a developer is changing. In fact, more and more people, from traditional developers to subject matter experts, to power users and end users, are being equipped with the tools that they need to create solutions for themselves, and enterprise applications like SAP, Salesforce, and ServiceNow are very much a part of this trend.

Indeed, SAP recently acquired AppGyver, a provider of low-code/no-code tooling. What that means for us in our organizations is that as this barrier is pushed further and further away, we can expect to see more solutions delivered on those enterprise applications. In a sense, those enterprise applications will provide a set of services that will be composed and consumed by these new innovations. This is going to create a change in the number of integrated applications. In fact, in some of the research that we conducted recently with the Americas' SAP Users' Group, organizations are forecasting a 31% increase in the number of integrated applications that they expect to support in their environments. So it becomes ever more important for us to focus on the data and on the end-to-end business process. We're always told about choices: we can go faster, we can reduce risk, or we can lower our cost. Well, with Tricentis, you can choose all three. Thank you very much for your time today. I hope you enjoy the rest of Summit. Thank you.