Despite being a tech-driven organization, some of our teams are much faster at adopting technology and process improvements than others. We support their development. But we also want to make sure that the teams which can't keep this pace benefit from these improvements as well and are not left behind. We explicitly chose not to apply the two-speed IT paradigm. On the technical level, interdependencies between the corresponding applications can be limiting for both parties. I will introduce patterns that we apply in our organization to deal with these situations. They include technical measures, organizational measures, and ways to share knowledge. It is important to us that people are confident about their way forward. Yet, of course, we face some challenges.
Head of Customer Solutions, Hermes Germany GmbH
Hello, my name is Stephan. I'm happy to speak at the DevOps Enterprise Summit again. I already introduced parts of our DevOps and cloud transformation story at earlier conferences. Today I would like to tell you about our idea of leaving nobody behind in our transformation. I'll start with a short introduction to give you an understanding of our setup. I will then introduce the technical patterns that we apply, as well as organizational development patterns. I will give a general understanding of how we share knowledge in our environment and conclude with some takeaways. Hermes Europe, which is our parent company, is the largest postal-independent parcel company in Europe.
I am part of Hermes Germany. Here we take care of parcel delivery in Germany, both to the doorstep and to more than 16,000 Hermes parcel shops. I'm working at Hermes as a head of development, and I am responsible for the customer IT. At Hermes, technology plays an important role because we need to adapt to an ever-changing market, one that has been growing by five to ten percent per year for decades. DevOps as a culture plays an important role for us, with its principles of automation, measurement, and sharing of knowledge, and delivering as continuously as possible is one of the most important aspects for us.
We set that as a strategic goal in 2017, which means that we are now in our fifth year of following this path. And we are quite happy with the results overall. We see a good increase in delivery speed, and on the other hand incidents have decreased. Thanks to these circumstances, and to the pervasive cloud technology and automation we now have in place, our people feel safe to change and deploy their respective applications whenever necessary. So we have a high degree of freedom concerning the processes. We generally progress well. However, in an organization of about 30 to 35 product teams in total, it is not the same everywhere.
And we asked ourselves: can every team participate in the transformation easily? Is this possible at all? And do we see a speed increase everywhere? The answer is that we found teams running and transforming at different speeds. To some degree this is probably natural for a large organization. So we asked ourselves whether this is something to pay special attention to, or whether we should just wait and see what happens, and we decided against wait-and-see. You probably remember the famous two-speed IT paradigm, which was coined five to ten years ago when technological innovation around cloud was still in its early stages.
We are convinced that taking that route makes it really difficult to have a healthy tech culture, and even more difficult to have a healthy tech organization. It makes things complicated, since you always have to argue about which approach applies in which context. But foremost, we don't want any losers in our transformation. We don't want anyone to be left behind. We want everyone in the same working system, at least to some degree, as much as possible. And this is the hard way, not the easy path, since it takes extra dedication, but from our point of view it is worth it.
I will introduce you to the things that we discovered and the patterns that we use to deal with this. Before diving in, a short disclaimer: in my presentation I will use the terms "slow system" and "slow system team", and "fast system" and "fast system team". I clearly do not use these terms in my day-to-day work, and they are not meant to stigmatize the teams, which is why I mention this explicitly now. I just needed some terms to describe the situations so that you can easily follow.
The question we had was: why are some teams slower than other teams? We found two main reasons. Reason one is overly complex logic. Some teams have too many domains addressed in their application, too many business contexts, and this often comes with lots of dependencies that are then hard to manage. Along with this, a particular release is often made difficult by complicated testing: the necessary scope might be too large because there are too many dependencies, or the landscape needed to conduct the testing is too hard to set up. Reason two is the technical setup. Slow systems often come with a certain age. I will not use the term "legacy" because I think that is not really fair.
Thus, the technological foundation of such slower systems was not meant to deliver continuously. Other paradigms played a more important role back in those days. One example of such an environment is the Java EE containers that we sometimes find. Another aspect is an architecture of the slow system that is not well crafted, for example when the system was not meant to live for multiple years but was thought of as a one-trick pony: we develop it, and one year later we will replace it with another application. Which never happened. On the other hand, we didn't find any example in our organization where the team setup itself was an issue. So small capacity is no typical sign of a slow team, and the same is true for the skills of the team: in all of the cases in our organization, the slower teams are at least as well skilled as the faster teams. Taking a look at the issues we found, we realized that it is not easily possible to overcome those reasons.
So we had to develop patterns to deal with that. Let's start with some technically oriented patterns that we use to ease collaboration between teams. Pattern one: isolate dependencies to slow systems. This is actually the most common pattern that we find in our landscape. The fast system team isolates the dependencies to the slow system by adding a dedicated adapter or gateway, almost treating the slower system as a kind of external system. By applying this pattern, the test scope is reduced for the fast system team when releasing their own application at high frequency, because the adapter covers the changes and makes those internal changes invisible to the slower system.
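The adapter idea can be sketched in a few lines. This is a minimal illustration, not Hermes' actual code; all names (`SlowParcelService`, `ParcelStatusAdapter`, the `STATE_CODE` payload) are invented for the example:

```python
"""Pattern 1 sketch: isolate the dependency on a slow system behind an
adapter, so the fast team's internals stay invisible to the slow system."""

class SlowParcelService:
    # Stand-in for the slow system's interface, which we treat as external.
    def query_status(self, parcel_id: str) -> dict:
        return {"PARCEL_ID": parcel_id, "STATE_CODE": "04"}  # legacy-style payload

class ParcelStatusAdapter:
    """The fast team talks only to this adapter; the slow system's data
    model and quirks are translated here and nowhere else."""

    _STATE_NAMES = {"04": "in_delivery"}  # hypothetical mapping

    def __init__(self, slow_service: SlowParcelService) -> None:
        self._slow = slow_service

    def status(self, parcel_id: str) -> str:
        raw = self._slow.query_status(parcel_id)
        # Translate the legacy payload into the fast system's own vocabulary.
        return self._STATE_NAMES.get(raw["STATE_CODE"], "unknown")

adapter = ParcelStatusAdapter(SlowParcelService())
print(adapter.status("P-123"))  # the fast system never sees STATE_CODE
```

When the slow system's interface changes, only the translation inside the adapter needs to move, which is exactly what keeps the fast team's test scope small.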
The same is true when there are changes to the interface between the two applications: these changes can be coordinated well, and they often primarily affect the adapter, so we can limit the implementation scope and the test scope largely to the adapter itself. Pattern two: formalize the interface agreements by using contracts. We use the contract-based testing approach quite often in this scenario, because it allows us to decouple the actual implementations and even leave out a thorough integration test. At the bottom of the slide, you find the timeline for implementing such an interface using this methodology: the two teams you see here have to synchronize when forming the formal contract; for the actual implementation they are free to plan as suits their own processes; and only for the go-live do they need to synchronize again.
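The contract-first workflow just described can be sketched as a shared contract that both sides verify independently in their own pipelines. This is a simplified illustration rather than a real contract-testing framework such as Pact; the contract fields are hypothetical:

```python
"""Pattern 2 sketch: a formal contract that consumer and provider each
verify on their own, so neither team needs a joint integration environment."""

# The agreed contract: field name -> expected type (hypothetical interface).
CONTRACT = {"parcel_id": str, "status": str, "updated_at": str}

def satisfies_contract(payload: dict) -> bool:
    """True if the payload carries exactly the agreed fields with the agreed types."""
    return (payload.keys() == CONTRACT.keys()
            and all(isinstance(payload[k], t) for k, t in CONTRACT.items()))

# Provider side (slow system team): verify a sample response in their pipeline.
provider_response = {"parcel_id": "P-123", "status": "in_delivery",
                     "updated_at": "2022-05-01T10:00:00Z"}
assert satisfies_contract(provider_response)

# Consumer side (fast system team): test against a stub that honors the
# contract, instead of against the real slow system.
def consumer_handles(payload: dict) -> str:
    assert satisfies_contract(payload)
    return f"parcel {payload['parcel_id']} is {payload['status']}"

print(consumer_handles(provider_response))
```

Because each team tests against the contract rather than against the other team's running system, the implementation phases in the middle of the timeline can proceed without coordination.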
By applying this method, we successfully reduce waiting times for both teams: for the fast team, which can plan their time as needed, but also for the slow system team, which does not need frequent coordination with the faster team. The third pattern is joint development. Because the slow system teams usually don't have time for changes that are not essential parts of their own roadmap, we started sending members from the fast system teams to make the necessary changes to the slow system. If possible, we do this collectively with members of the slow system team.
This is still a pretty new approach for us, because it sheds new light on organizations and on team setups, since it almost autonomously allocates and reallocates capacity where it is critical, so that the two teams find their path to making the necessary changes almost automatically. On the other hand, we need to take care that the fast system teams are not pulling the rug out from under the feet of the slow system teams by making changes wildly all over the map. This means that pull requests, and good reviews of the pull requests for these kinds of changes, are very important.
This third pattern is already a mix of a technical approach and a cultural, organizational one. So let's take it from here and look at further organizational patterns. Next to the technical patterns, we have some patterns for organizational development. Before I start with those, first and foremost: even if it is difficult sometimes, even if the way is hard, we don't give up on improving the slow system teams. We very much value these teams and these people; they are a big asset in our organization. These teams typically come with a long-lived team setup and little to no fluctuation.
We have very experienced colleagues in these teams who have been working with the company for years, sometimes even decades. They have a lot of expertise concerning processes, development approaches, and their respective business domains. On the other hand, slowness shall not be used as a weapon, with full roadmaps that leave no room for implementing changes that other teams need, because this might hinder the innovations of other, faster teams due to the dependencies in the landscape. Therefore we need to pay attention to the atmosphere between teams that come with different approaches to development.
What we find is that these teams are often reluctant to pick up the new toys, the new methods, the new technology. So whereas the majority of teams, for example, naturally picks up continuous delivery, this is often still black magic from the viewpoint of the slow system teams. We then hear: look, this might work somewhere else, but certainly not in our environment. We think this often comes with fear of change, fear of not knowing what will come next, of what the change means. So we try to avoid the term "change" and the term "breaking change". Instead, we try to start where the teams are and develop from there, improving step by step, little by little. One concrete example: while continuous delivery is really easy to achieve in modern cloud-native environments, it is hard to achieve with a slow system, which comes with the technical boundaries we covered before, or with process boundaries that might be in place. But what is often possible is to deploy on the cadence of a sprint, which in our case is every three weeks.
By doing this, we give even this team a kind of beat. We get the slow system team into the same working environment as the other teams. This means in turn that all teams understand when they can expect a particular change rolling into the slow system. It also eases the slow system team's planning, because they know that every three weeks there will be a deployment, and it makes it easier to agree with the business on the deployment, because at some point it becomes natural to deploy on a cadence of three weeks.
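A fixed three-week beat like this can even be made mechanical. The short sketch below, with a made-up anchor date, shows how every team could compute the upcoming deployment dates for the slow system; it is an illustration of the idea, not a tool we actually use:

```python
"""Sketch: the next deployment dates on a fixed three-week cadence,
so every team (and the business) knows when a change will roll in."""
from datetime import date, timedelta

CADENCE = timedelta(weeks=3)
ANCHOR = date(2022, 1, 5)  # hypothetical first deployment of the year

def next_deployments(today: date, count: int = 3) -> list[date]:
    """Return the next `count` deployment dates on the three-week beat."""
    elapsed = (today - ANCHOR) // CADENCE  # whole cadences since the anchor
    first = ANCHOR + (elapsed + 1) * CADENCE
    return [first + i * CADENCE for i in range(count)]

print(next_deployments(date(2022, 5, 1)))  # three upcoming deployment dates
```

The point of the cadence is exactly this predictability: the dates are known in advance, independent of what lands in each release.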
The difficulty with going in small steps, a step-by-step transformation, is that it needs a lot of attention to keep track and not lose sight of the goal, because we only ever see the next steps. Of course we need to keep track of the development of all teams, but this is especially true for the slow system teams. And the larger the organization is, the easier it is to hide from a particular change, to bend and wait until the transformation has gone by, a common pattern with organizational changes. This shouldn't be an option. The goal of our organization is to do this transformation, so we want everyone in on this transformation. All in means all in, at whatever speed and in whatever steps. Pattern two: ensure sustainability. In our estate, and probably also in yours, the slower systems are often the more important applications.
We need to consider this when working with these teams. We need to find good arguments when promoting innovations, and eventually translate and adapt these innovations to meet the conditions of the slow system. A good example of this is observability and monitoring. This comes naturally when doing cloud-native development, since it is the only chance to understand if and how an application is performing in production. For the slow systems, the story is a bit different. They are often maintained in a more classic environment, virtual servers, Java EE containers, and this has been done for years or even decades, maybe even with a classic IT operations team around the system.
Thus, the necessity for observability might not be as clear, because the team was successful without it for ages. But by taking small steps, introducing a first metric and then a second metric, taking a look at them, refining them, we were able to convince these teams that this is a good investment. It takes a bit more time, but when we finally had the teams there, they usually had high visibility into their metrics, they were using them day in, day out, and they were even using them to generate some KPIs. The final aspect I would like to cover is the sharing of knowledge. Important to note here: sharing knowledge is no one-way street. It's not only about bringing knowledge of modern technology, modern methods, and modern processes to the slow system team.
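The "first metric, then a second metric" approach can start very small. Below is a minimal in-process counter and timer, a stand-in for a real metrics library (Prometheus, Micrometer, and so on); the names (`Metrics`, `handle_parcel`) are invented for illustration:

```python
"""Sketch: introducing observability in small steps. Step one is a plain
counter, step two a duration measurement; nothing more is needed to start."""
import time
from collections import defaultdict

class Metrics:
    def __init__(self) -> None:
        self.counters = defaultdict(int)
        self.timings = defaultdict(list)

    def incr(self, name: str) -> None:
        self.counters[name] += 1  # first metric: how often does this happen?

    def observe(self, name: str, seconds: float) -> None:
        self.timings[name].append(seconds)  # second metric: how long does it take?

metrics = Metrics()

def handle_parcel(parcel_id: str) -> None:
    start = time.perf_counter()
    metrics.incr("parcels_handled")
    time.sleep(0.01)  # stand-in for the real work
    metrics.observe("handle_seconds", time.perf_counter() - start)

for pid in ("P-1", "P-2"):
    handle_parcel(pid)
print(metrics.counters["parcels_handled"], len(metrics.timings["handle_seconds"]))
```

Once a team sees even these two numbers on a dashboard, refining them toward proper KPIs tends to follow on its own.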
We find that often the opposite is true: we can learn a lot from the slow system teams. It can be about processes, about development approaches, and about business knowledge, because we find lots of business logic in these applications and have lots of experts who know about this business logic. What we do in our organization is regularly organize IT-wide fairs, IT-wide convention meetings, so that all teams can present what they have achieved and what they are doing, and let the other teams learn from that. Secondly, our reviews are open, and it is usually well-invested time to take a look at what other teams do. For some common topics, like continuous delivery, we also establish communities of practice to share knowledge. Honestly speaking, this is the hardest part of sharing knowledge; it's not easy to keep these communities of practice running for a long time, but that is probably part of another story that I might present one day.
I brought a quote from our business side, from Christine, who is one of our trainers in customer service. We asked her about the business impact of deploying on the cadence of a sprint, which means three weeks in our context. And she said: look, we used to intensively train our call center agents with each release, which takes a lot of time, and thus we were very reluctant about new releases of the application arriving on a cadence of three weeks. But we found that the changes rolling in every three weeks were quite small and quite easy to explain, which allowed us to often skip those extra release trainings.
So it made things even easier for us, which I think is a good story. Let me conclude with some takeaways. What did we learn? First of all, there is no one-size-fits-all approach in a DevOps transformation. We found that teams are running and transforming at different speeds. It takes dedication to support the teams which can't absorb modern technology quickly and easily; this needs to be considered and well planned, because it takes management capacity. But this extra effort is worth it for having happy people in our organization and a healthy tech culture. I would like to encourage you to follow a similar path and leave nobody behind in your own transformation, in your own environment. In case you have any questions, this is my contact information. Thanks a lot for listening, and I hope to see you.