Scaling Data Analytics Delivery Model With DevOps Practices

As a data-driven company, adidas has a strong demand for a reliable, scalable and fast Data Analytics Platform.


This presentation is about how we enabled DevOps principles, along with a number of capabilities from the Accelerate book (such as Continuous Delivery and architecture for scaling), in the Data Warehouse domain. During the journey we convinced more than 10 teams around the world that DevOps principles bring them speed, high quality and empowerment in the creation of data models and data pipelines.


Now that the practices are adopted, 10+ teams are delivering Data for Analytics demands:

- independently from each other

- faster and with higher quality

- in smaller batches

- deploying changes to complex data models and pipelines on demand, multiple times a day, instead of the old fixed biweekly cycle.


Some details:

- Our purpose is to increase speed and quality in delivering Data for Analytics use cases by:

  - Increasing the quality of delivery in Data Warehouse environments

  - Increasing the speed of changes

  - Improving the reliability of complex Data Models and Data Pipelines

  - Improving the reliability of deployments


Top challenges:

- Merging skills and ways of working of experts coming from Data Warehousing and Software Engineering backgrounds

- Finding Continuous Integration and Delivery patterns acceptable for Data models and pipelines

- Some tools in the data stack cannot be managed as code

- Code (the definition of a database object) is tightly coupled with the data

- High complexity of atomic objects (e.g. a database view harmonizing KPI calculations from data points across multiple systems can contain several thousand lines of code)


Dmitry Luchnik

Director Data Analytics Architecture, adidas

Transcript

00:00:13

Dozens of production deployments a day, many independent teams doing that. It sounds familiar, right? But what if this is happening in an enterprise data warehouse? Hi, my name is Dmitry Luchnik, and at adidas I'm a solution architect in the data analytics department. I'm excited to be here and to share with you how we scaled the delivery model in data analytics. Thank you for spending these minutes of your time with me on this. Let me start with a short intro of what data and analytics at adidas means. adidas is a global company, running businesses on all continents, grouped into five markets, and that means our demands for analytics are also distributed across the globe. We have five markets and functions like marketing, product creation, supply chain, sales, finance, and so on. It is clear that every function requires data to be efficient. Let me show what that means in numbers. On a global scale, we are talking about many, many teams in every function of the company who have data analytics at the core of every decision they make. It's hundreds of people in 15 or even more teams who create reports, connect datasets, build prediction models, and so on. And it's thousands of those who consume those reports and prediction models or analyze what is happening right now. From a data volume perspective, we are on a petabyte scale. This data is used a hundred thousand times every week, and it's needed for day-by-day decision-making.

00:02:05

Where is the data coming from? A big footprint, of course, comes from our global platforms supporting core business processes; these cover areas like supply chain, sales or finance. Then there are the systems supporting e-commerce, like Adobe or Google Analytics. These systems are managed and governed centrally. Last, on top, we have a fair amount of local sources as well. In general, we collect and process the data from more than 40 different source systems. So what do we have? A global company with distributed analytics demands and usages, multiplied by centralized data sources and platforms. So what is our approach for speed in analytics? Here you see an overview of the data analytics department. Our role is to drive the digital transformation towards data-driven decision making. How do we do it? We provide reliable data, help with insights and automate decision-making. We have several offerings that address two different types of personas: consumers of standardized reports and KPIs, and data explorers or data scientists. We also take care of collecting data from global sources and making it available for analysis. Now I'm going to go deeper into our operational analytics offering.

00:03:36

Why? Because this is an area where we need speed and flexibility to cover market needs for insights, to enable and empower those 15 teams of content creators across the globe, and to serve those 3,000 explorers. So what is operational analytics, as we call it? For me, it is the exploration, reporting and visualization platform. It can be used for flexible browsing through all available data, finding answers to the question of the hour, getting insights from combining local and global data together. Some sample queries or questions are presented here. We talk about how to rebalance our stock between European countries, or how to balance load in the factories, or under what circumstances it is worth going with an air shipment as opposed to an ocean one.

00:04:41

To answer these questions, you need to consider multiple data points: demand, supply, availability of the stock. It is a big area for supply chain. We are structured in what we call a hub-and-spoke model. The hub is for central functions like onboarding, providing platforms and tools, shaping ways of working, and of course provisioning data from the global sources. Spokes are empowered business intelligence teams located close to the analytical consumers; a spoke can be a specialized technological IT team or a business analytics team in the market. They know the needs, they can react faster, and they should be unblocked to do their job. This structure is based on several learnings we have made over the years. Local teams are much better aware of immediate needs, of what is really hot and what requires an action right now, and of the visualizations needed for that. The spokes are also usually very competent in data preparation for the use cases, because if not for such platforms, they would be doing that preparation manually in Excel, Access or who knows what. However, the heavy lifting is centralized: connecting to global sources, getting the data out, harmonizing dictionaries, running data quality checks.

00:06:12

And so we shaped the architecture around the spoke areas, giving the spokes full empowerment to manage their scope on schema level, on data transformation or ETL level, user management, all that. You can say it goes exactly according to Conway's law, but actually we shaped the spoke teams around a couple of architecture principles; you will see more loosely coupled concepts here. And this is one of the main ideas which helped to scale up our delivery model. By the way, this is not the first attempt to do something like this at adidas. There were earlier variations; they were not commonly accepted as they offered a limited level of empowerment for the spoke teams. The idea of distribution was right, but the implementation kept central governance too tight and therefore was rather blocking than enabling others. The current iteration is very different: the foundational architecture and, very importantly, the change management processes are now based on DevOps and continuous delivery principles from the Accelerate book.

00:07:29

So how do we deal with empowering spokes while not compromising the integrity of the overall solution, and how do we hold such a data analytics environment together? It is fair to say that there are some challenges, especially how central teams are supposed to bring global changes that are not disruptive for the spokes, how local teams can exchange know-how and help each other, or, for instance, how reporting key figures implemented locally stay aligned with the global logic. If we do not address these technical questions, they easily become business issues: wrong numbers, misaligned reporting figures in markets and headquarters, or super long lead times, which means no way to act operationally based on up-to-date data.

00:08:24

To address these challenges, we applied an architectural formula. We took the data warehouse experience we had internally, actually decades of experience combined, and looked at what was happening on the market. Luckily for us, the Accelerate book was published and presented a condensed overview of what should be done. We checked architecture patterns from the software engineering domain and tried to fit them into the shape of the data domain. The result is what we call Blender. It gives speed and empowerment to the spokes on the one hand; on the other hand, it ensures governance and quality; and, multiplied by automation, it helps to accelerate the data journey. This is, by the way, again one of our core principles of the fast analytics ecosystem: empowerment loosely coupled with governance. The name Blender itself stands for the blended know-how of several teams, blending domain experience from data warehousing and software engineering, and its main purpose is to blend data. Now let's go deeper into how it works.

00:09:46

This looks like a normal continuous integration setup. We have a Bitbucket repository in the center, continuous integration and continuous delivery orchestration with Jenkins, virtual disposable environments and so on. But wait, this is only nearly normal, because what we have here is a stateful productive application, which means it cannot simply be redeployed from scratch. It's a box with many, many terabytes of data, so this box has to be changed incrementally. And who makes the changes? Our data warehouse experts. They know the data modeling techniques inside out, even in the middle of the night, but they might not be that fluent with git, continuous delivery, Jenkins and so on. That means in git we need to store artifacts understandable by data warehouse experts: SQL objects, ETL jobs, inserts, selects, shell scripts. The same goes for the incremental changes I was talking about: they also should be in the native scripts the teams are working with. Known concepts and components minimize the intrusion into the regular way of working; Blender is tailored to the skills of the data warehouse community.
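To make this concrete, here is a minimal sketch, assuming a file layout and bookkeeping we invented for illustration (not the actual Blender implementation): incremental changes live in git as plain SQL files, and a small Python script applies the not-yet-deployed ones in order against the stateful database.

```python
# Illustrative sketch only: apply incremental SQL change files in order,
# remembering which ones already ran so the stateful database is never
# rebuilt from scratch. File and folder names are hypothetical.
import pathlib

APPLIED_LOG = pathlib.Path("deployed_changes.log")   # hypothetical bookkeeping file
CHANGES_DIR = pathlib.Path("incremental_changes")    # e.g. 0001_add_column.sql, ...

def pending_changes():
    applied = set(APPLIED_LOG.read_text().split()) if APPLIED_LOG.exists() else set()
    files = sorted(CHANGES_DIR.glob("*.sql")) if CHANGES_DIR.is_dir() else []
    return [f for f in files if f.name not in applied]

def deploy(execute_sql):
    """Apply every not-yet-applied SQL file. `execute_sql` is any callable
    that runs a statement against the target database (driver left open)."""
    for sql_file in pending_changes():
        execute_sql(sql_file.read_text())
        with APPLIED_LOG.open("a") as log:
            log.write(sql_file.name + "\n")

if __name__ == "__main__":
    deploy(execute_sql=print)   # dry run: just print the statements
```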

00:11:13

We also need to test the incremental statements and incremental deployments; this keeps the full data model reliable at any moment. And one last thing: a sequence of changes is a good idea when those changes land in production, but it is not very descriptive when you try to find out what the actual structure of a table is. I mean, browsing through tens of add-column, remove-column, add-column, remove-column commands is not really rewarding. Therefore we regularly condense all this into full table definitions to show the real, up-to-date structure. So that's an overview of our continuous delivery cycle.
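As an illustration of that condensing idea (table and column names are made up, and this is not the actual Blender tooling), a few lines of Python can replay a history of add/drop column statements and print the resulting up-to-date definition:

```python
# Replay ADD/DROP COLUMN statements to derive the condensed, current table DDL.
import re

incremental_changes = [
    "ALTER TABLE sales_orders ADD COLUMN order_id BIGINT",
    "ALTER TABLE sales_orders ADD COLUMN net_sales DECIMAL(18,2)",
    "ALTER TABLE sales_orders ADD COLUMN channel VARCHAR(20)",
    "ALTER TABLE sales_orders DROP COLUMN channel",
    "ALTER TABLE sales_orders ADD COLUMN sales_channel VARCHAR(40)",
]

def condense(statements):
    columns = {}  # name -> type, in insertion order
    for stmt in statements:
        add = re.match(r"ALTER TABLE \w+ ADD COLUMN (\w+) (.+)", stmt)
        drop = re.match(r"ALTER TABLE \w+ DROP COLUMN (\w+)", stmt)
        if add:
            columns[add.group(1)] = add.group(2)
        elif drop:
            columns.pop(drop.group(1), None)
    return columns

body = ",\n  ".join(f"{name} {ctype}" for name, ctype in condense(incremental_changes).items())
print(f"CREATE TABLE sales_orders (\n  {body}\n);")
```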

00:12:00

Some words about the testing framework. We really wanted to make it fast, to ensure teams get meaningful feedback on a proposed change within five minutes. What do we test? I'll give you some examples of the testing scope. As I said before, we test the incremental deployment: whether it would work well when applied to the productive database. That means we first create a copy of production in a virtual environment, then apply all the SQL commands from the developer. If everything is fine, this is green, and then we dispose of this database. We also check whether the data model exposed to the front end is still working as expected, because we have the full data model in the virtual environment. Last, we can also check whether reporting key figures implemented locally are still aligned with the global logic. Maybe, looking at this picture, some of you recognized your own past thoughts about what works and what does not work well together.
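A rough pytest-style sketch of that flow, with SQLite standing in for the real warehouse and invented object names (this is not the actual Blender test framework): stand up a disposable copy, apply the proposed change, check that a front-end view still answers, then throw the environment away.

```python
# Illustrative only: disposable environment + incremental change + model check.
import sqlite3
import pytest

PRODUCTION_DDL = [   # stand-in for the cloned production data model
    "CREATE TABLE sales_orders (order_id INTEGER, net_sales REAL)",
    "CREATE VIEW v_net_sales AS SELECT SUM(net_sales) AS total FROM sales_orders",
]
PROPOSED_CHANGE = ["ALTER TABLE sales_orders ADD COLUMN sales_channel TEXT"]

@pytest.fixture
def disposable_env():
    env = sqlite3.connect(":memory:")    # the virtual, disposable environment
    for ddl in PRODUCTION_DDL:
        env.execute(ddl)
    yield env
    env.close()                          # disposed either way, green or red

def test_incremental_change_deploys(disposable_env):
    for statement in PROPOSED_CHANGE:    # fails fast if the SQL is broken
        disposable_env.execute(statement)

def test_frontend_view_still_answers(disposable_env):
    for statement in PROPOSED_CHANGE:
        disposable_env.execute(statement)
    assert disposable_env.execute("SELECT total FROM v_net_sales").fetchone() is not None
```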

00:13:24

I myself definitely used to think like this: that speed in delivery compromises quality, that empowerment conflicts with central governance, that quality can only be ensured by tighter controls. Luckily, this changed once we introduced Blender, our continuous delivery framework, to the data warehouse. What we saw: some of the reds turned into green, and I dare to say the green got even greener. As we follow the continuous integration approach, the spokes know that an incremental change is not going to break their data models or important front ends. Empowerment plays well together with good continuous delivery practices. Quality is supported by CI and a review culture, so speed is not compromised. This is a big contrast to the approach we used to have some years ago, with quality checks done by multiple people coming together in a meeting to prepare the once-a-week production deployment. And as we follow those loosely coupled principles, we can support central governance.

00:14:44

With a clear test report, the local teams are not blocked from making an urgent change. Teams are fully empowered to decide whether they want to proceed despite a failed central test, because the fix can be applied later, or whether that failed test is a real blocker; the decision is fully theirs. Now let's look inside Blender. It consists of three layers, clearly separated. The first layer is for data warehouse content: data models, tables, views, ETLs, basically inserts, stored procedures and so on. Using Blender means contributing through layer one to the data warehouse. All spokes are using at least this layer. While designing layer one, we kept to the formats familiar to the data warehouse teams, and because of that it allows a quick start: a one-hour onboarding, and continuous integration and continuous delivery are enabled. The second layer is our testing framework I mentioned before. Most of it is provided by Blender; however, spokes can create their own local tests for their own purposes. This requires a bit more than a data warehouse background: a bit of Python knowledge, a bit of knowledge of test data management, without the need to go deep into continuous integration. The core team maintains contribution templates, so the effort to onboard spokes here is also relatively low.
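For example, a spoke-local test on top of the generated framework could check that a locally implemented key figure matches the globally governed one; the sketch below uses SQLite and invented view names purely for illustration.

```python
# Illustrative spoke-local test: local KPI view must equal the global KPI view.
import sqlite3

def test_local_kpi_matches_global():
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sales (net_sales REAL)")
    db.executemany("INSERT INTO sales VALUES (?)", [(10.0,), (5.5,)])
    db.execute("CREATE VIEW v_global_net_sales AS SELECT SUM(net_sales) AS kpi FROM sales")
    db.execute("CREATE VIEW v_local_net_sales AS SELECT SUM(net_sales) AS kpi FROM sales")
    global_kpi = db.execute("SELECT kpi FROM v_global_net_sales").fetchone()[0]
    local_kpi = db.execute("SELECT kpi FROM v_local_net_sales").fetchone()[0]
    assert abs(global_kpi - local_kpi) < 1e-9
```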

00:16:41

And now the third layer: Blender itself. The ideas behind the architecture, the design principles, the continuous integration processes, and also all the functionality like the deployment protocols. Maintaining this layer requires a deeper understanding of both data warehouse and software engineering concepts. This layer is maintained and evolved by the core Blender team. Within adidas it's open source, so the spokes can also contribute there. This structure helps to drive speed and scalability in the data analytics area. Speed in onboarding: an understandable first layer which uses only domain objects from the data warehouse world, data models and ETL. Speed in applying changes to the data warehouse, as Blender enables the continuous delivery working model. Speed in foundational development, and I will talk about this on the next slide: as the Blender template is decoupled from each spoke area, the foundation and all functionality can grow independently, at their own pace, and adoption of the new features by spokes is actually a merge on demand.

00:18:07

Some words on this decoupling again: how is it decoupled, or better said loosely coupled, from the spokes' content? Basically, every spoke has its own Blender. It is a separate repository in Bitbucket, fully owned by the spoke: users, permissions, approvers and so on. With the first layer of Blender, the teams manage their own data models and ETL processes. Just to repeat: each spoke has its own copy of Blender, all three layers, including the core. It is something similar to a Linux distribution: an open-source, centrally maintained kernel is used as content in each installation. We do not have some central runtime; Blender in each spoke is fully independent from this perspective. What we do have is a Blender template repository, and here all central components are developed: our centralized governance checks, whatever we want to have running in all spoke repositories, deployment protocols, and so on. This means all Blenders are loosely coupled with the template. Even so, the basic principles and processes are aligned across all spokes, while developments can go on completely independently. Adoption of new features is also under control, a spoke-by-spoke decision. Roll-out of the central functionalities, like the ones shown here, is automated, but without interfering with the spoke's locally owned content.
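A simplified sketch of that rollout idea, with made-up file names and layout (not the actual mechanism): copy the centrally maintained "kernel" files from the template repository into a spoke repository while leaving spoke-owned content untouched.

```python
# Illustrative only: sync central Blender files into a spoke repo.
import pathlib
import shutil

CENTRAL_FILES = ["deployment_protocol.py", "governance_checks.py"]  # hypothetical names

def roll_out(template_repo: pathlib.Path, spoke_repo: pathlib.Path) -> None:
    spoke_repo.mkdir(parents=True, exist_ok=True)
    for name in CENTRAL_FILES:
        src = template_repo / name
        if src.exists():
            shutil.copy2(src, spoke_repo / name)   # central "kernel" is overwritten
    # everything else in spoke_repo (data models, ETL, local tests) stays untouched

if __name__ == "__main__":
    roll_out(pathlib.Path("blender-template"), pathlib.Path("spoke-emea"))
```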

00:20:02

Now let's see: does it actually help to follow the guidance from Accelerate? We tried to map which of the capabilities we have with Blender; some are done, some not, and let's see how they actually help us in managing our data warehouse. First, the team-based review creates full transparency of what exactly is going to be deployed. No changes hidden inside some transport packages, as some data warehouse management tools do it; it is fully transparent content handling. We can commit a data model together with the loading or ETL process dealing with this model modification, which follows the same principle. So we really have continuous integration of the complete data warehouse for reliable tests. Last, a multi-eye review and a lean approval process before productive changes pass. An interesting effect we saw last year when COVID hit: it impacted our priorities, and we had to reshuffle the focus on some of the use cases. As you can imagine, certain activities were put on hold, and the spoke developers had to switch content, sometimes contributing to a completely different spoke and a completely different business function.

00:21:44

But because we had evidence of similar changes in the past, and because the change processes are harmonized with Blender, new team members were onboarded quite fast. We haven't noticed any reduction of the delivery speed even under such circumstances. So now I'd like to show you how we quantified the outcomes. I'm a data guy at the end of the day, so let's zoom into some of the KPIs. Why not take the Accelerate book as guidance here as well? What you see here is the data from the 2019 State of DevOps report. The ambition of adidas is to get to the elite group of software delivery performance, based on the four KPIs from Accelerate: deployment frequency, lead time for changes, mean time to recover, and change failure rate. From the data analytics side, we also contribute to this ambition. What we saw is that, with the right tools and processes, data analytics actually can play in this league.

00:23:00

I will go deeper into the top two KPIs. Deployment frequency we can measure very precisely. What you see here on the slide is an overview of all deployments, from our first one done two years ago to the current pace of changes. Every vertical bar is one day, and the height of the bar is the number of deployments during that day. Every color within the bar is a separate spoke. The big picture speaks for itself: the spokes are in an on-demand, several-times-a-day deployment culture. Quite interesting that over the last year the demand to speed up in analytics also grew, and you can see this as a rise from the second half of 2020.

00:24:01

The next KPI is lead time for changes. We can also measure it rather precisely, at least the time from first commit to production. How to read the charts here: this bucket is below one hour, this one between one and three hours; then, shown separately, everything below a single day but more than three hours is booked under a day; this is more than a day but less than a week, and so on. So what you can see here is that for most teams, a less-than-an-hour flow from commit to production is a reality. All in all, the less-than-an-hour flow makes up about 25% of all changes. An example on top here is a typical small change taking less than half an hour end to end: development, testing, CI feedback, review, approval, merge and deployment to production. About 75% of changes, all blue areas combined, are changes which are done within a single day. For sure we still have some heavy-lifting topics; they are still there, and they can take a bit more time than a day. By the way, the heavy lifting is visible on this chart: two teams tending to do more day-to-week cycles, this one and this one, the blue and the brown one. Those are the central teams who take care of the global data and the most foundational topics.
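The bucketing itself is straightforward; here is a quick sketch of how lead times from first commit to production could be grouped into categories like the ones shown on the chart (the exact thresholds are assumptions for illustration).

```python
# Group a commit-to-production lead time into a reporting bucket.
from datetime import timedelta

BUCKETS = [
    ("< 1 hour", timedelta(hours=1)),
    ("< 3 hours", timedelta(hours=3)),
    ("< 1 day", timedelta(days=1)),
    ("< 1 week", timedelta(weeks=1)),
]

def bucket(lead_time: timedelta) -> str:
    for label, upper_bound in BUCKETS:
        if lead_time < upper_bound:
            return label
    return ">= 1 week"

# Example: a small change taking 27 minutes from first commit to production
print(bucket(timedelta(minutes=27)))   # -> "< 1 hour"
```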

00:26:01

Now, harvest time. After two years on the journey, we have some interesting outcomes. We have 14 BI teams around the globe working in parallel, independently, yet loosely coupled with headquarters. They are solving important needs of the 3,000 analysts, with about a hundred thousand report executions per week. They manage about 80 terabytes of uncompressed data inside this environment. Developments can be isolated in small batches. Lead times, as you saw yourself, went down, in some cases to less than 20 minutes. Deployments are happening on demand, when needed, multiple times a day, instead of the old bi-weekly or weekly deployment cycles. The complete content of a local data warehouse can be tested over a cup of coffee. It is also fair to say something about the learnings we had along the way. Learning one: abstraction layers like Liquibase, as it turned out for us, do not really fit our needs.

00:27:21

To ensure adoption of the overall approach, we had to go with the native SQL way rather than Liquibase. Of course this has its own challenges, but it also has some positive effects on adoption. To achieve a high level of adoption of the deployment protocol, we had to make it super reliable and skip all potential bells and whistles; actually, we had to rewrite it seven times within the first nine months. And then, of course, our CI speed is a must: that idea of running full tests over a cup of coffee also made a difference.

00:28:08

Now, before I close, the help that I am looking for. This journey shows that we can scale up data consumption: once the data is available, collected and unlocked, fast insight creation is possible. The next step we are looking at is how to speed up the process of bringing new data into analytics. We started the journey towards a data mesh, as described on Martin Fowler's blog, with the purpose of making data production easy and fast, so business domains can participate. This is an active stream, and any ideas on how to make it work are super welcome and super relevant; I would be grateful for that. With that said, let me say a big thank you for staying with me and for your attention. I would be happy to answer your questions.