Las Vegas 2020

H&R Block: Doing Taxes at the Speed of DevOps

As a tax preparation company, H&R Block must be one of the most seasonal companies in the world. While other companies experience peaks in their business, revenue continues to flow in between. At HRB, all our revenue comes in during a 112-day window, and most of it in just 12 days. This has led us to face the question: how do we modernize during this age of disruption without risking the consumer confidence at the heart of our business?


Two years ago, H&R Block began its DevOps journey. A number of factors came together to drive this initiative. First, we made the decision that it was time to move to the cloud, which brought lots of new options for the automation that drives DevOps. Second, the company decided it was time to modernize most of its applications and platforms. Over the course of the next three to four years, Block will rewrite or refactor some 70% of its software.


To accomplish this huge endeavor, HRB created its Modern Tax Platforms division. Through its leadership, the enterprise is changing the way software is designed and delivered, the way quality is baked into the development process early, the way teams are empowered to take greater responsibility, and the way we think about the fundamental concept of a team.


As with many organizations, there is still much to be done, but the achievements we have already accomplished are impressive. Small, nimble teams are replacing large monolithic systems that evolved over decades. The vision that began in IT is now crossing over and joining hands with our business partners to break down old silos and drive an overall enterprise-wide digital transformation.


Tony Ogden

Director of IT - Modern Tax Platforms, H&R Block


John Roe

VP, IT - Modern Tax Platforms, H&R Block

Transcript

00:00:12

Good afternoon. Welcome to our talk entitled Doing Taxes at the Speed of DevOps. My name is John Roe, vice president of technology at H&R Block, in charge of our modern tax platform systems. Joining me is Tony Ogden, director of technology for Modern Tax Platforms. Today we'll be talking to you about our journey developing a brand-new omnichannel tax engine. It's a journey that started about two and a half to three years ago, and one where we knew from the start that we were going to have to approach the problem differently. With that approach, we started to utilize DevOps. Before we go into the details, let's talk a little bit about who H&R Block is. H&R Block was founded in 1955 by Henry and Richard Bloch as an accounting business that grew to do taxes. Believe it or not, prior to the start of H&R Block, when you needed your taxes done, you just sent your information to the IRS and they did the tax calculations for you. Henry and Richard were at the right place at the right time, and the rest is history with the company being founded. Today we've grown to over 80,000 tax professionals across the US, Canada, and Australia. We have over 11,000 locations, we serve 23 million prepared returns, and about 8 million clients do it themselves. This equates to about one out of every seven tax returns in the US being done using H&R Block software or an H&R Block professional. Now I'm going to turn it over to Tony Ogden to talk to you about our seasonality.

00:01:47

Thanks, John. Good day, everyone, and thank you for your interest in H&R Block's DevOps journey in the modern tax platform space. H&R Block is going to continue to invest in products and services that bring year-round value to our clients. However, it's virtually impossible to remove the seasonality from our core tax business. I'm going to address several of the unique challenges that we have to plan for year in and year out. First and foremost, we have to align with the IRS as it relates to the number of days in our tax season. We typically start in the mid-January timeframe and go through mid-April; that's approximately 112 days of tax season. But we are at the mercy of the state departments of revenue and the IRS to certify their forms and schedules with us in preparation for the season. So if that certification process extends past the first part of the season, those 112 days actually become much less than that.

00:02:40

We do have peaks and valleys that we have to deal with. At the very beginning of the season, from mid-January through mid-February, we typically deal with our refund client base. Then we slow down from the end of February all the way through the end of March, and we see a big ramp-up as we get closer to the end of the tax season, between the first part of April and mid-April, when we typically see our procrastinating clients, like me, or those clients that have a balance due. Those peaks and valleys, though, are starting to become less sharp over the last few seasons, and I think that's primarily driven by client volume spreading more evenly across the tax season. Our biggest challenge, though, continues to be that we experience one client visit each tax season. What does that mean for H&R Block? It means that we have to get it right the first time. We have to provide that wow factor. We have to prove that our experience and our expertise are a differentiator in the marketplace.

00:03:40

I mentioned that we have approximately 112 days in the tax season, but on average it takes 12 days for a client to file their return. Twelve days. So what do those 12 days mean? For us, it means limited opportunity: limited opportunity to convert that client, or to continue to retain our existing clients that prefer our brand. John also mentioned that we have around 11,000 active offices each tax season. As we get to the end of tax season, we close the doors in about 75 to 80% of those offices, which means that we have a limited footprint once the tax season ends.

00:04:23

Additionally, we have about a four- to six-month delivery timeframe, and when you throw in the pandemic, that reduces the timeframe even more. Let me paint the picture of what that means for us. Typically, as we approach the end of tax season, usually about two to four weeks post tax season, we are already planning for next year's initiatives. We're finalizing budget, we're finalizing those strategic imperatives, we're locking in on what the scope is, and we're starting research and design activities. As we approach the summer months, that's normally when we kick off our initiatives. When you extrapolate that out to the beginning of tax season, you'll see that we normally have four to six months to get ready for the upcoming tax season. Now, that's not true of all of our initiatives, as we do have year-round initiatives, such as modern tax and our investment in the cloud, that will span multiple years.

00:05:13

But the majority of our initiatives are limited by that four- to six-month timeframe. And then, last but not least, we have a high-volume seasonal workforce. We have over 80,000 tax pros that we have to onboard for each tax season. That onboarding process typically happens in a four- to six-week timeframe, but we also have to make sure they're properly trained and certified as we approach each of those tax seasons. I'm now going to turn it over to John, who is going to address tax complexity at H&R Block.

00:05:44

Thank you, Tony. When we think of taxes, one of the issues is complexity. That's why a lot of people have their taxes done by a tax preparer, or through a software product that guides and assists them through the tax preparation process. At the end of the day, though, there's really one answer that comes out: either I owe money or I'm getting a refund. But getting to that answer requires a complex web of fields and forms. In fact, we have over 150,000 regulatory fields, and on top of that, some 15,000 tax regulatory changes come in every year. So when you think of those regulatory fields and those 15,000 changes, a lot of automated testing and a lot of regression testing has to be done to make sure that if you change something here, you didn't break it there.

00:06:34

Now, we have changes that come in from the federal government. We also have changes that come in from the various states, plus Puerto Rico on top of that. And some cities and counties across the US, not all, but some, actually have their own tax laws that require people to file taxes. Sometimes they pull from the federal or state return, sometimes they don't; it all depends on the individual instance. On top of all that, each individual has their own unique circumstances. In fact, you could say that doing taxes is a bit like snowflakes: everybody's unique. You might have bought a house this year, you might have gotten married, you might have had a child, you might have adopted a child. All these situations create complexity around your unique situation, and they create complexity in how your taxes are actually calculated. Handling that requires a lot of field analysis and scenario testing to make sure we're doing taxes correctly. To help understand how that works, let's look at a small example of how a field inherits values and calculations from other fields.

00:07:41

In this situation, what we're seeing is an IRS pensions and annuities field and how it draws values from other fields. Each of those fields draws values from other fields, and it keeps cascading down and down and down. So as you see, when you get to the end, a change there can have a cascading impact the further up the stream you go; that field on the far left could be the IRS refund. Understanding how those fields move and combine with each other sometimes comes close to being a circular reference that we have to work through. Now, to do all this, we had to approach the problem differently. Traditionally, what we've done is a lot of brute-force regression testing. We basically had seasonal associates come in; we had test scripts, and they would log into the application and manually go through scenarios, basically sample tax returns, to look for issues. They would manually test everything: whether it printed right, whether the tax calculation was right, whether the screens flowed correctly.
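
To make the cascade concrete, here is a minimal sketch of how a change to one calculated field can propagate downstream through a dependency graph. The field names, formulas, and graph structure are hypothetical illustrations, not H&R Block's actual engine.

```python
# Minimal sketch of cascading calculated fields (hypothetical names/formulas).
class Field:
    def __init__(self, name, inputs=(), calc=None):
        self.name = name
        self.inputs = list(inputs)  # fields this field draws values from
        self.calc = calc            # function combining the input values
        self.value = 0.0

    def recalculate(self):
        if self.calc:
            self.value = self.calc(*(f.value for f in self.inputs))
        return self.value

def recalc_downstream(changed, dependents):
    """Recalculate every field that (transitively) draws from `changed`."""
    for field in dependents.get(changed.name, []):
        field.recalculate()
        recalc_downstream(field, dependents)

# A change to taxable_pension cascades into pensions_annuities, then refund.
taxable_pension = Field("taxable_pension")
pensions_annuities = Field("pensions_annuities", [taxable_pension], lambda v: v)
refund = Field("refund", [pensions_annuities], lambda v: max(0.0, 1000.0 - v))

dependents = {"taxable_pension": [pensions_annuities],
              "pensions_annuities": [refund]}
taxable_pension.value = 250.0
recalc_downstream(taxable_pension, dependents)
print(refund.value)  # 750.0
```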

00:08:45

But we knew we needed to start separating our testing out. One thing we've done is create a UI that we use when we're building out and setting up these fields. What we've enabled is for tax analysts, typically folks who aren't computer programmers, to build unit tests through that UI. So as they build a calculated field, and it could be as simple as take field A minus field B, we give them an easy way to build in unit tests for each different possible scenario. In doing so, every time we do a build, these unit tests get run, so that if we make a change on the very far right of that previous slide, we'll unit test the far left and see whether we actually broke anything or not.
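
Here is a sketch of what one analyst-authored test might boil down to once the UI generates it: a calculated field (field A minus field B) checked against each scenario. The function and values are hypothetical; the real tests are built through the UI rather than written by hand.

```python
import pytest

def net_amount(field_a, field_b):
    """Hypothetical calculated field: field A minus field B, floored at zero."""
    return max(0.0, field_a - field_b)

@pytest.mark.parametrize("a, b, expected", [
    (100.0, 40.0, 60.0),  # typical case
    (40.0, 100.0, 0.0),   # field B exceeds field A: floor at zero
    (0.0, 0.0, 0.0),      # empty return
])
def test_net_amount(a, b, expected):
    assert net_amount(a, b) == expected
```

Because tests like these run on every build, a change on the far right of the dependency chain immediately exercises the assertions on the far left.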

00:09:29

Along with unit tests, we have automated test execution. When those kick off, we run through various state tests, various unit tests, and scenario tests as well. Now, a scenario test is simply a sample tax return at the highest level. Before we file with the IRS and with the states, we have a process where we do certification returns: they give us a set of inputs, we enter the inputs in, and we produce a set of outputs, and they validate that those outputs work. We also have our own known returns that we run through to make sure that as we go through our taxes, we don't break anything. So we can have over a thousand of these scenario tests that are run.
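
Conceptually, a scenario test is data-driven: each certification return supplies inputs and expected outputs, and a runner compares what the engine produces against them. This sketch assumes one JSON file per scenario with "inputs" and "expected_outputs" keys; the file format, directory layout, and flat tax rate are all invented for illustration.

```python
import json
from pathlib import Path

def calculate_return(inputs: dict) -> dict:
    """Stand-in for the tax engine; real calculations are far more involved."""
    tax = round(inputs.get("income", 0.0) * 0.10, 2)  # illustrative flat rate
    refund = round(max(0.0, inputs.get("withheld", 0.0) - tax), 2)
    return {"tax": tax, "refund": refund}

def run_scenarios(scenario_dir: str) -> list:
    """Return (file, field, expected, actual) tuples for every mismatch."""
    failures = []
    for path in sorted(Path(scenario_dir).glob("*.json")):
        scenario = json.loads(path.read_text())
        actual = calculate_return(scenario["inputs"])
        for field, expected in scenario["expected_outputs"].items():
            if actual.get(field) != expected:
                failures.append((path.name, field, expected, actual.get(field)))
    return failures
```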

00:10:16

You can see we put inputs on the top and outputs below, and we test whether the calculations were correct. We also have a term we use called diagnostics. This may mean that the calcs are correct, but something didn't look right. It could be, hey, did you know that you declared more in deductions than you actually had in income? That's logic we need to put in to make sure the client is aware, and the tax pro is aware, that something isn't going quite right. With all those changes, you can imagine that all those fields and all the various tax scenarios across federal and the various states require lots of testing. Traditionally, like I said, that was done manually. What we're doing now is automating that process, and through automation we're able to run tens of thousands, even hundreds of thousands, of tests in a matter of minutes, compared to the days, if not weeks, it took before. Now I'm going to turn it over to Tony, who's going to talk to us about our journey from client-server to the cloud.
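
A diagnostic differs from a failed calculation: the math can be right while the return still looks wrong. Here is a tiny sketch of the deductions-exceed-income example as a rule that emits warnings rather than errors; the rule and field names are hypothetical.

```python
def run_diagnostics(tax_return: dict) -> list:
    """Return human-readable warnings; an empty list means nothing looked off."""
    warnings = []
    if tax_return.get("deductions", 0.0) > tax_return.get("income", 0.0):
        warnings.append("Declared deductions exceed declared income; please review.")
    return warnings

print(run_diagnostics({"income": 30000.0, "deductions": 45000.0}))
# ['Declared deductions exceed declared income; please review.']
```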

00:11:18

Thanks, John. Before I review the tactical ways that we are modernizing both our tax platform and our culture, I want to go back and visit the decision on why we made this investment in the first place. If you look at two to three years ago, there were a couple of things that we really wanted to solve for. One, we wanted to be able to bring products and services to the marketplace sooner and reduce that delivery timeframe. Two, we wanted to be able to support our clients where they want to be supported, how they want to be supported, and when they want to be supported as it relates to their tax preparation services. So now I'm going to review several tactical ways in which we're modernizing not only our technology stack, but also our infrastructure and our culture.

00:12:00

We've built and maintained large monolithic tax applications over the last several years. This has resulted in a fear of change and also slower response times. I'm sure that's a familiar story for many of you, and I want to hit on a quick example of what that means for us. Historically, when we made a change to our tax application, even if it was on the homepage, or a tax entity that we had to get ready for the season, regardless of where that change was made, we'd have to test, build, package, and deploy the entire application. Sometimes that would take days, maybe even a week, to get out to the field. So you can see that our response times were very slow. Where have we landed? We've landed on a microservices-based technology that allows our teams to build independent capabilities and work more autonomously, so that we can deliver things in parallel.

00:12:55

We've separated our core tax application from our non-tax products and services, and we're changing the engineering culture. When you think about engineering with monolithic applications, we would just build stories sprint after sprint after sprint. Now what we're seeing is engineers going through a cultural mind shift, where they're starting to break down work into much smaller, more manageable components. They're also making sure that their WIP bin, their work-in-process bin, holds only a couple of specific items at a time, and additionally, they're making sure that their tasks are less than eight hours in duration.

00:13:33

We're also migrating over to a platform as a service. We're consolidating our development and testing methodologies, our frameworks, our tooling, our database models, and also our regulatory components. And I just want to pause here for a quick moment. I want you to think about the three tax products that we typically build and maintain each and every year. We have our DIY tax product, which is our online tax service; we have our software tax product; and we also have our brick-and-mortar, assisted tax product and application. Right now we have three very large teams that support those in a very siloed manner. That means we have three separate teams building similar capabilities in parallel. We've struggled with being able to integrate and deliver cross-functional initiatives year over year, and we didn't have a lot of synergies. But now we're at a point where we're building microservices-based capabilities: capabilities that are built centrally and are easily extended to all three of our tax products.

00:14:43

So I want to take a quick look at our infrastructure. When you think about the approximately 11,000 offices that we have open on an annual basis, that's a very sizable footprint. And when you think about the monolithic application that I addressed previously, that is a client-server-based application that needs to be installed locally. What that requires is an office server and several workstations to be able to serve our clients in the office, and we have to make that investment every three to four years in terms of refreshing the infrastructure. Then when you look at the VMs that we have to scale up as we get ready for the tax season, especially for our DIY product, we have to build those VMs in a way that supports our peak transaction volume.

00:15:28

We build and support those VMs so that they can handle that peak transaction volume at any point during the tax season, but those VMs are up and operational for the entire tax season, and we don't tear them down until the end of it. What that means is that we are not managing our environments in a very thorough and efficient way. So where are we now? We've made significant progress with our transformation over to the Azure infrastructure. We've implemented push-button automation in our L&P environment, where we can easily scale up and scale down as needed, saving us thousands of dollars each month because our L&P environment is not up on a 24-hour basis. We've also turned on autoscaling with AKS, and that's allowed us to deploy outside of our normal maintenance windows.
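
As a rough sketch of what "push-button" scale-up and scale-down can look like against AKS, here is a small wrapper around the Azure CLI's `az aks nodepool scale` command. The resource group, cluster, and node counts are hypothetical; this illustrates the pattern, not H&R Block's actual automation.

```python
import subprocess

def scale_nodepool(node_count: int) -> None:
    """Scale a (hypothetical) L&P cluster node pool up for a run, or down to save cost."""
    subprocess.run(
        [
            "az", "aks", "nodepool", "scale",
            "--resource-group", "rg-lnp",   # hypothetical resource group
            "--cluster-name", "aks-lnp",    # hypothetical cluster
            "--name", "nodepool1",
            "--node-count", str(node_count),
        ],
        check=True,
    )

scale_nodepool(10)  # scale up before a load-and-performance run
scale_nodepool(1)   # scale back down afterward, rather than paying 24/7
```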

00:16:18

We've typically deployed anywhere from 11:00 PM to 4:00 AM, but now we're seeing a much more flexible environment and infrastructure that supports our ability to deploy outside of that normal timeframe. We're starting to invest in and look at opportunities to create infrastructure as code, and we're leveraging Terraform Enterprise to do that with our assisted product, which we're building natively in the cloud. We're also using AKS to deploy and manage our containerized applications. We've introduced the Istio service mesh, which is helping us manage our microservices, and we're migrating over to Cosmos DB for a more flexible, scalable NoSQL database service.

00:17:03

So let's take a quick look at testing, and what it has meant to migrate from manual testing to more of a reliance on automated testing. I'll paint a quick picture of where we're coming from on the manual side of this. Typically we have two major releases as we march towards the beginning of tax season: a major release in the November timeframe and one in the December timeframe. When we look at the November release, that typically means we have about four to six months of change being introduced into that release. What does that mean for us? It means that around the mid-October timeframe, our development significantly slows in terms of the number of enhancements being churned out on a day-in, day-out basis.

00:17:47

Our focus then shifts over to code stability and application stability. What we do is execute a 5-4-3-1 regression test model, which is very manually intensive, and our developers are primarily focused on defect resolution. Then we rinse and repeat, and start that process over again in the December timeframe. As we get out of the December timeframe and look at some of our smaller releases, we go back to what I addressed earlier in the presentation, where we don't take as much risk and we make sure that each and every enhancement and defect that we review does not destabilize our tax products and services.

00:18:31

When you look at where we've matured to on the automation side of our testing, the quality of our unit testing has gone up significantly. I talked about the cultural mind shift our engineers have gone through and how we're breaking things down into much smaller, more manageable chunks. With the improved quality of unit testing and the increased coverage, we're seeing lower defects, much lower defects, especially when you think about repeat or reopened defects. Our smoke testing is starting to run on every pull request, specifically for our omnichannel engine. We're also investing in integrating automated UI and API testing frameworks, building these by leveraging Selenium, Cucumber, and other technologies. We also have a plan to integrate those testing frameworks into our CI/CD pipelines, so we can run that automation in our pipelines and ensure quality at the very first step of the process. We have thousands of test scripts being executed on every single change before it's committed to the master branch, so you can see our automation is playing a critical role in our ability to not only move faster but ensure quality at the early stages of our project delivery.
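
One common way to run smoke tests on every pull request while keeping the slower suites for nightly builds is to tag tests with markers and have the PR pipeline run only the smoke subset (`pytest -m smoke`). This is a generic sketch, assuming the markers are registered in pytest.ini, with a stub standing in for the real engine entry point.

```python
import pytest

def calculate_return(inputs: dict) -> dict:
    """Stub standing in for the omnichannel engine's entry point."""
    tax = round(inputs.get("income", 0.0) * 0.10, 2)
    return {"tax": tax, "refund": max(0.0, inputs.get("withheld", 0.0) - tax)}

@pytest.mark.smoke
def test_engine_returns_refund_field():
    # Fast sanity check suitable for every pull request.
    assert "refund" in calculate_return({"income": 50000.0, "withheld": 6000.0})

@pytest.mark.scenario
def test_full_scenario_suite():
    # Placeholder for the slower scenario suite run nightly, not on every PR.
    pass
```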

00:19:54

So let's take a quick look at our teams, applications, and org structure, and how we're migrating from where we are. Right now we have application-based teams, and cross-functional teams and cross-functional integration have been a challenge or a struggle for us in the past. We have to align our delivery timelines and our priorities, and it's been a struggle. Typically what you'll see is one team start delivery on a particular initiative with a dependency on another team, but those dependencies aren't delivered in parallel and prioritized at the same time. So what you'll see is that our testing teams are impacted and don't start testing at the same time, and the delivery of that initiative ends up taking several sprints rather than the one sprint it would take if we were to plan accordingly.

00:20:42

We're now organizing our feature teams around centralized capabilities. When you think about these centralized capabilities, they're things like printing, e-signature, data import, and data export. We're building out these central capabilities in a way that we can easily extend them, as I alluded to earlier, to the other tax products and services, so other people in the organization can easily use them because they're built in a centralized manner. These feature teams are being built with the skill sets they need to remove all their dependencies, all those handoffs I referred to earlier, and to let them move in a much freer, more autonomous way so they can deliver without oversight from top-down leadership. We're very early in the process, doing very small trials with feature teams at the moment, very much centered on our modern tax platform space, but as these feature teams mature, we're going to continue to look for opportunities to extend them further out into the IT enterprise. So now I'm going to turn it back over to John, and John's going to address the larger enterprise impact of our DevOps journey.

00:21:57

Thanks, Tony.

00:22:01

When we started this journey on modern tax platform a couple of years ago, there was a handful of people doing CI/CD, and we had a lot of teams doing variations of agile, plus a scattering of unit tests across applications, but nothing consistently applied; it was very ad hoc in terms of when it got applied. As we got excited about what we were doing, we knew the larger question was: what could we do with the enterprise? So shortly after getting our feet underneath us, we formed an internal DevOps user group. We brought engineers, analysts, and architects in, and we had them start doing demonstrations and demos for our teams, but more importantly, we increased the span to include all of IT.

00:22:56

That quickly started to catch on through the use of video recordings. We recorded the sessions so that people who couldn't make a session could go back on their own time and watch what was presented. This really started a good grassroots movement from the bottom level. We had demos on unit testing, demos on feature flags, demos around CI/CD, and demos around infrastructure as code, heavily engineering-focused, but nonetheless we started to get a lot of folks within the IT departments interested in some of the concepts we were embracing within our teams. After the DevOps user group got its feet underneath it, we noticed that everybody got really excited and wanted to go do things, but a lot of them didn't really understand where they could turn for help.
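
For readers unfamiliar with the feature-flag demos mentioned above, the core idea is small: gate a new code path behind a flag so it can ship dark and be switched on later without a redeploy. A minimal sketch, with a hypothetical in-memory flag store standing in for a real flag service:

```python
FLAGS = {"new_refund_estimator": False}  # stand-in for a real flag service

def is_enabled(flag: str) -> bool:
    return FLAGS.get(flag, False)

def estimate_refund(income: float, withheld: float) -> float:
    if is_enabled("new_refund_estimator"):
        return round(max(0.0, withheld - income * 0.10), 2)  # new path, off by default
    return round(max(0.0, withheld - income * 0.12), 2)      # legacy path
```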

00:23:50

Now, the user group was one avenue, but we also formed centers of excellence, and we started off with three of them. One center of excellence was really around agile project and product management, in terms of how we get information from our customers and organize it to be ready for build as soon as possible. Then we had more of an engineering one, focused on what changes we could make in how we manage code. We looked at and implemented tools to do static code analysis, whether for code quality or security, to eliminate late-stage steps: rather than waiting until just before we go to production to do all these security scans, let's bring that up front so we can move much faster and learn about those issues quicker.
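
A minimal sketch of that shift-left idea: run the same quality and security scanners locally or on every pull request instead of as a gate just before production. It assumes flake8 (quality) and bandit (security) as the scanners; the actual tools used at H&R Block aren't named in the talk.

```python
import subprocess
import sys

def run_static_checks(src_dir: str = "src") -> int:
    """Run static analysis up front; a nonzero exit means findings to fix now."""
    checks = [
        ["flake8", src_dir],        # code quality
        ["bandit", "-r", src_dir],  # security scanning
    ]
    status = 0
    for cmd in checks:
        status |= subprocess.run(cmd).returncode
    return status

if __name__ == "__main__":
    sys.exit(run_static_checks())
```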

00:24:37

They also started to come up with best practices and standards in terms of tool choices and technologies. We also uncovered that there were a lot of things nobody had spent any time looking into yet. So what the centers of excellence would do is have somebody take one on; early on, it was feature-flag usage. Teams took it, went away, and maybe met with a software provider or maybe worked it out on their own, but they would come back to the center of excellence, provide an update on how they'd progressed, and seek any guidance or feedback. After agile and engineering, we also did it with testing. We knew we had something going, and so IT leadership determined that the best course of action, since we wanted to see this proceed across all of IT, was to create a position: Director of DevOps Transformation.

00:25:32

This role's primary purpose was really twofold. One was to increase the amount of automated testing that we had across the enterprise. When you look at the test automation pyramid, earlier attempts at test automation really focused on just automating the manual user tests we'd always done, and those were flaky, they broke, and they typically wound up getting abandoned. So we needed to make sure we took a new approach to test automation. The second was to become that DevOps advocate for IT, as well as, increasingly, for the H&R Block enterprise as a whole. There were a lot of groups, whether engineers or project managers, that were used to doing things the way they'd done them for 10, 15, 20 years. That change management aspect was needed. We wanted to make sure that we brought folks along on the journey, and that we could explain the benefits of why we were going after DevOps and what part they might have in helping us achieve our goals.

00:26:32

Tony mentioned organizational design a little bit: how can we organize ourselves better in terms of who does the work? Typically you were very siloed: you were in engineering, you did engineering; somebody else tested your stuff, and you didn't necessarily do the analysis, somebody else did. So how do we develop multifaceted individuals that can do more than a single skill set? And when we look at it bottom-up versus top-down, a lot of what we tried was to do as much bottom-up as we could. We wanted people to feel empowered and feel that they could make decisions. At the same time, we couldn't have anarchy; we knew there were certain controls we needed in place, and so it was about defining what were standards and what became best practices.

00:27:18

Lastly, and this is something that we're looking at as an enterprise overall, is how we embolden our teams. For many years, teams had basically received a set of requirements and a delivery-date expectation and been told: now go do this. Now we're changing it around and asking teams to come up themselves with what we should do, what we should work on, and how we should solve the problem. Bringing the UX engineers, the UX designers, and the product managers into one conversation to come up with a solution and set our target goals is unique and different. Secondly, we do taxes, so we want to make sure that people can still be bold in how we do taxes. Now, we can't be bold in how we do calculations; at the end of the day, A plus B has to equal C. But how do we set the experience for the customer? What data can we gain that is outside what they provide us?

00:28:12

Where can we gather data from other sources to pull in, to make that tax interview as seamless and painless as possible for our customers? At the end of the day, we want to wow them. And like Tony mentioned, we get one chance, and potentially that's over a 12-day window. Some days it's a one-day window that you may get, and after that you don't see them again; hopefully you see them next year. The majority of the time we do, but we need to make sure we get that chance and they come back. Lastly is empowering teams. We're looking to make sure we've set things up so teams feel empowered: if they want to take a risk, take a risk. And we're doing that in many ways. One is rewarding people who've taken risks, making sure that if they fail, they can still get recognized, because they took a risk, they took a gamble, and that's what we're looking for.

00:28:58

Secondly, on decisions: too many times in our culture, and I think in a lot of cultures, all decisions get run up a chain of command to where somebody makes the call. And the reality is that person probably doesn't know the details of what the call should or should not have been. We're really trying to push those decisions down into the hands of the managers and the individual teams, where they can make the call. And as long as they can logically explain the decisions that were made, it's the right decision and we'll move forward with it. Thank you all this afternoon for listening to Tony and me. We really appreciated presenting at the conference, and we appreciate everything that Gene Kim and others have done that has helped us on our DevOps journey. Tony and I should now be online to answer any questions you may have. Thank you.