Las Vegas 2019

Project To Product - Beyond the Turning Point

In many enterprises, Agile and DevOps transformations are starting to get board level visibility as the need to become a software innovator becomes critical to company survival and success.


But how many of these transformations are on track in terms of producing the results that the business is expecting? How many of these organizations are tracking the results of these transformations, rather than just the activities, such as training and tool deployments?


The fundamental problem is that the way these transformations are structured has a very different meaning to the technology side than it does to the business. Key concepts that should be shared by both sides, such as technical debt, are not part of the common language. It is these disconnects that cause large-scale transformations to fall off the rails.

In this talk, Dr. Kersten presents the lessons and flow diagnostics learned from the past year of organizations' shifts from project to product. He will then present predictions and prescriptions to help our organizations and careers thrive beyond the Turning Point.


Dr. Mik Kersten

Founder and CEO, Tasktop

Transcript

00:00:02

Hello everyone. My name is Mik, and for the majority of my career I've been on this kind of manic quest to figure out a 10x increase in productivity in how software is built. I got this bug back when I was working at Xerox PARC, the Xerox Palo Alto Research Center, which brought us things that Gene likes, like functional programming languages, and some other really key concepts like the mouse and graphical user interfaces. I was working on a team whose goal was to find that 10x increase in programming languages by introducing functional concepts into object-oriented programming. This is why I get pretty excited when Gene starts to educate our community on things like passing by reference and passing by value, even though at first I thought it was kind of crazy to be going to that level of detail.

00:00:48

But what's been so fascinating to me is that the things we've learned in building software, and the technical practices around building software, are actually starting to permeate how we think about software delivery and technology and innovation at a business level. And this is why I think things like the five ideals in The Unicorn Project are such key tools for changing how we think about software delivery. So I continued this quest. From PARC I went to a company called Intentional Software and worked for Charles Simonyi, the former chief architect of Microsoft, on intentional programming. We tried to take that route; it didn't quite work. I then decided to try all these different 10x experiments by doing a PhD at the University of British Columbia. And I didn't want to take too long to do this PhD; I had no aspirations to be an academic.

00:01:33

So I asked my supervisor, how can I get this thing done in three years? I hadn't done my master's yet, so it was quite rushed. And she said, well, the only real way you can get through this that quickly is to do something that's a bit rare, which is to measure the outcomes of your experiments. You have all these weird, interesting ideas on how we can make developers more productive; if we can find a way to measure whether those are making improvements or not, maybe we can get you out of school and graduated faster. So Gail and I then spent all this time trying to figure out how the heck you measure developer productivity. And I think a lot of us have read about the ways you shouldn't do it, and books like Accelerate, which show us the fallacies of counting lines of code and function points and those kinds of things.

00:02:20

And we came up with this novel measure; we called it the edit ratio. It actually allowed me to evaluate these experiments with professional developers, and it did get me out in 3.3 years. Through this I learned the power of being able to measure all these great ideas that we have and figure out which ones work and which ones don't. You'll recall that yesterday Gene mentioned these Kuhnian moments, these paradigm shifts. One thing that's been so fascinating to me is that each one of them has been underpinned by some major innovation in how we measure, whether that's telescopes or microscopes or tunneling electron microscopes. In order to really improve with the scientific method, you need to measure. And when we find new ways to measure, we find entirely new ways to innovate and to drive technology. So I'd like you to consider for a second, in terms of your practices, your organizations, your own work:

00:03:10

How do you measure what you do? How do you measure how successful your organization is? And what is it that you measure? What are those core metrics? Are people seeing them differently, or are people seeing them the same way? I went on this quest, the one that led to the book Project to Product that I'll be telling you about today. Three or four years ago I started doing all these customer meetings, for the first time with more senior IT executives, as organizations were getting more interested in turning into software innovators. And I kept asking these questions: what do you measure? How do you measure value? And, bizarrely, I would get these blank stares from IT executives who are spending into the millions, or hundreds of millions, and sometimes billions on their growing software portfolios. I would ask this question all the time: where is your bottleneck?

00:03:58

Because of course, if you've got a large production process, even if it's a creative production process, we know from the last hundred years or so of manufacturing that understanding where your bottleneck is should drive all of your investment and all of your thinking as a leader, in terms of how you're removing impediments and where you're investing. And again, every single time I asked this question, I would get blank stares back. So try this when you get back to your organizations: ask the most senior leaders, where is the bottleneck in your portfolio? And again, you'll get these oddly blank answers to what should be the most obvious question. You'll get all sorts of answers: well, we have to invest in more developers.

00:04:41

We think we've got a problem over here, the mainframe's holding us back, and so on. But once you start digging in and asking more of the whys, you'll see that there isn't a clear understanding of where the bottleneck is, because there's no clear understanding of what it's a bottleneck to. What are we trying to optimize for? Are we trying to go from 50 to 5,000 builds a day? Are we trying to get the number of story points we're finishing a hundred times higher? I remember when I first told my CFO about story points five years ago; he looked at me like I was crazy, and I think he briefly regretted joining Tasktop, because he'd actually come from a lean manufacturing background, and those organizations had such a clear notion of what's produced, what costs are, and what value is.

00:05:22

So I realized that something was really wrong. Even a couple of years back, Nicole Forsgren and I said, okay, we've got our ways of measuring that we studied for our PhDs; let's put our heads together and come up with some principles around DevOps metrics. We put an article in ACM Queue on the fact that your biggest mistake might be collecting the wrong data. Nicole and DORA had all this really important survey data, and we realized that what was actually happening in a lot of companies was that people were taking that survey data and those metrics and completely misapplying them, in the same way I was seeing organizations misapply the metrics they were getting out of tools: misapplying, basically, user story open to user story close in Jira as representative of customer value and customer delivery.

00:06:05

So I realized there's something very core here: by not being able to measure, and not having a common representation of how we measure value, organizations were going astray with the best of intentions. And so I'll recount one of the, to me, most poignant stories from Project to Product. A top-25 bank, on its third transformation, with a billion-dollar budget. So the costs here are extremely large and extremely clear. Everyone's bought in; the executives, the board, the CEO want to transform. They want to bring all these great things we've learned in the last 20 years of Agile and the last five or ten years of DevOps to the organization to make it innovative. Again, a multi-billion-dollar budget overall, a billion-dollar transformation budget. And then I got to learn how they're measuring this transformation and its success.

00:06:55

And they're doing it with this project management layer between what the business sees, how they spend those costs, and how they decide whether things are on track in this transformation or not, and what IT is doing. Two years later, having learned how to do these sets of interviews for my PhD, I did another set of interviews with executives, engineering leaders, and IT leaders in this organization, asking them how the transformation went. And every single person said: the transformation has been called successful by all its measures, but we're delivering a lot less to the business than we were before. And people on the business side said IT did a whole bunch of things for itself, and we now feel like we're getting a whole lot less than we were before. So I realized, okay, maybe there really is a fundamental measurement problem here if this whole thing was deemed a success and there's less value being delivered than there was before.

00:07:49

This is where I learned the concept of watermelon projects, where everything's green on the outside for a year or two, and then of course the time comes to ship, time to complete the project, and it turns out it was all red on the inside. There's just something fundamentally wrong, I learned, with this lens of tracking activities and only tracking costs: these proxy metrics for transformations will actually destroy them by not giving you any sense of when things are going sideways. Gene told that Nokia story this morning; same thing. Proxy metrics measuring how many people were trained on Agile and how many people were using the tool chain completely led the company astray and resulted in a whole bunch of bad decisions, because where they should have been investing was in the technical debt in the architecture of their core platforms, of the Symbian operating system.

00:08:38

So we're now getting to the point where, my sense is, if we don't come up with a common set of metrics, a common language, and a common way of measuring productivity and value in software delivery, rather than falling back on these project plans and cost centers, more and more companies' stock valuations will start looking like this. We've seen early warnings in retail. You've got a company there on the left and how their stock price changed over the course of the decade. They've become so good at measuring and managing to value, and at becoming, to Gene's point, this dynamic learning organization, that they're able to align their software architecture through microservices, their organizational structure through two-pizza teams, and their product value streams, the way they innovate for their customers, to create a disruption machine, investing heavily in their core and producing things like AWS.

00:09:26

Then you've got these other companies who are simply being disrupted by that, because they're trying to do the same things but not succeeding. Some are succeeding more than others, like the one to the right of the chart, but it's just not happening quickly enough for them to avoid being disrupted. So this rate of disruption is now at a pace, and I think we need to keep reminding ourselves of this, where it's accelerating, not slowing down. Half of the S&P 500 is projected to be replaced over the course of the next decade, and something has to change. What's so interesting about this is that almost no industry is immune. Even banking, where you'd think they would have a fair amount of security given the amount they invest in technology and the fact that they've got significant control over how transactions are processed.

00:10:13

Tech is, of course, coming for banking next. And what's even more stark to me was this study published on Bloomberg: collectively, banks have spent a trillion dollars on their digital transformations. That's about one percent of the world's GDP, so that's a lot of the world's wealth, and few have reaped the rewards, and the rewards are simply not measurable. Back at that bank story, I basically felt like I saw a billion dollars of value go up in flames, which to me was a really big problem because, again, everyone wanted the right thing. We could be seeing a trillion dollars go up in flames, because these transformations are being steered the wrong way. And it's not that we as a community don't understand what the right things to do are; it's that the business has not adapted, especially in traditional organizations, whereas it has in technology companies.

00:11:05

So I got to thinking, and this is the quick history lesson part, has this happened before? As I was writing Project to Product, Gene introduced me to the work of Dr. Carlota Perez. Gene showed you this this morning: what Carlota showed us is that, looking back, there have been these five long waves, also called Kondratiev waves, of innovation. And what's so interesting about her work is the way she's modeled how these roll out. They always roll out with some new means of production becoming cheap, and some small set of companies mastering that new means of production. That could be the 2,000 fintech startups we have today, or the even faster-growing number of insurtech startups on the market. Then you get into this turning point, where some of those companies learn how to truly master software at scale.

00:11:54

And at that point the creative destruction just accelerates, and more and more companies that are traditional businesses from the last age get displaced. So where are we today in terms of this wave? Dr. Perez, with whom I've been continuing the discussions regularly throughout the book and through to today, still sees us, and all the research I've seen still places us, directly inside this turning point, where a small number of companies have become extremely effective at managing software delivery and innovation at a business level and at creating these amazing offerings that we use today. Those are the FAANGs and Microsoft, so Facebook, Apple, Amazon, Netflix, and Google, and the BATs, Baidu, Alibaba, and Tencent, in China. These companies only exist in the US and in China, and taken collectively they currently have the size of the economy of Japan in terms of the amount of wealth they've accumulated to drive this kind of innovation.

00:12:53

So taken collectively, those nine companies are equivalent to the third-largest economy on the planet, and they are leaving everybody else in the dust. What Dr. Perez predicts is that at some point other companies start to learn; I think it's been happening quite slowly in this particular turning point, but other companies start learning how to be software innovators. And that's really the amazing thing about this conference: this is where we learn how to do that, how to take those practices that work in tech startups and unicorns and tech companies and bring them into our organizations. As Gene said this morning, the faster and more effectively we can do that, the quicker we will get into this deployment period, where production capital actually takes over rather than financial capital, and where you end up with what can be a golden age of innovation, as more and more companies adopt these new ways of working and bring them into their particular market segments.

00:13:45

One thing that Dr. Perez also notes is that with each of these waves comes a new kind of managerial innovation. And one thing I want to make perfectly clear, through all of my learnings, is that the managerial innovation that's going to get us through this turning point is not project management. It could be what Gene showed this morning, but it's not treating people as cogs in a machine. That works for building the Hoover Dam, and Gantt charts were created for building the Hoover Dam, but if you've got creative and complex work, it just doesn't apply. Even Ford realized that he needed to invest in his staff to help them master their work, because it was more complex work. And of course, what we do in software is even more complex. So I got to thinking, how can we actually shift?

00:14:33

How can we make progress through this? How did some of the other companies who made it through previous turning points survive? Because we went from a world where, when I opened up a Life magazine from the 1930s at my mother-in-law's house, there were about 50 car brands in there that I did not recognize, to the car brands that we recognize today. And to learn more about this, I visited one of our largest customers, BMW Group, who you heard from this morning, and their flagship Leipzig plant. As I walked through it, I learned something that I just had not learned or understood from all that lean literature and Toyota Production System reading I had done, which is this embodiment of lean principles, and how it really is different from how I was thinking about software.

00:15:18

So first of all, the lean principles from Jim Womack's book: precisely specify value by product. Product embodies that fifth ideal of customer obsession. Customers consume products; you can call them services, you can call them different things, and you can't ignore internal customers, but the lean principles as stated are: precisely specify value by product; identify the value stream for each product, which is why you're hearing this word more and more, thankfully; make value flow without interruptions, because those three ways of DevOps are still key to flow and feedback, and that's how you get to continual learning; and let the customer pull value from the producer. So production, and delivering customer value, is all about pull. It's not about your internal activities; it's about what's being pulled by your customer and by the market. So contrast what we see in IT today with what I was seeing in one of the most advanced manufacturing plants on the planet: there, integrated production lines that were beautifully orchestrated; here, disconnected tool chains and handoffs, and basically people throwing requirements, stories, whatever their abstraction is, over the fence to IT.

00:16:27

Or the support desk throwing a bunch of tickets or incidents over. Again, this is exactly what we're trying to break through. But what was so interesting is that everything in car production is managed as products; that's the primary concept and abstraction, rather than project. It's about flow and value rather than about tracking activities. And this is a really interesting one: in car production, everything is architected around flow. So rather than this one massive enterprise architecture that's going to meet every current and future need, which is what I always liked to build in terms of the SDKs and frameworks I was building, you actually make trade-offs specific to the flow for that product. Which means you might actually have some redundancies in the architecture; which means you might not go containerize everything tomorrow, but you'll focus on a cloud-native architecture for a fast-moving product value stream while using a strangler pattern for something that could actually stick around a bit longer.

00:17:18

And this is the key one: everything was optimized end to end. So this focus on telemetry local to one part of the value stream, be that change success rate or user stories completed, is all secondary. We still need to track all those things, but the primary thing being measured is lead time. That's the number one predictor of company performance in automotive. And once you start optimizing and measuring end to end, and this again is where measuring is the key thing, you start actually seeing the customer's perspective of time and value, and you start measuring to business results rather than these proxy metrics. So the core story, and I think the core contribution of Project to Product, is to provide a framework to identify what flows in software delivery, and to build on some of the steps and missteps of what we've learned through academia and industry on this.

00:18:10

And to provide a set of very simple customer-centric and business-centric abstractions that our leaders can understand. I think that we as technologists already get them. At Tasktop we've got dozens of different work item types, roughly based on the Scaled Agile Framework, and everyone understands how those work. But again, my CFO did not think that any of those things made sense. Our developers and product managers need to use story points; he did not quite see the value in them. But if we take a customer-centric view of value, if we look at what's being pulled by our customers: of course our customers want more features. That's what makes them decide to use your mobile experience rather than the competitor's, because you've created a user experience that delights them. So customers pull features, they pull value. They don't pull releases; they pull what's in those releases, and you just have to make sure they're able to pull it very frequently.

00:19:01

Defects, because sufficiently complex software has defects and incidents, and our ability to resolve them quickly, focusing on mean time to repair rather than mean time to failure, is critical; our value streams need to be optimized for that. Risks are now a first-class part of software delivery: data privacy, compliance, all of those things that make sure we're providing a trustworthy user experience to our customers. And debts: technical debt, infrastructure debt, value stream debt itself, the problems we have in the flow within our organization, are also critical. So these are the four flow items, and they're mutually exclusive and comprehensively exhaustive, which means if you do more of one, you do less of the others. If you've just had your teams implement a bunch of compliance work for new regulatory rules such as GDPR, you're going to get fewer features.
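As a concrete illustration of that trade-off (my own sketch, not something from the talk; the enum names and sample numbers are hypothetical), here is how the four flow items and a flow distribution for one release might be represented:

```python
from collections import Counter
from enum import Enum

class FlowItem(Enum):
    """The four flow items; every unit of work is exactly one of these."""
    FEATURE = "feature"   # new business value pulled by customers
    DEFECT = "defect"     # quality problems visible to the customer
    RISK = "risk"         # security, privacy, and compliance work
    DEBT = "debt"         # technical and infrastructure debt reduction

def flow_distribution(completed):
    """Share of completed work by flow item type for one product value stream."""
    counts = Counter(completed)
    total = sum(counts.values()) or 1
    return {item.value: counts.get(item, 0) / total for item in FlowItem}

# Hypothetical release: heavy compliance (risk) work crowds out features.
release = ([FlowItem.RISK] * 30 + [FlowItem.FEATURE] * 45
           + [FlowItem.DEFECT] * 15 + [FlowItem.DEBT] * 10)
print(flow_distribution(release))
# {'feature': 0.45, 'defect': 0.15, 'risk': 0.3, 'debt': 0.1}
```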

00:19:53

And the ability of our business leaders to understand those trade-offs is absolutely critical, as I'll illustrate through the next few stories. So the goal of the Flow Framework is to provide today's business leaders and today's technologists with a common language for understanding that, and a set of abstractions for implementing this new approach to measuring software delivery. When you think of those four flow items, you might still be using existing agile frameworks such as SAFe; SAFe 5 is actually quite compatible with the Flow Framework, as are other approaches. Think of this as the layer above, as the layer that you expect your business leaders to understand: for example, as a way of elevating the need to understand technical debt to the business level rather than just the level of the technologists.

00:20:40

The way it works is, at the bottom, and this is a key thing, the ground truth of the work we do is captured in those tools. It's captured in the development tools, the support tools, the planning tools, and so on. It's all there. And this is the amazing thing: we think that software is this intangible thing, but the information we actually have in our tool repositories is at such a high fidelity that we can find that information and the flow within those tools. Then the key thing is that we have to look beyond the tools and actually model and define product value streams. And one of the big misconceptions, which I'll get back to towards the end, is that there's this one value stream for your company. No: if you're thinking of customers and product innovation, you've got a set of these product value streams. You define them, and then in the Flow Framework what you do is measure flow distribution.

00:21:26

That's how much of each of those flow items you're going to focus on in this release. This isn't your backlogs; you still use your agile tools for tracking your backlog sizes. This is what's actually flowing through a product value stream, so that you can look at how much focus should go to technical debt versus feature delivery. Is it 20%, as I heard from the last three people I asked, or do I actually need to tailor it to the needs of this product value stream, as I'll show you in a moment? The flow metrics are: flow velocity, a throughput measure of how much of each of those flow items you got done over a period of time; flow efficiency, the proportion of time spent in active states versus wait states; flow time, how long it took end to end to deliver; and flow load, a work-in-progress metric of how much load you put on the value stream. And, this is the key part, these are correlated to business results.
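To make those four flow metrics concrete, here is a minimal sketch of how they might be computed from completed work item records; the record fields and the way active time is bookkept are my own assumptions, not a prescription from the Flow Framework:

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class WorkItem:
    """One flow item moving through a single product value stream."""
    item_type: str              # "feature", "defect", "risk", or "debt"
    started: datetime           # accepted into the value stream
    completed: datetime | None  # None while still in progress
    active_time: timedelta      # time actively worked on (vs. waiting)

def flow_velocity(items, window_start, window_end):
    """Throughput: flow items completed within the time window."""
    return sum(1 for i in items
               if i.completed and window_start <= i.completed < window_end)

def flow_time(item):
    """End to end: acceptance into the value stream until completion."""
    return item.completed - item.started

def flow_efficiency(item):
    """Fraction of flow time spent actively working rather than waiting."""
    return item.active_time / flow_time(item)

def flow_load(items, at):
    """Work in progress: started but not yet completed at a point in time."""
    return sum(1 for i in items
               if i.started <= at and (i.completed is None or i.completed > at))
```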

00:22:13

So: a value metric for each product value stream, specific to that customer. This could be revenue; for an internal API, it could simply be adoption of that new API as you're trying to move off some legacy data backends. The cost for that value stream: not the cost of ops, not the cost of infrastructure, not the cost of development or testing, but the end-to-end cost of the whole product value stream, of everyone involved (we actually measure hosting costs separately for product value streams). A quality metric. And then this absolutely key one, a happiness metric: the happiness of the staff working on that product value stream, and I'll show you in a moment why that's so important. So here's a story told through the lens of the Flow Framework. This is something that was shared by Nationwide Insurance a year ago at this conference, where we were working with them and helping them understand, from the business perspective and the customer's perspective, how long it took to deliver value to the customer.

00:23:10

We did some measurements, and the number averaged out to 120: it took 120 days to deliver value to a customer, which might seem like a long time. And of course the reaction from business leaders and CIOs is, okay, in that case we need to hire way more developers so that we can get things to customers faster. Now, when you actually start measuring a value stream, and you measure across these value streams, you can look a layer down. So we asked, how much of that time is spent in development? It turned out that 2.5% of the time was spent in development, only about three of those 120 days. So you can hire three times the developers and you're going to get basically no more velocity out of that, because the bottleneck here was not development; it was all the wait states developers were stuck on.
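A back-of-the-envelope way to see why hiring developers can't fix this, an Amdahl's-law-style argument using the numbers above rather than anything calculated in the talk:

```python
# If only 2.5% of a 120-day flow time is active development,
# even tripling developer capacity barely moves the end-to-end number.
flow_time_days = 120
dev_fraction = 0.025

dev_days = flow_time_days * dev_fraction   # 3 days of development
wait_days = flow_time_days - dev_days      # 117 days of wait states

best_case = wait_days + dev_days / 3       # development runs three times faster
print(best_case)  # 118.0 days: the bottleneck is the wait states, not development
```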

00:23:56

And it was different in different value streams: in some cases waiting on security reviews, in some cases waiting on screen designs, because this company was constantly trying to innovate new user experiences. So the really important thing is that taking this different way of measuring time, not measuring only code commit to code deploy, not measuring only how long it takes to open and close a user story, but measuring flow time end to end, was actually critical. And note that there's a distinction in the Flow Framework between lead time and flow time, because we have so many more requests on our IT backlogs than we actually have capacity to do. Lead time starts when a customer makes a request. Flow time starts when you take work into the value stream, the moment you start doing any analysis on it. You measure both, but you optimize to flow time, again because of the size of those backlogs.
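A minimal sketch of that lead time versus flow time distinction, using hypothetical timestamps:

```python
from datetime import datetime

# Hypothetical dates for a single customer-requested feature.
requested = datetime(2019, 1, 10)   # customer makes the request (enters the backlog)
accepted  = datetime(2019, 4, 1)    # work is pulled into the value stream (analysis begins)
delivered = datetime(2019, 5, 10)   # value reaches the customer

lead_time = delivered - requested   # the customer's view of time
flow_time = delivered - accepted    # the value stream's view of time

print(f"lead time: {lead_time.days} days, flow time: {flow_time.days} days")
# lead time: 120 days, flow time: 39 days
```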

00:24:46

And the key thing here, relating this to Gene's five ideals in The Unicorn Project, is that it's critical we take a customer-focused view of time and of delivery, rather than focusing on local optimizations of the value stream and making development go twice as fast while delivering no more value to our customers. So of course, as you're doing this, the key question when you're trying to get from 120 days to, let's say, 14 days, is: where is my feature delivery bottleneck? We dug a little deeper, and in this case we realized that on a couple of the key value streams that were very critical to the business, because they were about the innovation, there were too few designers, which was causing too many wait states for developers. There's been this pretty big trend, of course, as our systems of engagement get more sophisticated: companies like Atlassian have gone from one designer for every 25 developers to ratios like one to nine, and even smaller in some cases, to become more innovative.

00:25:44

And by actually digging into where those wait states were, you can do this with data, because in some product value streams you might be perfectly fine in terms of your UX bandwidth, while in others you're completely constrained, and that's where you need to hire. You actually get that metric through flow efficiency. And again, it provides a data-driven way of approaching the next ideal, improvement of daily work: not only at the individual level, which individuals tend to be very good at because they know what they want, but at the level of how you invest in the organization across multiple sets of teams by identifying their bottlenecks. And you do that, again, by measuring flow efficiency: the wait states versus active work states across the end-to-end value stream, not just an agile team's work. So another interesting one is: why is delivery slowing as we add developers?

00:26:32

Too often the answer to this, actually the most common answer I've seen in deployments of the Flow Framework over the last year, is that the software architecture has been created, again, as some massive enterprise architecture rather than being aligned to value stream flow. So we've now taken this principle that the only time you invest in architecture is if it can increase future flow, rather than making these overly generic architectures that developers like myself really tend to enjoy building. The way you track this, the way you plan this, is by focusing on flow velocity: understanding whether we will actually be able to get this next set of features done faster if we invest in cloud-native technology and A/B testing. And this is exactly how you make the business case for those kinds of investments.

00:27:16

So again, this is back to that ideal of locality and simplicity, where we want this not just at the level of the code base: we want that next set of innovation for the customer to be much easier because the value stream and the software architecture are aligned. And another key one: why are the teams working on this product less happy? The number one reason we've seen is technical debt causing overly high flow load. There's too much of a backlog of features, the business is always feeling like they're not getting enough, and, as you saw Gene touch on this today, it's this death spiral of more and more load, meaning more and more thrashing and less and less output. And again, what we want is that focus on flow and joy, on that ideal.

00:28:00

And to do that, well, overloading value streams and not investing in things like tech debt reduction does not give you a chance to do that. There are statistics showing that developers working on tangled architectures are more likely to quit an organization, so making these investments is absolutely critical. I'll give you just a quick story of how we do this at Tasktop. We connect our value stream, and it doesn't really matter what the tools are; you connect across the tools to get these end-to-end flows. Our flows tend to start in tools like Salesforce and the support desk, and go into product management tools, Jira, and so on. So here's a quick story from earlier this year. We were putting out the very first release of a new product, the first early access release, the first beta release.

00:28:45

And of course we did the thing that I was just telling you you shouldn't do, and that you would think we wouldn't do, which is that we overloaded that product value stream with features that felt very critical for that first release. We knew that the more of those features we delivered, the more successful that early access release would be. So you see the flow load spike at that point. Once that happens, you can see something very interesting happen, basically in real time, to flow time: flow time gets about 10x slower, which means developers are thrashing and we're getting way less done. Just completely counterproductive. We all do this, but the difference for us was that we saw it, and we were able to adjust at a business level and actually help the teams, which of course knew what was going on, which was that they were now thrashing and doing way too much at once.
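The effect he's describing is consistent with Little's Law for a stable system (average flow time is roughly flow load divided by flow velocity); a tiny illustration with made-up numbers, not data from the talk:

```python
# Little's Law for a stable value stream: flow_time ≈ flow_load / flow_velocity.
def avg_flow_time_weeks(flow_load, flow_velocity_per_week):
    return flow_load / flow_velocity_per_week

print(avg_flow_time_weeks(flow_load=20, flow_velocity_per_week=10))  # 2.0 weeks
# Overload the stream without adding capacity; thrashing typically lowers velocity too:
print(avg_flow_time_weeks(flow_load=120, flow_velocity_per_week=6))  # 20.0 weeks, ~10x slower
```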

00:29:33

Their WIP, their flow load, was way too high. So of course the only way to really fix this is that we actually do need more capacity, right? So we start hiring and bringing staff onto this product value stream, because that's the only way we can get more capacity. And something interesting happened, because we do take this locality and simplicity ideal of Gene's pretty seriously. For this new product we moved to a new architecture and some new programming languages, Scala and TypeScript, in addition to all the Java that we've got. Myself and our VP of Product, Nicole Bryan, as the set of teams on this product value stream decided on this architecture, were quite concerned that we would have trouble finding Scala developers.

00:30:16

And so of course, as soon as we saw this, we were like, okay, you all made a big mistake; we should not have gone with Scala. I think at one point we were basically hours away from just saying, top down, that's it, Scala was a mistake, we're pulling Scala out. The teams said, okay, calm down; just give us two or three more sprints, we really do think people can ramp up, it's not as bad as you think. And we actually calmed down, and we saw the flow velocity go up as people really did get ramped up. It was not just the promise of a ramp-up; we saw the ramp-up. And so they were able to very easily prove to us, their leadership, that we were wrong and they were right.

00:30:59

So again, this is the kind of thing you can get when you've got a common set of metrics and a consistent way to measure. The whole goal is that we get away from these anecdotal ways of looking at delivery, and we actually start being able to track flow in a much more tangible and common way across our value streams: to understand flow distribution, flow load, and flow efficiency, and in the end do what we need to do for our teams, which is remove bottlenecks, wait states, and impediments for them. This is what will enable us to transition from the business being waterfall, while we've got agile and DevOps principles kind of in the middle, into this end-to-end flow orientation with a common way of measuring and a common set of metrics, the whole point being direct visibility into the product value streams rather than these layers of indirection. It's really a new way of looking at budgeting, because once you've got this flow orientation you start being able to invest incrementally, and you start being able to do interesting things: you don't want to blow up your teams, so you stop doing the crazy project management notion of assigning developers and other IT staff to multiple projects.

00:32:04

Everyone works on a single product value stream, but the teams become a unit and the product value streams become a unit. So, for example, we will actually move teams between product value streams when, let's say, we need an API team to be closer to a delivery team. Over the past year since the book was published, I've absolutely found some interesting pitfalls that I want to quickly take you through. Defining the entire product model upfront is one of the key ones: thinking that you can know exactly what your end-to-end portfolio of products and sub-products is going to be without measuring. And it's amazing; again, that's the project mentality. What you want to do is start small, then measure and adjust and refactor it every quarter, bringing those programming principles in. Another pitfall is waiting until things are done, until the teams are agile, or a hundred percent agile, or 70% agile, or whatever those numbers are, before you measure. You can measure waterfall if you want.

00:32:54

It actually gives you a business case for stopping waterfall, because you want to reduce flow time. And then this is an interesting one: over-focusing on just the external value streams, just the business-facing and customer-facing ones. Your internal value streams are absolutely critical; they're basically what drive the productivity of your entire organization, of every developer building a business application. So understand that value stream architecture, and note that the technology giants and unicorns actually invest much more in the value streams lower down, including the tool chains themselves; that's what drives productivity in the value streams higher up, whereas in enterprise organizations that tends to be inverted. So basically, I think where we need to head right now is to look at how we bring our organizations through this turning point to get to this period of wealth generation.

00:33:45

I think measurement, and the right kind of measurement, is critical to that. And just to wrap up, you do that by moving from projects and cost centers to these product value streams, from silos and proxy metrics to flow metrics and business results, and from fragmentation to an integrated, connected value stream network that allows your people to collaborate and to thrive. So, with that, you can find the book on Amazon; all author proceeds of the book go to programs supporting women and minorities in technology. Thank you very much.