Las Vegas 2019

Forecasting Using Data—Using Historical Data for Demand, Capacity & Project Planning

This session teaches you how to forecast capacity or delivery dates using a team's historical data. Probabilistic forecasting allows planning to take into account uncertainty and things that might happen (risks), and helps communicate those plans.

It will cover:

- Probability and Probabilistic Forecasting basics

- How much data is needed for reliable forecasting

- Predicting the arrival rate of incoming work

- Predicting the capacity of teams receiving work

- Building achievable plans across one or multiple teams

- Tracking progress and communicating status

Troy Magennis

President, Focused Objective LLC

Transcript

00:00:02

Hi, everyone. Welcome. Welcome to the forecasting talk. First rule of forecasting: if you start late, you end late. So there you go, my work here is done. You can download the slides right now - I know that's what most people want - you can download them now and then leave and get to lunch early. And if you were planning to take photos of the important slides, I thought I'd just put that link out there straight away. We start all of our talks here at DevOps Enterprise Summit with what our passions are, and I'm a little bit too passionate about data visualization, which is why I got married in my forties. But helping people see data, understand it, interpret it, and make good decisions with it is something that I paid too little attention to as a developer.

00:00:50

And I guess, as I've gone through my career, I've realized just how important it is - sharing information with others is a very important skill to have. And there's another passion I have, which is more of a pet peeve, but I had to write it in a positive fashion: I see people trading value for predictability too eagerly. It's all about predictability, predictability, predictability - which is why a lot of you come to a talk on forecasting, to learn how to improve it. But when we do that, we actually trade away possible value, because sometimes the things which will drive our company the most value are really risky and really uncertain. So we need a way to help others understand that we're doing the most valuable stuff, and we're going to screw up from time to time. This isn't live, is it? I don't know if I'm allowed to swear. Again, everything I'm going to mention in this talk is free and available. You can go and grab it and contact me - there are cards. You can get the slides here, and there are cards on the edge of the table by the water cooler on the way out, so you don't have to take notes or take photos unless you really, really want to.

00:01:56

So I want to restate what forecasting is about, to me. I know forecasting is usually about getting people to set commitments so they can be belted up later about missing them - but I don't want you to think about forecasting in that vein. I want you to think about it like this: we need a forecast to understand that we don't know enough about a system that's delivering, or about an input, like where our help desk tickets are going to come from. We need something to compare against, to gain the insight that we don't know as much as we thought we did, or that the system has changed underneath us and we don't know why. So forecasting is about setting a baseline of what reality could be. And it's not a single-occurrence event. What you have to do is model, forecast, compare against the reality that actually occurred, and repeat.

00:02:47

A forecast can never be right. The forecast will always be wrong. The model will always be incomplete, because new factors take their place. So our job is to whip around this cycle as fast as we can. And you can't do that if you forecast three months before you start a project and then complain that you didn't hit the date nine months later - it doesn't make sense, does it? But that's exactly what we do, day in and day out. So I don't care that models are wrong. I expect them to be wrong. I expect my forecast to be wrong, because I'm using it as a learning tool to tell me whether I understand the system well enough to even go on the record about making it predictable. And if it goes wrong, and it's for the reason of delivering something of high value - good job, team.

00:03:31

So this is what I want you to think about: when you're forecasting something in the future, you should only believe it if you were able to forecast something in the past. So the first check you have to do, in that forecast-check-and-compare-with-reality loop, is to go back a period of time - go back six months and use, say, three months of data from there. Use that data, build a model, and see if you could actually predict what you know actually happened in your system over some recent period of time. Only if you get that model right - and you go around the loop a few more times to work out what your biggest source of error is - should you even consider forecasting into the future. And then what will happen is that, over time, your model will drift, because the system will change, more unplanned work will come in, or you will take on riskier work.
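A minimal sketch of that backtest idea, assuming a made-up list of weekly throughput numbers (the variable names and data here are illustrative only, not from the talk's spreadsheets): fit the naive "project the average" model on the older weeks, then check it against the recent weeks you already know the answer for.

```python
# Backtest: fit on older data, "forecast" the recent period we already know,
# and measure how wrong we were.
import statistics

weekly_throughput = [12, 9, 14, 11, 3, 13, 10, 15, 12, 8, 14, 11]  # oldest -> newest (hypothetical)

train, holdout = weekly_throughput[:-4], weekly_throughput[-4:]     # hold back roughly a month

forecast = statistics.mean(train)                  # the naive "project the average" model
errors = [actual - forecast for actual in holdout]

print(f"forecast per week: {forecast:.1f}")
print(f"actual recent weeks: {holdout}")
print(f"errors: {[round(e, 1) for e in errors]}")
# The big misses are the interesting part - they point at seasonality or
# special causes the model doesn't yet understand.
```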

00:04:29

And your forecast will no longer align with your model - which is great information, because it means you now have to think about whether that was an intended consequence or an unintended one, and react accordingly. That's a lot of what forecasting is about. When we look at the work that Deming and Shewhart did, it was about telling people to calm down when things went wrong - just let the system settle - and about separating special cause from common cause variation. We want to be able to detect very quickly that something special, something new, something unanticipated has happened. So here's the first takeaway - my takeaways are going to be on black slides and all the content is going to be on white. Forecasting is detecting earlier that you're wrong.

00:05:19

It's not about setting a date and expecting to hit it. It's about knowing, on the journey towards that date, that you've deviated from the path you expected. And until you know you can forecast something slightly back in history, don't contemplate forecasting into the future with any anticipation of success - it's not going to happen. So we're going to move on now. In a half-hour talk I'm probably not going to teach you how to do forecasting; that's not my intent here. I'm going to try and set the scene about what you should expect, what you can expect, and who to ask when you've got questions and problems. Okay. In our world, the most common way of forecasting is to just extrapolate what has happened in the past going forward. And in this case, this was the cumulative throughput for a hundred teams inside a software organization.

00:06:09

At the beginning of 2012 they were doing roughly 2,000 items a week, and by the end they were doing about a hundred thousand items per week. Looks like a nice stable system - who thinks that's a nice system to forecast? I've got a few hands going up there. So here's the data that drove that cumulative projection. Notice it's got some weird stuff going on there, right? As a data analyst or a forecaster: which trend would you like me to project? Which one gives you the answer you most likely wanted? So, depending on the timeframe you're looking at when doing a forecast, the methods where we just take a projection line and extend it may or may not be the right tool for the job.

00:06:56

And we all worry about, well, we've got to estimate stuff - if we just estimated better, it would all turn out well and worthwhile. Okay, stick with that belief. All right, so that dip is around the December time period. Anyone got a guess as to what that was? It was holidays - or rather, related to holidays via the HR policy. Notice that that throughput downturn happened every year at the same period of time, but it wasn't like there was a huge burst of pent-up, half-finished work once it finished. And this was only three or four percent of the staff taking vacation in that period of time. The HR policy, of course, was use it or lose it by the end of the calendar year. So your superheroes, and the people in the teams which were most constrained, didn't have time to take their vacation through the year.

00:07:51

So they were faced with a situation of use your annual leave or lose it. So they did the right thing and used it, but the dependency chains between those groups - and the fact that they were the constraint - just meant that you had zero flow for that period of time. What does that cost? Say a hundred teams, eight people per team - that's 800 people. What's the cost of their salary alone? You might as well have just set fire to it. And it's an easy fix, right? Just say you lose your annual leave on the anniversary of joining the company, or something like that, to spread it out throughout the year. Or better still, don't put staff in situations where they don't have time to take their annual leave throughout the year; as a manager, encourage them to take it at other times.

00:08:38

Anyway, what looks stable over the long term isn't stable in the short term, and depending on what you're forecasting, you're going to have to get good at understanding that. Now, when we set out to forecast this type of problem - what a future value might be - there are three components to it. Trend is what you saw in that nice curvy blue line: it's a long-term increase or decrease. Sometimes it's not linear; sometimes it grows exponentially as we add people and get better, and so forth. On top of that, there will be some sort of pattern, a normal cycle - think of traffic flow, for instance. What would be a pattern in traffic flow? Maybe day of week - it's slower on Saturdays and Sundays than during the week - maybe time of day. They're patterns which are predictable, and they alter the trend: they bump it up or bump it down by a fairly fixed amount.

00:09:31

So everything moves up and down along the trend line, but you've got these patterns based on day of the week and time of day. And then on top of that you've got special causes, or noise - a blizzard, say: it doesn't happen all the time, but when it happens it's a hugely impacting problem. Whenever we're trying to predict a future value, we've got to be thinking about these three components, and which one matters most will vary by forecasting domain. Long term, the trend is probably the right one to look at - you don't want to micromanage these abnormalities of pattern or noise. But if it's medium term, a month or so out, you might want to look at trend and seasonality, because if your project started just before Christmas, you're a month late before you even started the project.

00:10:15

And that's going to matter. We think that we can just use the same model throughout the year. You can't, because the special causes and seasonal concerns start to overtake your plan. So you've got to start thinking, in your own work: okay, we're growing and adding people at a certain rate - that's probably going to affect our trend. Or, if we're a help desk, we know that on Monday mornings we get a lot more password reset requests, because there have been two days of chances for people's passwords to expire, not one, and so forth. So it matters which problem you're trying to solve and forecast. If you're doing it yourself, I'm going to give you a tool to start off with. If you have a data science team, they can quickly build you a model which probably understands the trend and a little bit of seasonality.
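To show what that trend + pattern + noise split might look like in practice, here's a rough sketch on hypothetical daily help desk counts - the names, numbers, and five-day structure are my assumptions, purely for illustration:

```python
# Split a daily series into trend + day-of-week pattern + leftover noise.
import numpy as np

# Three work weeks of hypothetical ticket counts, Monday first.
daily_tickets = np.array([30, 18, 17, 16, 15,
                          33, 20, 19, 18, 17,
                          36, 22, 21, 20, 19], dtype=float)
days = np.arange(len(daily_tickets))

# 1. Trend: a straight line fitted through everything.
slope, intercept = np.polyfit(days, daily_tickets, 1)
trend = slope * days + intercept

# 2. Pattern: the average detrended value for each weekday (Mon..Fri).
detrended = daily_tickets - trend
pattern = np.array([detrended[d::5].mean() for d in range(5)])

# 3. Noise: whatever is left once trend and pattern are removed.
noise = detrended - np.tile(pattern, len(daily_tickets) // 5)

print("trend per day:", round(slope, 2))
print("weekday pattern:", pattern.round(1))   # Mondays should stand out (password resets)
print("noise:", noise.round(1))               # large values hint at special causes
```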

00:11:07

You need to help them, train them, and coach them on what your noise and special causes are, and they're going to be different depending on where you're at. So, what factors might impact demand? Take ice cream sales - what factors are going to increase demand for ice cream? Temperature - and you're right, temperature does impact ice cream sales. How might we model temperature change in the simplest way? What might be a nice way to put it into, say, four nice easy categories? Season. So if you were trying to forecast ice cream sales, you would want to make sure you adjusted for the seasonal pattern: during summer we sell more ice cream; during winter, only I buy ice cream. Special cause variation - this is the big one. I worked as a VP of technology for Sabre and Travelocity, and the kind of thing that made me not sleep at night was: we're going to run a new ad campaign.

00:12:08

And then The Amazing Race - oh my God, we're going to sponsor The Amazing Race. Do you think that didn't have an impact on the load on our servers and the capacity we needed to be ready for? And even then it got harder downstream, because lastminute.com used to run a "when it's gone, it's gone" campaign. So we'd go from nothing at 8:00 PM at night to three and a half million hits in the next six minutes or so, and back to nothing again. You try managing that server farm. That's why I'm grey - I used to have hair. But it's important stuff, right? No matter how good you are, you need to help any data science team understand the factors at work for you, and you need to work with the business to understand how this stuff is going to affect demand on your servers and on the people who might need to be on the help desk. Because down the track, even after these events happen and people have bought their tickets, we see an uptick in help desk calls when a plane gets canceled or there's bad weather and so forth. I managed to preside over the biggest one - Michael Jackson's concert ticket sales; he tragically died. We had enough capacity to handle six million tickets being sold in about four and a half minutes. We could refund about one a week.

00:13:24

So no matter how good you think you are at this, you're going to get caught by these special cause concerns. The Iceland volcano, out of the blue - not predictable by us. I mean, we knew it might happen at some time, but we certainly didn't know it would happen while I was in the UK trying to get back to Dallas. And it changed the way that hotels were sold, so even inside hotels, the whole demand structure for the housekeeping staff they needed completely changed. Context, context, context. So, to get you started doing this easily, I built a spreadsheet - I do that sort of thing; on date night my wife and I sit down and knock together a spreadsheet for various purposes. This one just takes a series of dates of things that have happened in the past for you.

00:14:10

And it goes and projects it out. Now, it doesn't just project forward - it projects backwards as well. So - there we go, big button there - the orange line is the trend over time, and you'll see we're slowly getting more and more work here. The blue line is the actual data, and the forecast line is the dotted line. Notice I ran it backwards - I ran it backwards so that I could automatically highlight which points are possible special causes, because if they happened once, they might happen again. Now, the forecast is as good as we can do; those special causes were, to a large extent, unpredictable, unforecastable, unknowable in advance. So those errors and bad forecasts are absolute gold. And I see people try to minimize the number of outliers they get, when you should be trying to maximize it.

00:15:05

Because that's where most of your learning is. And whoever you work with in the data science teams inside your organization, you want to sit with them and work out what these factors are, whether you should incorporate them into the model in the future, and how they change over time. So again, this is a spreadsheet - there are no macros. You just put in a series of dates or a series of numbers, and it does this time series forecasting for you. And it looks for multiple different types of patterns - day of week, week of month, that sort of stuff - just in case there's a cadence or work cycle in your organization. So, special causes - there you go. Know which factors are important in your context: trend, patterns, and special causes. What will be the best predictor in your case will vary depending on what you're trying to predict. The shorter term it is, the more you have to worry about seasonality and special causes. The longer term it is, the more you can use trend lines - standard regression lines - and know what you're getting to.
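One crude way to do the "highlight which points are possible special causes" trick from that spreadsheet - again my sketch, with an invented threshold of three times the typical deviation - is just to flag anything far outside the usual spread:

```python
# Flag possible special causes: anything far outside the usual spread.
import statistics

weekly_throughput = [12, 9, 14, 11, 3, 13, 10, 15, 12, 8, 14, 47, 11, 13]  # hypothetical

centre = statistics.median(weekly_throughput)
spread = statistics.median([abs(x - centre) for x in weekly_throughput])    # robust typical deviation

for week, value in enumerate(weekly_throughput, start=1):
    if spread and abs(value - centre) > 3 * spread:
        print(f"week {week}: {value} is a possible special cause - go ask why")
```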

00:16:10

So, forecasting duration and dates. Everyone says, okay, thanks for that, Troy, but how long is this thing going to take - and be within a week, over six months, please. That's what we're having to try and do: forecast these very unpredictable events with supreme accuracy. So, when you're faced with a problem like this in forecasting, you go and copy someone else. That's what we do as developers; it's certainly what we do as data scientists. Go and look at the way Google Maps presents its forecast for travel times, which is a time-based forecasting system. They do a couple of things which are quite valuable. They don't give you one option - they give you multiple options.

00:16:52

Why? Because they're leaving the context to you. If it's date night, take the route that's a bit longer - stay away from the kids for an extra couple of minutes. Take the option that takes a bit longer to get where you're going, or take public transport. The second thing they do is they don't commit to an arrival time until you actually leave. What are we doing in software? We forecast to a date before we've even formed the team or placed the ads to hire staff. You can do everything about choosing which option to take with a duration that you would do with a date - in fact, people convert it back to a date in their head anyway: oh, so that one's shorter. So just stick with duration as long as you can; don't go committing yourself to a date until you start.

00:17:40

But when you do start, keep going through that cycle of modeling and refining over time, to make sure you understand when you're deviating and when your model is being impacted by special causes or seasonality patterns you don't understand. So even when forecasting - if we had been continuously forecasting for that hundred-team organization, we would have been able to get an early indication during the Christmas period that our dates were going to slip out. But if we just stuck with average data over the 12 months, that would be invisible to us. It's in the errors where the value is. So, contrasting software planning with Google Maps: if you're giving one forecast, even though your teams can see multiple approaches for delivering it - stop it. Start giving multiple options and let the decision-maker weigh them: this one has these benefits, this one has those benefits - over to you.

00:18:33

If you're giving a calendar date for work with undefined completion or start dates - like, the team's not formed yet, but it's going to be done on the 16th of November - stop. Make sure you're doing that analysis and comparing the options using just duration. And when you do start, continuously forecast, so that you know earlier, when you can still react with a much smaller push and nudge and have many more ways of solving the problem than if you just waited until it was late near the end. So why, Troy - why would you not just use that trend line and project it all the way up? Well, there's something that happens when we use an average trend line to forecast the future, and it's to do with the fact that most outcomes end up on a somewhat symmetrical distribution of some kind - a normal distribution.

00:19:26

In this case I stole the picture from Wikipedia. If you project out an average line, 50% of the outcomes will happen on or before the date that you give, and 50% of them will happen after. So you're really offering a coin-toss probability. I know we're in Vegas and those are fair odds, but you might want more than that. If it's the team developing the software for your heart pacemaker, you might want a bit more rigor around it being ready when it's inserted into your body. So this is how it works. To solve that problem, and be able to go on the record with a higher degree of certainty in the predictions we give, we have to do probabilistic forecasting, which is nothing more than this: where normal math says...

00:20:13

We take the amount of work we have to do and divide it by the rate at which we do work, and that gives us a duration. What we do when we do probabilistic math is say: well, it's about 20 to 30 stories, and we get between about one and five done a week - so that's between 4 and 30 weeks. People get uncomfortable with that wide range and get a bit stressed about it, but with the variability the team has estimated - because you haven't got real data just yet - the right answer really does lie in there. It's just unusually wide, unusably wide. And what we've got to get you to do in this session is to insert your team's actual pace - your actual data - into that denominator.
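The division math itself is tiny - this is just the arithmetic described above, using the 20-to-30-stories and one-to-five-a-week numbers from the example:

```python
# Range math: amount of work divided by delivery rate, using both ends of each range.
stories_low, stories_high = 20, 30     # rough guess at the backlog
rate_low, rate_high = 1, 5             # stories finished per week

best_case = stories_low / rate_high    # 20 / 5 = 4 weeks
worst_case = stories_high / rate_low   # 30 / 1 = 30 weeks
print(f"somewhere between {best_case:.0f} and {worst_case:.0f} weeks")
```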

00:21:00

That way you can then say: at the rate this team is currently delivering, this is when the output is going to land. And then we get a bit more precise, using some fancy math, to work out where we're 85% sure, 90% sure, 95% sure, depending on where you want to go. As we go higher in probability, we get closer to the 30 weeks - but there are very few instances where 30 weeks is the right answer. Let me show you how it works. So you've got a team with some historical data - story points, velocity, tickets completed, work completed - and there's a backlog of work you have to get through. Well, we could just take the slowest of the previous set of throughputs that we have, or we could take the average and project it out - that's the regression line.

00:21:47

That's the one with a 50% chance of occurring: 50% of the outcomes will be to the left of that date, 50% will be to the right of it. We can take the worst-case sample and just project that out. Then we can take the fastest we've ever done, project that out, and bring it up to the line. And then we can get all crazy, have a few drinks, and start throwing in random ones. And if we do that - if we just keep sampling from the historical data randomly and projecting each sample out to see where it crosses and intersects the "you finished everything" line - you'll see that we start forming this distribution of possible outcomes. And when someone says they're 85% certain in a probabilistic forecast - of rain, of weather, of snow (we're in Vegas) - what they're saying is that 85% of those lines were to the left of that point.

00:22:43

So only 15% of them were over to the right. Now we can go on the record with a more precise probability. And again, there's a tool for that to get you started. It's not the most complex tool; it simply does that division math you just saw. You enter a start date when you know it, a low and a high guess for the amount of work you have to complete, and initially an estimate of how fast you're going to deliver that work. And it does exactly what that other sheet did: you put your actuals on it, so you can see when an abnormality happens and a special cause is slowing the team down.

00:23:27

You can read that chart by stepping back fourteen feet and just roughly running your eye along the line. I was a VP for many years, and I did that once - I said, it's about December - and that was not seen as professional, but it's actually better than what they were doing, which was just taking the average and projecting it; they just didn't realize it. I left. So that's what we're doing: we're going from a 50% chance on that regression line to an 85% chance. Initially the dates will be longer than people like, and you'll have to tell them why. And then you'll have to say: well, given that before we would have missed half the time, I think we should only miss about 15% of the time - and you make a deal on that. And you're in a much better situation to set capacity using your team's actual historical rate.
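A minimal Monte Carlo version of that sampling idea - my sketch, not the spreadsheet itself, with a made-up backlog and seven hypothetical weeks of history - looks like this:

```python
# Repeatedly replay random weeks from the team's history until the backlog is
# done, then read off the 50th and 85th percentile finish times.
import random
import statistics

weekly_throughput = [4, 7, 2, 6, 5, 3, 8]   # last 7 weeks of completed items (hypothetical)
backlog = 60                                 # items left to deliver
trials = 10_000

finish_weeks = []
for _ in range(trials):
    done, weeks = 0, 0
    while done < backlog:
        done += random.choice(weekly_throughput)   # sample a historical week at random
        weeks += 1
    finish_weeks.append(weeks)

cuts = statistics.quantiles(finish_weeks, n=100)
print("50% chance of finishing within", cuts[49], "weeks")   # the average-style answer
print("85% chance of finishing within", cuts[84], "weeks")   # what you'd put on the record
```

As real throughput arrives, you swap the guessed samples for actual weeks, which is exactly the "remove the estimates, use real data" step described here.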

00:24:10

That distribution slide is hard to read and looks mathy and geeky, so I converted it to a table with a red/amber/green rating, and I try to spell out the wording of what it means. So again, this is the simplest possible MVP for doing probabilistic forecasting of your work. If you need to do better, start here - don't finish there. And that's the date you would have given: it's about 14 days different - two weeks' difference between the 50% date they expected and when I think we would really have had a chance of hitting it. And then, as your teams start working, you throw in their real data - you throw it back in as samples of your history - and it stops using the estimate and starts picking up the same patterns your teams have. So if there is a seasonality pattern in it, it will be replicated in the forecast this way. And we do some fancy stuff behind the scenes as well. But you start off with a nice wide range estimate to compare options; when you choose one, you drill in and do a bit more analysis to maybe narrow that range - but you still want a range, and you still want it wide. When you start getting actual data, you start removing the estimates and using real data as you go along, to get your system profiled and modeled correctly.

00:25:24

All right. Now, how do you present that data? That nice color chart - I know a lot of people are using it; they've stopped me in the hallways here about it. There's another spreadsheet which does this en masse. And the idea is that when we're in the room and someone says, we want to change the order, or we want to do something else as well - the more immediately we can say, no, we don't have the capacity to do that, the better impact it has. Because once we leave the room, they've already got it in their head: oh, okay, that problem's solved, I'm going to get A and B - when really they're only going to get A or B. And that's what the analysis is going to tell you. So you've really got to find ways to bring the capacity argument into the rooms where people are starting to set expectations, because the moment it leaves the room, their expectation is set.

00:26:12

So this spreadsheet just gives a nice simple thing: a tick or a cross. And when they get upset about a missing feature, you say, well, you want us to start that sooner? And you change the start order, and now something else gets an immediate cross, and they go, ooh, I really want that as well. And it leads to a conversation of, you know, it sucks to be you - work with me to make this work. Which might mean splitting features A and B and getting them the most important part of what they really wanted. And I hear this objection: well, how much data, though? How much data do I need before this stuff works? And you're balancing - I know, in the real world of statistics we want a large amount of data, but in our world it goes stale so quickly that using out-of-context, stale data actually increases the error too much.

00:27:05

We pay too high a price for that data. There are some fancy statistical reasons why seven samples - whether you're dating or whether you're forecasting software - is the right amount of data to have, to understand roughly what good and bad, or better and worse, looks like. So, who dated more than seven people before getting married? I'm a coder - I dated one and married her. I bet you sexy people have much more rigorous standards than I do. With fewer than three samples you're better off using a guess, because the data isn't good enough yet to really give you a good range. About seven to fifteen samples is the right amount. Please delete every bit of data you have beyond that point, because the worst thing is that someone uses it, forms an average, and that infects your projections.

00:27:57

People ask why these don't connect to your tools - it's because I don't want you to grab all the data. I want you to type in the seven samples that match. So: seven samples, and don't look back too far. So why do we do probabilistic forecasting? Because we want better than coin-toss odds of our forecasts coming true. How do we get that? We start off by doing math on ranges of estimates, and then we move to using real data in those estimates as we get it. Be really, really aware that you're going to have to sell to people that there isn't one answer - there are multiple answers - and that we're going to keep on top of it over time, to make sure those outliers don't actually affect us and that we understand the model well enough that the actual data is trending the way our model says it should.

00:28:44

And if it doesn't - when it doesn't - we understand why. So you've got to balance recency with sample size. That's probably one of the biggest issues; after start date, which is number one, stale data is the number two reason I see forecasts really fail, using any method - whether you're averaging or doing good probabilistic sampling. So again, the slides are there. We have to end our slide decks here with what I need help with - what I see as the two biggest impacts on predictability facing our industry. So here's number one.

00:29:25

The guy on the bicycle - albeit with a slower average speed and a higher amount of energy expenditure - is easier to predict than the travel time of the traffic alongside him. And this is important, because we often run our systems at high utilization, and without flow we really can't tell where things are going to land - and time of day matters, and stuff like that. This is my commute, in New Zealand - you see the point, right? You don't want to be stuck in this traffic. And it doesn't matter if you have 10x teams: if you're overloading all those 10x teams, they will not be able to use the power that they have, because they will be constantly impeded by a constraint somewhere else in the system. So this is a very important business problem to solve: the understanding, in our industry, that high utilization makes things completely impossible to predict, because we're trying to predict on a very steep curve where lead time changes dramatically - by ten to a hundred times - with a very small change in utilization, like unplanned work, a drive-by request, or an outage.
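A rough way to see how steep that curve is - this uses the textbook single-queue approximation where delay scales with 1/(1 − utilization); it's my simplification, not a slide from the talk:

```python
# Why high utilization wrecks predictability: delay grows like 1 / (1 - utilization).
for utilization in (0.50, 0.80, 0.90, 0.95, 0.99):
    delay_multiplier = 1 / (1 - utilization)
    print(f"{utilization:.0%} busy -> lead time roughly {delay_multiplier:.0f}x the unloaded time")
# 50% busy is about 2x, 90% about 10x, 99% about 100x - the "ten to a hundred
# times" swing from a small change in load.
```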

00:30:46

The teams you don't want to be forced to predict are the ones that have all the experts in the organization, because they're the ones who get pulled off. You want a team of really average developers like me, because we never get asked questions; we never get pulled away from what we're doing. And it only takes one absent person to tip you over that point. The second biggest problem we need to solve, and help people understand, is dependencies. If there were four people who had to be seated at a restaurant before they would take you to the table, the chances of actually being seated on time are one in sixteen. There's only one case, with a set of dependencies, where you all arrive on time; in every other case - fifteen of them - at least one person is late. Now, when we're delivering something, it's very similar to this, right? We can't deliver until all these four sequential steps are done, so our odds are very, very low, and they're very lopsided. And if you've got a team and infrastructure architecture like this, where there are seven levels, and we went and found a story which had to travel through seventeen dependencies to get to the top - what's the chance of any feature or forecast of yours delivering on time?

00:31:55

So the formula is one chance in two to the power of the number of dependencies - with seven levels, that's one in 2^7, one in 128. There it is: that's your chance of delivering on time. The odds become incredibly stacked against you. So if you're dealing in the probability game - and we're here in Vegas, so we should be - every dependency that you remove doubles your chances of delivering on time. That's why we're in this trend of making nice small pizza teams of three or four or seven people: if we can actually find a way of bringing the groups and the skills needed together into one team - five teams, one in 32 - we have a much greater chance of delivering on time. With that, thank you. Just remember: manage utilization, help us understand it and its impact, and help your companies understand the impact of dependencies and how they really put you against the odds of delivering on time. So again, there are cards at the exit with the links to all the spreadsheets and such - they're on the desk by the water cooler. You can get all the slides at the Bitly link: Forecasting DOES, capital F, capital D. I'm able to stick around for questions if you want, but I know you're hungry, because we're running ten minutes late. Okay.
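For the curious, those dependency odds work out like this, assuming (as the talk does) that each dependency is an even coin toss between being on time and being late:

```python
# Chance of everything landing on time halves with every dependency you add.
for dependencies in range(1, 8):
    on_time = 0.5 ** dependencies
    print(f"{dependencies} dependencies -> 1 in {2 ** dependencies} ({on_time:.1%}) on-time chance")
# 4 dependencies: 1 in 16 (the restaurant example); 7: 1 in 128 (the seven-level architecture).
```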