Las Vegas 2018

The Data Behind DevOps - Becoming a High Performer

How do you become a high-performing technology organization?

Over the past four years, the State of DevOps Report has shown how high-performing IT teams decisively outperform their low-performing peers. This presentation shares highlights and surprises from over 23,000 responses.

Dr. Nicole Forsgren is Co-founder, CEO and Chief Scientist at DevOps Research and Assessment (DORA). She is best known for her work measuring the technology process and as the lead investigator on the largest DevOps studies to date. She has been a professor, sysadmin, and performance engineer. Nicole's work has been published in several peer-reviewed journals. Nicole earned her PhD in Management Information Systems from the University of Arizona, and is a Research Affiliate at Clemson University and Florida International University.

Jez Humble is co-author of the Jolt Award winning Continuous Delivery, published in Martin Fowler's Signature Series (Addison Wesley, 2010), Lean Enterprise, in Eric Ries' Lean series (O'Reilly, 2015), and the DevOps Handbook (IT Revolution, 2016). He's spent his career tinkering with code, infrastructure, product development and consulting in companies of varying sizes across three continents, most recently working for the US Federal Government at 18F. He is currently researching how to build high performing teams at his startup, DevOps Research and Assessment LLC, and teaching at UC Berkeley.


Dr. Nicole Forsgren

CEO and Chief Scientist, DORA


Jez Humble




This morning, I described Brian Eno's concept of scenius and the characteristics that mark great sceniuses, one of which is a rapid dissemination of tools and techniques. I also mentioned how the goal of science is to explain the most observed phenomena with the fewest number of principles, confirm deeply held intuitions, and reveal surprising insights. Incidentally, I was just reminded backstage 20 minutes ago that this is what PhD candidates and philosophy majors refer to as the principle of parsimony. So the next talk is a great embodiment of both scenius and science. I've had the pleasure of working with Dr. Nicole Forsgren and Jez Humble on the State of DevOps Report for five years. It is without doubt one of the things I'm most professionally proud of, and it's difficult to overstate how much I've learned from that effort. I'm also delighted that both Nicole and Jez were able to commercialize this research into a company called the DevOps Research and Assessment Company, which we affectionately refer to as DORA. And all the research and the underlying theory and methodology were put into the book Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations, a book that I routinely cite from. My favorite page is page 138, where it describes under what conditions you can assert prediction, not just correlation. Without further ado, Dr. Nicole Forsgren and Jez Humble.


Thanks everyone for joining us. We pulled a quick switch and decided to actually talk about "if you don't know where you're going, it doesn't matter how fast you get there." So let's start by talking about getting better and measuring performance. We'll start by outlining some common mistakes that we see. First, outputs and outcomes: what's really important here is focusing on outcomes, not just outputs. Also individual and local measures versus team and global measures. And then we'll walk through some common examples really quickly. So the most common mistake we see here is lines of code. I'm sure we all know that this is a bad idea, right? I also get asked about this every single month. I can't blame anyone for this, because it's really easy to measure, but it's really tough, right? More isn't necessarily better: it gives us bloated software, higher maintenance costs, higher cost of change, okay? Well, on the inverse, less isn't necessarily better either, because all that can possibly lead to is one line of magical, magical code that can do a million things, and then no one can actually read it, so no one can ever maintain it <laugh>. The best thing that we can possibly do sometimes is to solve a business problem with the most efficient code possible.


And actually the best thing you can do is delete code and solve a problem. That's my favorite.


That really is the very, very best: delete code. But how are we gonna measure that and reward someone for it?


There is actually a story, in the story of how the Mac was made, about someone who deleted several tens of thousands of lines of code and put that in their weekly report about how much code they'd written. They were asked not to enter a number in that box again.


I like it. Okay, here's another example,


Velocity. So, who here is doing agile? Okay, who here has their velocity measured and compared to other teams? Okay, sorry,


I'm not saying you should feel bad,


But your managers should feel bad. Yes. So this is a terrible idea. Velocity is a capacity planning tool; it's not a measure of productivity. And the idea of velocity is we'll see how much we can get done this week so we don't commit to too much work next week, or however long your sprint is, so we can make sure we can predict roughly when we're gonna be done. But when you use that to compare teams, that's hugely problematic for a couple of reasons. Firstly, velocity is extremely context dependent, so you can't actually take a velocity in one team and compare it to another team. Secondly, if you are measured by your velocity and some other team asks you to help them with their task, but your manager is like, get your velocity story points this week, how likely are you to help them actually achieve their goals? Not very, I'd suggest. So measuring team velocity tends to cause people to not work very effectively together. And in an enterprise where we have dependencies between teams, that's a terrible idea, because I might get my points, but the organization as a whole probably won't get what it wants. So, for this reason: bad idea.


Is this me? Oh yeah, utilization. So, people love to measure utilization, especially CFOs, because they're paying a lot of money for their expensive resources and they want to know those people are busy. Hence my friend had a T-shirt that said, "Jesus is coming, look busy." That's your CFO <laugh>. However, you don't actually want to have utilization at a hundred percent. Results from queueing theory and math show us that if a system is a hundred percent utilized, you'll experience the maximum possible lead times. What we actually want to optimize for is throughput, not utilization. And in order to achieve the highest throughput, utilization must be well below 100%.
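That queueing-theory point can be sketched in a few lines of Python, assuming a simple single-server (M/M/1) model; the specific model is an illustration on my part, not something the talk names:

```python
# Average lead time in an M/M/1 queue grows as 1 / (1 - utilization),
# so a system driven toward 100% utilization sees unbounded lead times.

def mm1_lead_time(service_time: float, utilization: float) -> float:
    """Average time a work item spends in the system (waiting + being served)."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be below 1.0; at 100% lead time is unbounded")
    return service_time / (1 - utilization)

for rho in (0.5, 0.8, 0.9, 0.99):
    print(f"utilization {rho:.0%}: lead time = {mm1_lead_time(1.0, rho):.1f}x service time")
```

At 50% utilization, work takes twice its service time end to end; at 99% it takes a hundred times, which is the argument for optimizing throughput rather than keeping everyone busy.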


To sum up, remember: outcomes, not just outputs, right? Not lines of code. And global measures, not just local measures. So no matter how quickly we're improving performance, and I promise I'll give you an example quickly: where are we going? So here's one good example of a performance metric that we have used: software delivery performance. It's an outcome measure, not an output measure. It measures how well we're developing and delivering code. And it also captures measures in tension, right? This is also really nice. To quote my good friend Sam Guckenheimer: if you ever capture only one metric, you know exactly which metric will be gamed. So it's nice having measures in tension.


We have speed and stability here, both throughput and stability. Our speed measures are deployment frequency, or how often we deploy when the business demands it, and lead time for changes, from code commit to code deploy. Notice again, these are global, right? You have to collaborate with people all the way through that value stream. We also have time to recover and then change fail rate; these are our stability metrics. So I know everyone here is thinking, this is nice, but what do we know when we compare our high and our low performers? So here's what we see in our 2018 study. Our highest performers see 46 times more frequent code deployments and over 2,500 times faster lead times. When we talk about stability, we see 2,600 times faster time to recover from incidents and seven times lower change fail rate. Why do we care? It lets us beat our competitors to market. It helps us pivot when we need to pivot.


It helps us get our features to our customers, whether our customers are end users, internal or external, whether we're a commercial enterprise, government, or not-for-profit. This is all very important. Again, speed and stability. By the way, these go together. We don't see trade-offs, and this is true for five years of analysis and data. They go together, and if you take one thing away from this talk, please let it be that. Now, this is another thing that I really, really love. Okay, I'm gonna change my mind: if you only take one thing away, let it be this <laugh>, JK. Take a look at this high performing group in this year's data set. Well, in all the data sets, we take a very data-driven approach. When I run the analysis to identify who is in the performance profile categories, I take those four metrics, right?


The two throughput and the two stability metrics. And I kind of throw 'em in and we see where the data cut points are. Take a look at that high performance data set: it's really large. This should be incredibly encouraging. High performance is available to everyone. That elite performance group, kind of up at the top, it's a subset. The highest performers are continuing to optimize on all measures. So yeah, the best are still killing it. But you know what? The industry is moving. The industry is changing. Excellence is available to everyone; you just have to execute. Yeah, it's kind of hard and yeah, it's kind of tough, but you know what, this is not reserved for just a handful of tiny annoying unicorn companies hanging out in the Bay. Everyone can do this. Okay, here's an eye chart, sorry, it's hard to read; I promise we'll send out all the slides if you wanna know what this looks like. Again, the elite performers are optimizing on speed and stability: deploying daily, lead time is less than an hour, time to restore is less than an hour, change fail rate is zero to 15%. Low performers are struggling: deploy frequency is between once a week and once a month, and so is lead time for changes. Time to restore has slipped; time to restore is between one week and one month. And change fail rate is between 46 and 60%.
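As a rough illustration of those cut points, here is a toy classifier in Python. DORA's real profiles come from data-driven cluster analysis, not fixed thresholds, so the rules below only paraphrase the numbers quoted on the slide; the function and units (hours, percent) are my own framing:

```python
# Toy tiering using the thresholds quoted in the talk (NOT DORA's actual
# cluster analysis). Inputs: deploys per day, lead time / time-to-restore
# in hours, change fail rate in percent.

def classify(deploys_per_day, lead_time_h, restore_h, change_fail_pct):
    if (deploys_per_day >= 1 and lead_time_h < 1
            and restore_h < 1 and change_fail_pct <= 15):
        return "elite"       # daily deploys, <1h lead time and restore, 0-15% CFR
    if (deploys_per_day <= 1 / 7 and lead_time_h >= 7 * 24
            and restore_h >= 7 * 24 and change_fail_pct >= 46):
        return "low"         # weekly-to-monthly deploys and lead times, 46-60% CFR
    return "medium/high"     # everything in between

print(classify(3, 0.5, 0.5, 10))       # elite
print(classify(1 / 30, 336, 240, 50))  # low
```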


Now, don't forget availability.


This year we extended our outcome measurements to also include availability in production. So we've looked at measures of software delivery: lead time and deploy frequency partly have to do with development, partly with release or deployment. Change fail rate is a measure of the quality of your release process, if you like. Time to restore is about how quickly we can respond to and remediate incidents in production. But we also wanted to look at availability as well. So availability is obviously the extent to which users can access your service, normally measured in terms of service level objectives such as uptime or downtime per unit time, such as downtime in a month. It's also about your ability to make promises about your uptime to your customers and to be able to keep those promises as well. So we looked at both of those things together. What we find, again, is that people are not making trade-offs as you might expect, but also very cool to see: people in that elite performing group are 3.5 times more likely to have good outcomes in terms of availability. So we can say that throughput and stability and availability move together, and together they predict better outcomes in terms of organizational performance, both commercial and non-commercial measures. So we know what good looks like. The next question is how do we get there?
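The service-level-objective arithmetic behind an availability promise is easy to make concrete. This sketch, which assumes a 30-day month for simplicity, converts an uptime objective into the monthly downtime budget it implies:

```python
# Convert an availability SLO into the downtime budget it implies per month,
# i.e. the promise you have to be able to keep.

def downtime_budget_minutes(slo: float, days_in_month: int = 30) -> float:
    """Minutes of allowed downtime per month for a given availability SLO."""
    total_minutes = days_in_month * 24 * 60
    return total_minutes * (1 - slo)

print(f"99.9%  uptime -> {downtime_budget_minutes(0.999):.1f} minutes of downtime/month")
print(f"99.99% uptime -> {downtime_budget_minutes(0.9999):.1f} minutes of downtime/month")
```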


So how do we get there? Is anyone here familiar with my maturity model rant? Yeah. So if you're not: maturity models are dumb. If anyone tries to sell you a maturity model that ends in, oh, let's probably say level five, they're just selling you something. So how do we... I don't have time for this. Gene's in a hurry and you only gave me 30 minutes. So find me, I'll tell you why they're dumb. There's lots of reasons they're dumb. So how do we get better? We need to identify our constraints in terms of processes and capabilities, things like technology, process, culture, and continuously improve. Okay? Now what does this look like? So what do I mean when I say capabilities?


Well, Nicole <laugh>, it just so happens we recently released a book where we talk about our research program


And you can get a free one tonight,


7:15 to 8 o'clock, we are giving away copies of our book. Signed, sponsored by


Zaia. I feel like we'll be here all night. Tip swimmers,




<laugh>. <laugh>.


Thank you. Thank you.




Um, so there's,


'cause no one can read this. Sorry.


Yeah, I mean, sorry, you can't read this at the back. Doesn't matter. It's a big diagram. It's on our website. You'll find


It. We call this the BFD. That's what that means.


The big and fulsome research diagram, <laugh>. Um,


So the key things that I wanna point out on this diagram: obviously you've got software delivery performance, speed, stability, availability, predicting organizational performance, commercial and non-commercial. What predicts software delivery performance? Well, there's four key things we're gonna mention. Number one is culture. We measure culture; we'll talk about that later. Culture predicts both software delivery performance and organizational performance. And then we've got three groups of capabilities that predict both software delivery performance and culture. So how do we change culture? By implementing practices, by changing behavior. Three key groups of practices. Number one, technical practices: things like the practices that make up continuous delivery. Lean management practices, like managing work in process, having visual displays that allow us to see quality in our system and respond to that quality information, being able to get feedback from production, and having a lightweight change approval process.


And then some lean product development capabilities, such as working in small batches; making the flow of work visible by using, for example, Kanban boards, burn-up diagrams, or cumulative flow diagrams; gathering and then implementing customer feedback; and enabling teams to experiment with ideas for new features and for process changes. If you're working in an agile organization and you get requirements and you must implement those requirements and you're not allowed to change them or add your own, it's not agile. Same for process improvement work as well. So all those things together impact both culture and software delivery performance. So this year we found some new technical practices. We extended our model really


Quickly. Anytime you see an arrow, that's a predictive relationship. This goes beyond correlation. So as Jez mentioned: but wait, there's more.


Yeah, "predicts" or "impacts" are the words you can use when you see arrows like that. So this year we looked at a whole bunch of new stuff, and there's way too much to go through in the time available, so we're just gonna give you some key highlights. We looked at continuous testing, which is shifting left on testing, not just automating it. And we've gotten into trouble for this from the testing community, for talking about test automation. And we wanna make really clear: we're not talking about getting rid of testers. Absolutely not. Testers are critical to effective software delivery. Manual testing, like exploratory testing and usability testing, those are critical activities. The crucial thing is they should not be a phase after dev complete; they should be built into the development process and performed continuously throughout the delivery lifecycle. And then some other things that we're gonna talk about in a little more depth right now,


Really quickly: everything in bold is things that we've added this year. Anything not in bold, we have reconfirmed from prior research. So a couple of things we wanna touch on 'cause we're running outta time. Databases. Integrating database work, again, positively contributes to software delivery and availability, which this year we called SDO performance. So this is super important, right? The database change process is important. Databases are hard. So what does this look like? Interestingly or not, it looks a lot like shifting left, right? What does DevOps do? It integrates operations into the team: communication, config management, including teams, visibility, so we all know what's happening when we include the database, when we have these schema changes. Another thing we looked at this year,


Monitoring and observability. So we actually took some care to define both monitoring and observability, because experts in the field will tell you that they are two different things. So these are our kind of short definitions that we used. We find that if you are doing these things, you are 1.3 times more likely to be an elite performer. So having monitoring and observability solutions in place predicts software delivery and operational performance: speed, stability, availability. Interestingly, what we found is that people perceive monitoring and observability as being the same thing; they don't perceive them as being different things.


If you wanna fight, come find me <laugh>. We can stats-fight about this, it's fine. It's really interesting. We talk about the implications of this from a research standpoint in this year's 2018 Accelerate State of DevOps report.


And it doesn't mean they aren't different things; it just means that people don't perceive them as different things.


Among the study stuff, stuff, stuff we can research.


Words, words, words,


Words. Okay? Also, cloud infrastructure helps, of course.


Yay the cloud.


But only if you do it right. Who here's in the cloud, working on stuff in the cloud? We have some cloud. That's


A lot of people. This


Is okay everybody. We're gonna play a game. Play along.


This is your exercise.


Exercise. This is your exercise. Everyone put your hands up if you're in the cloud.


Keep your hands up, keep your hands




So, NIST. This is a bit like PE class where you have to hold your hand up for a really long time, so get ready. Hopefully you have a support hand.


You're gonna hold your hand up for a really long time.


Yeah, if you're doing well, you will suffer more; such is life <laugh>.


I love this game. Okay,


The National Institute of Science and Technology, sorry, the National Institute of Standards and Technology, defines five essential characteristics of cloud in what is the smallest and shortest paper of theirs that I have seen. So, characteristic number one. If you do not meet this characteristic, please put your hand down. Characteristic one is that anyone in the organization must be able to self-service the cloud resources they want, on demand. So we know we love ServiceNow, ServiceNow's great, but if you have to create a ticket in ServiceNow in order to get your server in the cloud, it's not a cloud. Put your hand down. It's just a very expensive data center. So, hands down if developers can't self-service the resources they need on demand.


Okay, good work, y'all. Alright? That rules out most people. Okay, next up. Number


Two, if you cannot access your cloud stuff on all the devices you would like to be able to access it on, put your hands down. So the whole point of


Cloud, yeah, so many devices, workstations, tablets, phones,


You should be able to get it on all those devices. Yes, broad network access. Okay? Number three, resource pooling. So a single physical piece of hardware should be able to support many, many different virtual instances. If that's not true, if you're not seeing resource pooling, put your hands down. Most people kind of have that, so I wouldn't expect to see many hands go down. Okay,


Next up,


Rapid elasticity. So you should have the illusion of infinite resources, or as Nicole said in this slide, bursty,


Like magic <laugh>. Yes. Okay, last one.


Okay, measured service. So what this means is the service is metered: you only get charged for what you use. You're not paying the capital costs.


Okay? Okay, we're about <laugh>.


Ooh, that's not good.


What? Let's give all these people a round of applause. Woo. By the way, y'all match our sample, because of the people that said they were in the cloud, only 22% are actually in the cloud. It's fine. So, do you ever talk to people... Yeah,


This happens all the time. This is why we did this research, 'cause it winds us both up. People are like, we're in the cloud, and it's like, really?


But I don't see the benefits. 'cause


Because you're not doing it right.


I love you, but you're not in the cloud.


Uh, it's just a very expensive data center.


As Corey Quinn said, you can't slap a sticker on your data center and call it the cloud.


However, if you are actually doing all these things, what we find is that you are 23 times more likely to be an elite performer. So cloud can really make a huge difference if you meet those essential characteristics that NIST defines.
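The hand-raising exercise above amounts to a five-item checklist. Here it is as a sketch in Python; the five characteristics are NIST's (from SP 800-145), but the field names and `is_cloud` function are invented for illustration:

```python
# "Are you actually in the cloud?" as a checklist over NIST's five
# essential characteristics. Miss any one and it's just an expensive
# data center.

NIST_CHARACTERISTICS = [
    "on_demand_self_service",  # no ServiceNow ticket to get a server
    "broad_network_access",    # workstations, tablets, phones
    "resource_pooling",        # many virtual instances per physical box
    "rapid_elasticity",        # the illusion of infinite resources
    "measured_service",        # metered: pay only for what you use
]

def is_cloud(environment: dict) -> bool:
    """True only if every one of the five characteristics holds."""
    return all(environment.get(c, False) for c in NIST_CHARACTERISTICS)

env = {c: True for c in NIST_CHARACTERISTICS}
env["on_demand_self_service"] = False  # ticket required: hands down!
print(is_cloud(env))  # False
```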


Again, remember earlier when I said you should be super encouraged: high performance and elite performance is absolutely possible. You just have to execute. We know how; just do it. It's hard, right? I know it's difficult, but you can do it. We know how; NIST tells us how, in a short white paper. Okay, next up: open source is good. Who here's using open source? Woo. Okay. Highly correlated with SDO performance. Elite performers are 1.7 times more likely to make extensive use of open source components, libraries, and platforms, and they're more likely to plan on expanding this. So you're in good company. Also, outsourcing is bad. Now, we're talking about functional outsourcing here. If you take all of dev, all of test and QA, all of ops, and you just throw it out somewhere else, it's not gonna work. Low performers are almost four times as likely to be using functional outsourcing. And if you think it's gonna save you money... not you; if someone in your organization thinks it's going to save you money, the costs often far exceed the savings. And we outline this in the report this year.


You can do the math, basically, using cost of delay. When you do functional outsourcing, you tend to have to batch up work into big batches. Your high and low value features get mixed together into one enormous batch that takes months to deploy. You can calculate the cost of delay of delaying those high value features, and that will typically outweigh the money you save.
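That math can be sketched directly. The dollar figures below are invented for illustration, but the shape of the comparison is the one described here: value lost per week of delay versus the supposed savings from outsourcing:

```python
# Cost of delay: value a feature would earn per week, times the weeks it
# sits undeployed in a big outsourced batch. All numbers are hypothetical.

def cost_of_delay(value_per_week: float, weeks_delayed: float) -> float:
    return value_per_week * weeks_delayed

delay_cost = cost_of_delay(50_000, 12)  # a $50k/week feature stuck for a quarter
outsourcing_savings = 200_000           # supposed annual savings from outsourcing
print(f"cost of delay ${delay_cost:,.0f} vs savings ${outsourcing_savings:,.0f}")
# cost of delay $600,000 vs savings $200,000
```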


You've probably seen this, right? You're probably waiting for some dope feature 'cause they're not done with the project or the release. That sucks. Okay? Slow and overly cautious is also bad. This is where Professor Nicole gets all scoldy with people. I'm sure you've heard of someone who thinks that deploying twice a year is still a good idea, because we're gonna be real careful, we're gonna do lots of testing, we're gonna do tons of quality control, everything's gonna be amazing. Who here knows of someone who's done this?


Have you got a friend?


We have friends in other organizations. Super. Not us, it's fine, not us, it's not me. I heard of a person. I know a guy. We did additional analysis and I found this group. They're adorable. They mean well. It's fine, except it's not fine. Here's the thing: they deploy between once a month and once every six months. Lead time is between one and six months. Here's the thing: their change fail rate is actually better than low performers'. They do see evidence of this. This is great, I'm real happy for them. Also, when they go down, they go down hard. They're down for between one and six months, right? Because their blast radius is massive.


So it takes them months to restore. It


Takes them months to come back. Blast radius is huge. They can't debug it, they don't know what to do. See, their customers might be able to access something, but how long does it take to restore service in the back? Data is down, systems are down, infrastructure is down, everyone's putting out fires forever. It's a bad idea.


And a great example of this is when you have security issues where someone's broken into your system. It could take months for you to do all the analysis and triage and forensics you need to actually work out what's going on before you can fully restore service. For me, security is one of the biggest use cases for what we're talking about: the ability to deploy with speed and stability. When there's a CVE announced, how long would it take you to identify all the systems impacted by that CVE, apply the patches, and get those patches deployed into production? That's a key use case for short lead times and reliable releases. If you cannot repeatably and reliably release changes, you can't respond quickly to security issues, and that's a huge risk.


Also, who here's heard of, right? Or Sony, like this is a thing. Okay? And 5% of teams hit this, by the way. This group has the highest use of outsourcing of all groups, most highly correlated. When we look at outsourcing in the diagram, it's negatively predictive of software delivery performance, okay? Has a negative impact. Quickly: we can influence culture. So often people ask us what this looks like. I wanna quickly win DevOps Bingo <laugh>. We know that bad practices hurt. Right, now what do I mean by culture? If anyone's been to one of our talks, this should be familiar.


Yeah, we've been talking about this for five years. But if you haven't seen it, this is a model created by Dr. Ron Westrum, a sociologist studying safety outcomes in healthcare and aviation. He has a typology which you can use to look at organizational culture. How do we deal with cooperation between different teams and within teams? How do we deal with people who bring us bad news? Do we shoot people who bring us bad news? Do we ignore them? Or do we actually train people to bring us bad news as soon as possible, so that we can react to it before we have catastrophic or cascading outages or failures? Are responsibilities ignored, because we'll get into trouble if something goes wrong and we don't want responsibilities? Are they defined narrowly, so we know who to blame when things go wrong? Or do we all share risks, because we know that we succeed or fail as a team?


Do we encourage or discourage bridging between departments, between different parts of the organization? And then the two critical things: how do we deal with failure, and how do we deal with novelty? And these two things are connected. In organizations where failure leads to punishment, no one will ever innovate, because by definition innovation is doing things that have not been done before. If you know that if something goes wrong you're gonna be punished, you will not innovate, which is why psychological safety is so important if you want a team which will innovate. So if you're on the left, and you probably know where you are, and you wanna move to the right, how do we do that? Well, that's what the practices we talked about do. We've shown that those technical, lean management, and product management practices all positively impact culture. So that's how you change culture.


And that's really interesting, because we've been telling people that for years, and they're like, no, what else can I do? I don't just wanna make smart investments in tech and process, I know; what else can I do? Okay, so we investigated that this year. The first thing we dug into is what can leaders do, right? What can we do? A couple of things we can do: we can give our teams autonomy. We can set targets and outcomes, and then we can give our teams autonomy to decide how to achieve those outcomes. What we found is that, similar to other environments and contexts, this works in technology as well. By giving our teams autonomy, it then contributes to both voice and trust. What does that mean? It contributes to voice: our teams feel safer to speak out about what's working and what's not working, both in the tech and in the teams. And it makes them feel more trust in their leaders. Those two things then in turn contribute to that culture that Jez just described. And in turn, it contributes to software delivery performance and organizational performance. And


Just one little note on that: autonomy also means that teams are involved in setting those goals and targets. Goals and targets that are imposed top-down can be problematic if the teams are like, well, that's completely unachievable. So autonomy includes teams being involved in setting the goals and targets that they're measured by. The other thing we looked at is retrospectives. These are sometimes called learning reviews; in the agile movement we talk about retrospectives, and we also talk about blameless postmortems. And the idea is reflecting on what we've done so we can learn and get better in the future, in a way that's focused on system improvement, not blaming people, basically. So we find that instituting retrospectives and making sure we're learning from them positively predicts culture. It also impacts a climate for learning, which is essentially creating an organization where learning is treated as an important investment that the organization will invest resources into, not as something which is, well, you should be doing that on the weekends on your own time; make sure you're working really hard to get your story points this week. So retrospectives predict climate for learning, and both retrospectives and climate for learning predict organizational culture.


By the way, climate for learning is a nice revalidation from the 2014 study. And we have found in our work at DORA, our assessment work, that this is particularly useful and impactful, because we work in technology; things are moving and changing and racing ahead so quickly that when organizations can leverage their climate for learning, their transformations are much more successful. Now, quickly, in the minute and a half we have left, some of our other favorite data findings. Change advisory boards are useless,




Asterisk, asterisk: they're good as notification systems. They're not good for approvals. It's actually negatively correlated with stability. So if you think you're helping, you're not, sorry. Things like lightweight approval processes and peer review are super helpful.


So like peer code reviews, for example, or pair programming. And the other thing I would add about change advisory boards: we think they should move to a governance function. Thinking about how we can create more effective change management processes should be about governance, not inspecting individual changes, which is a terrible idea.


Absolutely. Industry doesn't matter. I took a much more rigorous approach to this analysis this year, and I know a few people out there are thinking, oh, but that won't work for me, I'm highly regulated, and then somebody else is thinking, but I'm even more highly regulated. I love you; also, no excuses, because there is no statistical significance for any industry, and I tested over a dozen industries. There's no statistically significant difference in software delivery performance across any industry. And then finally, integration times and branch lifetimes. So


The most controversial thing I've talked about for 10 years is saying that it's better to develop off trunk than on long-lived feature branches. The data has borne this out in many different ways. But basically, you can work on feature branches, you can use GitHub branches or pull requests; the crucial thing is that you are merging into a shared trunk or master at least every day, and ideally within hours rather than days at a time.
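One way to keep yourself honest about this is to audit branch lifetimes. Here is a toy version in Python; the branch data is invented, and the one-day threshold simply mirrors the "at least every day" guidance:

```python
# Flag branches that lived longer than a day before merging to trunk.
from datetime import datetime, timedelta

branches = [
    ("feature/login",  datetime(2018, 10, 1, 9), datetime(2018, 10, 1, 16)),
    ("feature/search", datetime(2018, 10, 1, 9), datetime(2018, 10, 5, 17)),
]

def long_lived(branches, max_age=timedelta(days=1)):
    """Return names of branches whose open-to-merge age exceeds max_age."""
    return [name for name, opened, merged in branches if merged - opened > max_age]

print(long_lived(branches))  # ['feature/search']
```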


Yes, okay. TL;DR for anyone who is busy on Facebook or Twitter instead: technology matters; cloud works if you're actually doing cloud; the good and the bad; and we can influence culture. Okay? If you want the latest goodness, 'cause we did not even cover it all, you can get that in this year's Accelerate State of DevOps Report.


Go to DevOps, go to research, go here or go to bitly slash 2018 slash DevOps report.


Or if that's way too hard, quick, take a picture of this,


Get a bunch of


Free stuff, and my magic will send you all sorts of good stuff.


Nicole, with the subject DevOps,


Thank you so much and we will see you all tonight.