Presentation by Dr. Stephen Magill & Gene Kim (Europe 2021)


(No slides available)


Gene Kim

Founder and Author, IT Revolution


Dr. Stephen Magill

Vice President, Product Innovation, Sonatype



Hi, I'm Stephen Magill. I've been doing academic research in software analysis, security, and programming languages for more than 15 years, first as part of my PhD work at Carnegie Mellon, and then at other universities and industry research labs. Over the last few years, I've gotten more and more interested in the practice of software: open-source development practices, how enterprises approach software and use open source, and how best to contribute to these communities by improving tools and practices. So I'm going to be talking about some really cool research that Gene and I have done in that space.


Awesome. My name is Gene Kim, and I've been studying high-performing technology organizations for 21 years. One of the most fun projects I've gotten to work on is the State of DevOps Report, with Dr. Nicole Forsgren and Jez Humble. That was a cross-population study that spanned over 36,000 respondents over six years, and it gave us a great glimpse of what high performance looks like and what behaviors create it.


In this work, we're going to be looking at open source, specifically usage of open source, and it's probably no surprise to hear that almost everyone is using it. Nat Friedman, CEO of GitHub, has said that 99% of new software projects include open source components. That means they inherit any open source vulnerabilities. So if you're out there using open source, you want to be remediating those vulnerabilities as they arise, but even better, you'd like to stay ahead of the vulnerability curve and avoid being in the position of having to remediate at all. How do you do that? Well, consider that when you use open source, you're not just importing some code, you're adding developers to your team. Those open source project contributors become contributors to your software. This increases agility, and perhaps it imports great security practices. Or does it? Will it instead hold you back and become a source of vulnerability? What are these extra developers bringing to your team? That's what we dove into: what practices lead to good security outcomes, and what should you look for when choosing open source components?


And by the way, Stephen and I were sitting in a GitHub Universe session when we heard Nat Friedman say that, and we thought: when you invite these developers into your house, are they going to help you build your kitchen, or are they going to trash it? And how can you tell? I mentioned the State of DevOps research; it was such a fun study because it linked cultural aspects with technical practices and architecture. In terms of setup: when some friends reached out to me a few years ago about a potential research project, with the chance to look at data from the Maven ecosystem, I jumped at it, because it was a way to look at what update behavior looks like in the wild.


For those of you who don't know what Maven is: Maven is to Java what npm is to JavaScript, what PyPI is to Python, and what RubyGems is to Ruby. As someone who benefits so much from the Java ecosystem and Maven, because my favorite programming language is Clojure, this was an opportunity that was irresistible. When this came up and I looked at the data set that would be made available to us, I immediately reached out to my friend Dr. Stephen Magill and asked if he'd be willing to collaborate, jump into the data, and see what we could find.


And while I spend most of my time in the Haskell ecosystem, that certainly gives me an appreciation for functional languages and the role that Scala and Clojure play as functional languages targeting the JVM. So it was super exciting to dive into this space and see what we could find in this wealth of data. The key questions we asked were really twofold: some about the open-source side and some about the enterprise side. On the open source side, we wanted to know how these projects manage their own supply chains. The nice thing about open source is that all the dependencies, including the transitive ones, are open source as well, so you have a great amount of data that you can aggregate, label, and analyze. That was our focus in the first part of this work.


And we wanted to look not just at summary statistics, like how many projects are out there and how many vulnerabilities, but at deeper analysis that describes the behavioral patterns we see and ultimately leads to guidance and insight for people who consume open source. Then, on the consumption side, we wanted to look at how enterprises manage their open source supply chains: what governance tools and practices they employ, and what impact those have on security and productivity. In the course of looking at both sides, we observed some really interesting facts about the value of working together, of having enterprises deeply involved in open source and really welcoming community contributions back into corporate open source projects. As Gene said, this all started two years ago, when we learned about this data and what you could learn from looking at Maven Central and the Maven ecosystem.


And that really led us to do this deep dive on update and security practices, which turned into chapter three of the 2019 State of the Software Supply Chain report. We started that work in 2018 and published the report in 2019. We had such a great time with that research that we teamed up with Sonatype again to do further analysis for this year's report, the 2020 report, and this time we focused on the consumer side of the equation, looking at how enterprises manage those supply chains. But first, let me summarize the 2019 results. In that work, as we said, we looked at Maven Central. There's really an amazing amount of data out there: 310,000 components, and 4.2 million versions of those components, individual jar files.


Each has its own vulnerability history, its own dependency graph, and its own API changes with each version, which can cause problems. Almost 7,000 of those also have associated GitHub repos, which gives us a whole other set of data that we can correlate against, including metadata on team size and commit frequency; we can look at individual commits and the history of code changes. Of these components, about 9% had a vulnerability associated with them directly. But when you look at the dependency chain and those transitive dependencies, that increases to 47% of components having some sort of vulnerability that impacts them during the period when that component is the current version.
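The jump from 9% directly vulnerable to 47% transitively affected is just graph reachability over the dependency graph. Here is a toy sketch of that computation (our own illustration with made-up component names, not the report's actual pipeline):

```python
def affected_components(deps, vulnerable):
    """Components affected directly or via any transitive dependency.

    deps: dict mapping each component to the list of components it
          depends on; vulnerable: set of directly vulnerable components.
    """
    affected = set()
    for root in deps:
        # Depth-first search from each component through its dependency chain.
        stack, seen = [root], set()
        while stack:
            c = stack.pop()
            if c in seen:
                continue  # guards against dependency cycles
            seen.add(c)
            if c in vulnerable:
                affected.add(root)
                break
            stack.extend(deps.get(c, []))
    return affected


# Toy example: "app" depends on "a", which depends on the vulnerable "b".
deps = {"app": ["a"], "a": ["b"], "b": [], "c": []}
print(affected_components(deps, {"b"}))  # three of the four components are affected
```

The same asymmetry shows up here: only one component is directly vulnerable, but most of the graph inherits the problem through dependency chains.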


So this is how we wrote up the findings, and this went into, as Stephen mentioned, chapter three of the State of the Software Supply Chain report. I really want to thank Dr. Magill as well as the amazing team at Sonatype who helped make this happen: Bruce Mayhew, Gazi Mahmud, Kevin Whitton, Derek Weeks, and Matt Howard. It was so much fun to be able to take a look into the components that so many of us use every day across the Maven ecosystem.


One of the things we looked at the first year, and are pulling into the second year, is this: we took the State of DevOps Report's IT performance metrics and started to think about what the analogs might be in the open-source community. The IT performance metrics are, of course, deployment frequency, code deployment lead time, mean time to repair, and change success rate. The ones we linked them to were, first, release frequency: how often do open source projects release a new version? For organizational performance, we used popularity, as measured by the number of GitHub stars, the number of forks, or the number of downloads per day within Maven Central. And for mean time to repair, we looked at how long it took to remediate vulnerabilities once they're disclosed, say through a CVE. This shaped our thinking about the dependent variables; in other words, we lay out all the independent variables and see which ones predict better performance.
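Those two open-source analogs can be made concrete. A minimal sketch of how release frequency and remediation time might be computed from release and disclosure dates (simplified definitions of our own, not the study's exact methodology):

```python
from datetime import date


def releases_per_year(release_dates):
    """Release frequency: releases per year over the project's observed span."""
    release_dates = sorted(release_dates)
    span_days = (release_dates[-1] - release_dates[0]).days
    return len(release_dates) * 365.25 / span_days


def mean_time_to_remediate(vulns):
    """Average days from vulnerability disclosure to the fixed release.

    vulns: list of (disclosed_date, fixed_release_date) pairs.
    """
    gaps = [(fixed - disclosed).days for disclosed, fixed in vulns]
    return sum(gaps) / len(gaps)


print(releases_per_year([date(2019, 1, 1), date(2019, 7, 1), date(2020, 1, 1)]))
print(mean_time_to_remediate([(date(2019, 1, 1), date(2019, 1, 31))]))
```

Under definitions like these, a project that ships monthly and fixes disclosed vulnerabilities within weeks scores far better on both axes than one that releases yearly.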


Yeah. And so we structured our research around a number of hypotheses, and the first of these was really that "is faster better?" question. We find that's the case in the enterprise, in the State of DevOps and Accelerate research; would we find the same thing in the open source community? And we did. We found that projects that release more frequently were two and a half times more popular in general. So if people using your open-source software is your goal, which I think it is for most projects, frequent releasers are having better outcomes in that space. They also had larger development teams, so more contributors and more active projects; they were more likely to be foundation-supported; and they were more secure. The fastest 20% of projects by release frequency also update their dependencies 18 times faster than other projects, and that update cadence correlates strongly with security outcomes. So a range of great outcomes result from just releasing more frequently and moving faster on the open-source side.


Awesome. Hypothesis number two was the notion that projects that update dependencies more often are generally more secure, and we found that was indeed the case. The most secure projects tended to update 1.5 times more frequently, they had 530 times faster remediation times, and they were 173 times less likely to have out-of-date dependencies. That means they were not only updating themselves, but making sure all of their dependencies were up to date. A great example of the benefit of staying current is the PrimeFaces vulnerability that came out in 2017. It turns out that the issue had actually been fixed years before, so if you had just stayed current, you never would have had this vulnerability. Those who didn't suddenly found themselves taken over by Bitcoin miners, and had to do drastic things to update those dependencies in production. I think this validates the notion, which comes from Jeremy Long, the founder of the OWASP Dependency-Check project, that one of the best ways to stay secure is simply to stay up to date on your dependencies. And that means updating them as part of daily work.


As part of that staying-up-to-date process, you want to make sure you're pulling in dependencies that are themselves good about staying up to date. So what do you look for? How can you find those projects? One factor, as I said, is general update frequency: how often they release. Update behavior and remediation behavior tend to track each other, and release frequency is correlated with both, so really good outcomes come from projects that release more frequently. Further, projects with larger development teams and higher code commit rates are generally better at keeping their dependencies up to date: the top 20% of projects by team size had 50% faster update times and released 2.6 times more frequently. They were also more likely to be foundation-supported, which suggests that foundation support is an important aspect influencing project quality, and something you can look for.


Okay, hypothesis number three, which seemed pretty obvious: we thought that projects with fewer dependencies would have an easier time staying up to date. It makes sense, right? A smaller surface area should just be easier. And that turned out not to be true. It turns out that components with more dependencies have better mean time to update (MTTU); in other words, they updated their dependencies faster. And Stephen just pointed out one very startling observation in the last finding, which was that the most popular, most secure dependencies have more developers, which means there's actually a link between the number of dependencies and the number of active developers, as measured by the number of people making commits in a given month. That brings up a question: does an increasing number of dependencies cause you to have more developers, or is it the other way around, where increasing the number of developers means they tend to pull in more dependencies? Either way, it was very surprising: those with more dependencies were the ones with better MTTU.


And even more surprising, I think our most surprising finding from that year's research, was hypothesis four: we expected popular projects to generally be better about staying up to date and more secure on average. We found that was not the case at all. We found a lot of really strong, statistically significant differences between various factors and the impact they had on security and update performance; popularity was one of the few that did not matter. So if you take one thing away from this part of the talk, let it be that you can't just lean on popularity as a proxy for quality.


And maybe one unsettling thing: in my own personal experience, whenever I want to solve a problem and look for a component to solve it, I generally use this heuristic: I look for the project with the most stars and the most forks. It turns out that's actually a very bad heuristic. So the question becomes: what heuristics should we use instead when choosing open-source components?


Right. And release frequency is one of the key things you can easily access and evaluate when you're looking at projects, because a popular project that's not releasing frequently is falling behind in some respect when it comes to its transitive dependencies. So, given the importance of updating and staying up to date, why is everyone not just completely up to date? Why are all dependencies not brought to the current version on every release? We looked into this.


Yeah. In fact, one of the best papers describing just how problematic staying up to date can be comes from a group of researchers in Brazil. They monitored 400 open-source projects for 116 days, and during that period they detected 282 potentially breaking changes. We did the math on this: the breakage rate is sufficiently high that, given enough time, your probability of hitting some breaking change approaches one hundred percent. I think this resonates with anyone who's had the experience of being afraid to update their dependencies, because it suggests you're afraid for a reason: updating dependencies really does often break your code. And I think this explains one reason why people don't update their dependencies in their daily work: it's potentially problematic.
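To see why the probability approaches 100%, treat those figures (282 breaking changes across 400 projects over 116 days) as a per-project daily breakage rate and assume breakages are independent. This is a back-of-the-envelope estimate of our own, not the paper's model:

```python
# 282 breaking changes / (400 projects * 116 days):
# roughly 0.006 breaking changes per project per day.
RATE = 282 / (400 * 116)


def p_at_least_one_break(days, n_deps=1):
    """Probability of at least one breaking change over `days` days
    across `n_deps` independent dependencies."""
    return 1 - (1 - RATE) ** (days * n_deps)


print(round(p_at_least_one_break(365), 2))      # ~0.89 for one dependency over a year
print(round(p_at_least_one_break(365, 10), 4))  # effectively certain with ten dependencies
```

One dependency gives roughly a nine-in-ten chance of a breaking change within a year; with a realistic dependency count, breakage becomes a near certainty, which is exactly why "just stay current" feels scary without good practices around it.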


After we saw this startling data, one of the things we did was put together a survey to see if we could understand the psychographics of the higher performers who update their dependencies in their daily work. We found something astonishing: when we clustered respondents, there was a high-pain cluster and a low-pain cluster, that is, organizations that associated updating dependencies with high levels of pain, and those that did not. The high-pain cluster was three times as likely to strongly agree that updating is painful, while the low-pain cluster was 2.6 times less likely to consider patching painful, 10 times more likely to schedule updating dependencies as part of daily work, and six times more likely to strive to use the latest version.


Or the latest version minus one. They were also 11 times more likely to have some sort of process for adding dependencies, 10 times more likely to have some sort of process for removing problematic dependencies, and 12 times more likely to use automated tools to enforce policy around updates. This was a startling finding: when you see multiples like this, you know there's something very different between the high-pain and low-pain clusters, very much like we saw in the State of DevOps research. So this very much focused where we wanted to go in this year's study.


One of the other things that guided and excited us was looking at the data to see what migration behaviors are from version to version. This chart is for Hibernate. Each arc shows a migration from a source version to a destination version, and what you see is that there are almost two distinct populations, one on the left and one on the right, and they almost never meet. In other words, the extent of the changes is so vast that users tend to stay on one island or the other. This is what you don't want to see, because to stay up to date you have to make a very painful change; you have to somehow jump the chasm. I think this suggests that some people stuck on the island on the left will never make it to the right, and once that older line stops receiving security patches, they'll be left behind forever.


This made us really come to appreciate that there's a difference between components: some of the update friction is due to component choice and the choices certain libraries make about when and how they migrate from one API to another. In contrast to the Hibernate example, this is the Spring Framework, a very different looking graph. Here there are not two separate populations; from whatever past version people are at, they are able to successfully migrate to the latest version. You can also see a higher density of arcs landing on the latest or almost-latest versions. Clearly the framework is structured such that it's easier to update, and the community as a whole tries to stay on the current version of the software. You do see some red arcs there; those are people stuck in vulnerable versions of the library, making progress toward the most recent version but not all the way there.


And I think the red arcs indicate migrations where the target ended up being a vulnerable component. So they updated, but they updated to a version that was itself insecure.


Yeah. Maybe they should have jumped a little farther.


And here's one with a different archetype: this is Joda-Time. This one shows an almost homogeneous set, where people can upgrade seemingly from any version to any other version. Our hypothesis is that these versions have very few breaking changes, so you can basically go to any version and it won't break functionality. When I think about what it would look like for organizations to easily, quickly, and reliably switch from whatever version they're on to the latest version, this is the sort of distribution we'd like to see. This is something we didn't have the chance to explore as fully as we wanted to, but it will hopefully guide future research.
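Underneath those arc diagrams is just a tally of observed (source version, destination version) upgrade pairs. A minimal sketch of the kind of summary behind them, using made-up version data of our own:

```python
from collections import Counter


def migration_summary(migrations, latest):
    """Summarize upgrade arcs.

    migrations: list of (from_version, to_version) pairs observed in the wild;
    latest: the newest released version of the component.
    """
    arcs = Counter(migrations)                 # how many users made each jump
    total = sum(arcs.values())
    to_latest = sum(n for (_, dst), n in arcs.items() if dst == latest)
    return {"total": total, "share_to_latest": to_latest / total}


moves = [("1.0", "1.1"), ("1.0", "2.0"), ("1.1", "2.0"), ("1.1", "2.0")]
print(migration_summary(moves, latest="2.0"))  # three of the four arcs land on 2.0
```

A Spring-like or Joda-Time-like ecosystem shows a high share of arcs landing on the latest version from anywhere; a Hibernate-like "two islands" pattern shows up as two disjoint sets of source and destination versions that never cross.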


Yeah, and I think we see a bit of the role security vulnerabilities play in pushing update behavior here too. Here there are no vulnerable versions involved, so update behavior is driven more by just wanting to update or needing new features, and there's a more uniform distribution of versions in use; whereas in the Spring Framework, there are vulnerabilities against those older versions that are pushing everyone forward. So next: in this year's 2020 report, having looked a lot at the open source side of the equation, we wanted to focus on the consumer side, really the enterprise usage of open source, and see what sorts of practices are associated with better security, compliance, and productivity outcomes. We surveyed over 500 developers, 528 to be exact, from a range of companies across all industries, asking about their security and performance outcomes as well as their DevOps practices, to see which of those really contribute to high achievement in these areas. We asked questions like: Do you centralize your CI infrastructure? Do you automate software governance? Do you contribute to open source? Are you confident in the security of your deployed applications?


And the hidden question we were trying to answer was: can you have it all? Can you be more productive and more secure at the same time? Can you simultaneously advance the objectives of security and the objectives of development?


And the answer is yes. We found that not only can you achieve both good risk management outcomes and high productivity simultaneously, but a remarkably large percentage of the companies we surveyed are managing to do just that. To see that visually, I want to describe this diagram, which is really the centerpiece of this year's report. It shows all the companies we surveyed plotted on a 2D grid: on the x axis, their self-reported level of developer productivity, and on the vertical axis, their risk management outcomes. In the upper right, you can see a group of companies that performs extremely highly on both dimensions, and not surprisingly, these companies tend to adopt core DevOps principles of automation and consistency. In the lower left are the companies at the opposite end of the spectrum; they have poor productivity outcomes and poor risk management scores.


These might best be described as early in their DevOps journey; there's a lot they can still gain by adopting better practices. Then we have the stereotypical risk-averse companies in the upper left, which focus solely on risk mitigation and achieve it via mostly manual and inefficient workflows. They attain those good security outcomes, but at the cost of productivity. And then there are the move-fast-and-break-things companies in the lower right, prioritizing productivity above all else, which often comes at the cost of risk management. All right, so what are the different colors here? This is actually the coolest thing. The points are colored not just by the quadrant they're in, although it kind of looks like that. We didn't directly measure productivity and risk management; we asked 11 questions about various aspects of risk management and productivity, clustered companies in that 11-dimensional space, and then projected down onto these two dimensions. I think it's just amazing: it shows the importance of these two meta-dimensions, that everything collapses into really just a spectrum of productivity and a spectrum of security and compliance.


And by the way, the question that always comes up is: how did we pick the clusters? As Stephen just touched on, we didn't choose them by hand. You plot the companies in 11-dimensional space, and then you create centroids that minimize the distance from each point to its cluster's centroid. Then, as Stephen mentioned, it's projected into 2D. When we saw this graph, it was one of those exciting aha moments, because it just beautifully explains the behaviors we see.
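What Gene describes is essentially k-means clustering in the 11-dimensional response space, followed by a 2D projection for plotting. A minimal NumPy sketch on stand-in random data (the report's exact algorithm and survey data are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(528, 11))      # stand-in: 528 respondents, 11 survey questions


def kmeans(X, k, iters=50, seed=0):
    """Minimal k-means: assign each row of X to one of k centroids."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        # Recompute centroids as cluster means (keep old center if cluster empties).
        centers = np.array([X[labels == j].mean(0) if (labels == j).any()
                            else centers[j] for j in range(k)])
    return labels


labels = kmeans(X, k=4)                      # cluster in the full 11-D space...
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
coords = Xc @ Vt[:2].T                       # ...then project to 2-D for the plot
```

The key point mirrors the talk: the clustering happens in the full 11-dimensional space, and the 2D coordinates exist only for visualization, so well-separated quadrants in the plot reflect genuine structure in the high-dimensional data rather than an artifact of the projection.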


Yeah. And the other thing we can do now is dig deeper. Not only are these clusters well-defined and segmented, but we can compare across clusters and ask: what are the differences in practices from the low performers to the high performers within these groups? So we're going to talk about that now, first focusing on the high performers versus the security-first group. They're both achieving great security outcomes, but what's the difference? Why is one achieving substantially better productivity than the other?


Very much like we did with the State of DevOps Report, we then started to measure the differences numerically, and I think this is so exciting. When you look at the high performers against security-first: they're 50 times more likely to be using some sort of software composition analysis tool, 77% more likely to automate the approval, management, and analysis of dependencies, and a third more likely to enforce governance policies within their continuous integration system, as opposed to some sort of manual process. I'll add a little footnote that centralized versus distributed CI didn't turn out to be a differentiator; that was something that didn't pan out. They're also 51 times more likely to maintain some sort of centralized bill of materials, and 96% more likely to scan essentially all deployed artifacts for security and license compliance. I love that. The goal of science, they say, is to explain the most observable phenomena with the fewest principles, to confirm deeply held intuitions, and to reveal surprising insights, and I think this absolutely does that. Stephen, does that resonate with you?


Yeah, that's right. I think it really speaks to the importance of automation and uniformity, of establishing consistent workflows, in achieving these great outcomes productively.


Maybe one more bit of color commentary: to me, what this says is that security objectives are being integrated into developers' daily work, integrated into tooling and automation, and that there's obviously some sort of centralization going on in terms of consolidating the best knowledge of how to do that.


So now I want to compare the high performers and the low performers. These are the two extreme points in terms of the clustering, and this is really where the differences get stark.


All right, the stats here: 15 times more frequent deployments, 26 times faster detection that vulnerabilities exist, 26 times faster remediation of those vulnerabilities, and six times more likely to... can you read that for me?


Yeah: to have developers be productive faster when switching teams, because there's more uniformity in the software development process.


Right. This was our way of exploring whether teams are all standardizing, with a high degree of portability, or whether each is creating its own bespoke ways of doing things, which makes it very difficult for developers to switch between teams. And then 26 times faster approvals to actually use a new open source dependency; so the notion is that you can create processes for adding new dependencies without them being burdensome and slow.


So the groups clearly have different focus areas, but I want to zoom back out and look again at the full dataset. This graph shows the centroids of each group, and when you look at just the centroids, you see that on average the security-first cluster is not only more secure than the low performers but also slightly more productive. The productivity-first group is on average still less productive than the high performers, but it's achieving slightly better security outcomes than the low performers. So each group is a sort of stepping stone toward that goal of high productivity and good risk management, and you can get there by starting with security or by starting with productivity. But I think everyone wants to trend into that upper-right quadrant.


Well, this is such a cool treatment of the clustering data. I've never actually seen contour maps used to show where groups reside in the clustering space. That's awesome.


And then as a final bonus, we looked at something we called open source enlightenment. This was a subset of the questions that touched on various aspects of not just usage of open source, but support for and involvement in the open source community: things like executive support for contributions back to open source. We combined those into one factor called open source enlightenment, and we want to talk a bit about what we found about companies with high levels of it. One thing is that this support for and involvement in open source leads to substantially higher job satisfaction, as well as better security outcomes. The job satisfaction part makes sense: people get that community support, and these are positive organizations.


They clearly have great principles and care about their employees' engagement in the broader community. But the security outcomes were kind of surprising, although on reflection they make sense: when you're deeply involved in an open source project that you're using, you'll be aware more quickly when vulnerabilities are discovered there, and when you need to move to a new version of that project, it's probably easier for you to do so because you're more familiar with the code base. So it really does pay not just to make use of open source, but to get involved in it.


Right. And you probably have a better sensibility of what's coming because you're probably more aware of the roadmap. One more thing that we found very surprising here: we also thought that one of the sort of counter-markers of performance would be the extent to which organizations are having to maintain an internal fork of an open source project. And that actually didn't pan out. I think we sort of misworded the question. The intent was to find people who are stuck on an island and having to backport security patches. But of course, in order to contribute, you have to be maintaining an internal fork at some point in time, even if only for a couple of days.


Yeah. Let us know, come tell us how you engage with open source and how you manage that in your internal workflows, because I think that would help us inform next year's survey for sure.


Awesome. So I think the summary finding is that you can actually be more productive and be more secure at the same time. So much of DevOps is those people who believe that what's good for dev is also good for operations and vice versa, and I think what we're finding here is evidence that what is good for security can be good for development and vice versa. So add these two: faster, more productive developers, more secure components being used in production, and happier developers as well. And we know that happiness is also strongly linked with organizational performance. Yeah.


So if you're interested in reading more, please go download the report. You can actually even send an email to sscr@muse.dev, where I've got an auto-response set up that will email you a link to the report. And if you have thoughts or questions, please email there as well, or email me or Gene. And thanks again to the Sonatype team, including Bruce Mayhew, Gauzy Mohammad, Derek Weeks, and Matt Howard. Great working with them, and just a ton of help in pulling all this data together.


What other help are we looking for, Stephen?


Yeah. So, any additional hypotheses to test, and those can come in the form of anecdotes, right? Often we learn a lot from anecdotes, and that informs questions that we can then ask in the survey to get a broad sense of whether these patterns hold more generally. And then stories about how you choose components: what components are easy to update, what are hard, and how do you take that into account when you choose new components? We really want to dive deeper into what makes some libraries much easier to stay up to date with versus others.


Absolutely. Describing kind of what intuition and what heuristics you use: given an array, a broad choice of components to use, which ones do you choose and why? We would love to know that. And by the way, if you're interested in any of these topics, you will love the closing keynote of the conference, which is Eileen Uchitelle. She's a Principal Software Engineer at GitHub, and she describes the amazing heroic journey of upgrading Rails at GitHub, the seven-year journey to go from Rails 3 to Rails 5. It's an incredible story, and she has some phenomenal lessons on leadership. So it's phenomenal for so many reasons. Stephen?


Thank you.


Thanks so much.