Las Vegas 2019

A Data-Driven Look at Open Source Software Supply Chains

In a yearlong collaboration with Gene Kim and Dr. Stephen Magill, we objectively examined and empirically documented software release patterns and cybersecurity hygiene practices across 54,000 commercial development teams and open source projects.

In this session, we will present evidence on the outcomes of that research, highlighting organizational and technology practices that enable exemplar open source teams to deliver 50% more commits, release new code 2.4x faster, and remediate security vulnerabilities 2.9x faster, all while delivering a level of value that makes them standouts in terms of popularity and adoption.


Dr. Stephen Magill

CEO, MuseDev


Derek Weeks

Vice President, Sonatype

Transcript

00:00:02

Okay, we're live. Hi everyone, and welcome this afternoon. I am Derek Weeks. I'm a vice president at Sonatype, and I'm also co-founder of the All Day DevOps conference. I'm joined today by Dr. Stephen Magill. He is CEO of MuseDev and principal scientist at Galois. We are going to talk to you about some research that we've been collaborating on with some others over the last 10 months or so, nearly a year by now. So: an organization's journey to excellence begins once it ceases to sacrifice quality for speed. Now wait, we're at a DevOps conference, right? We talk about speed all the time, and we know that teams with high-velocity, exemplary DevOps practices deploy 67 times more frequently, according to DORA and the State of DevOps report.

00:01:07

We know they have seven times lower change failure rates. We know that they are 2,600 times faster to recover when failures happen within their organizations. And we also know from the State of DevOps report that these organizations are 1.7 times more likely to make extensive use of open source within their environments. Over the past five years, almost six years now, I've been studying the use of open source components within software development around the world, across tens of thousands of organizations. One of the things that we saw last year, and we've documented this in a report we're going to share with you, is that in the Java development realm alone, about 10 million Java developers last year consumed 146 billion download requests of Java open source components. If you're a JavaScript developer, or you have JavaScript developers in house: six and a half million JavaScript developers around the world consume an average of 60,000 components a year, and as a gross population they are downloading over 11 billion npm packages a week.

00:02:28

The consumption of open source is prevalent in DevOps practices as well as non-DevOps practices. What this means is, when we look at the average enterprise, like your own: we studied over 12,000 enterprises using open source within their development practices, in Java alone. The average organization is downloading 313,000 Java open source components on an annual basis. They are downloading these from over 2,700 different suppliers, that is, open source projects within the community. These are suppliers they are relying upon to write code for them, sourced externally, because we don't want to write all of that code ourselves. And within those downloads, we're relying on over 8,200 different versions, individual releases, of those projects. But we know not all of those downloads are created equal. In fact, almost one in 11 of those downloads had a known security vulnerability at the time it was downloaded into your enterprise.

00:03:45

Now, what does this really mean, given that we're consuming all of this open source? Well, it means that 85% of the applications we are building are composed of code from these external suppliers, code that we didn't write ourselves, and that is making us more efficient and much faster in our development practices. It was this knowledge of the massive consumption, the speed of development, and the efficiency that's happening that brought together Stephen, myself, Gene Kim, Bruce Mayhew, Ghazi Muhammad, and a number of other security researchers and data scientists to pull together the 2019 State of the Software Supply Chain report. We spent almost a year collaborating on this research, and we're going to share a lot of it with you today to give you a sense of what research it was.

00:04:50

We walked through a software supply chain, which every one of you has within your organization or relies upon: open source projects contribute code to internet-based warehouses, that code is then downloaded from those warehouses into your software development teams, and it is built into finished goods, your software applications. So we looked at over 36,000 open source projects for this research. We looked at over 3.7 million releases across those components. We also studied over 12,000 organizations developing applications using open source components. We surveyed over 6,000 developers this year for this study, and we evaluated over 86,000 applications that were built using open source components, to get a better idea of who the best open source projects are and who the best suppliers of this code are. So I'm going to hand off to Stephen Magill now, who is going to walk you through the first part of the research from the report, and then I'm going to come back and cover the second part. Thank you.

00:06:06

Right. So I'm going to walk through the analysis that we did of the open source ecosystem and some of the results that we found there. The first thing I want to talk about is this notion of "faster is better." We're at the DevOps Enterprise Summit, and as Derek mentioned, we have all these stories about how speed, release velocity, and deployment velocity are tied up with a lot of positive outcomes from a technology and business standpoint. We've heard that in anecdotes during the experience reports at the conference this year, and we've also heard it empirically and rigorously validated by Dr. Nicole Forsgren's research with her team. So this faster-is-better premise holds in the enterprise, but does it hold in open source?

00:06:54

Can we find the same sort of signal, the same connection between velocity and positive outcomes, in the open source world? There's no reason to think this would necessarily be the case, right? The enterprise and the open source community are two very different worlds. On the enterprise side, we can achieve multiple deploys per day. In the open source world, it's more about version releases, using semantic versioning to communicate API changes and pushing out new code on a several-month timescale. On the enterprise side, you have a consistent group of developers; there's some turnover, and people switch teams and so forth, but by and large you have a predictable set of developers. It's much more fluid on the open source side, with developers coming and going, and many contributing just a single code change.

00:07:43

On the enterprise side, our dev teams are well resourced, or, if that sounds ludicrous to you and you're snickering, at least predictably resourced. Maybe the budget's too low, but it'll probably be the same number or close next quarter. On the open source side, those resources are highly variable; it's much harder to do project planning when you don't know how many developers you're going to have next week. So: two different worlds. On the other hand, there are similar metrics on each side. We can actually find analogous attributes on the open source side that correspond to some of the things we're interested in and track on the enterprise side. Deployment frequency, this key metric on the enterprise side, corresponds in some sense to release frequency: how often are open source projects releasing new versions? The timescales and cadences are very different, but they're alike as concepts.

00:08:36

Similarly, when we talk about organizational performance metrics on the enterprise side, things like market share and profitability, I would argue the open source analog is popularity. Open source contributors contribute because they want to; they want their work to have an impact, they want to improve other developers' lives. That means getting their code used, which corresponds to popularity. On the enterprise side, mean time to restore is a key metric of how well the organization responds to incidents and downtime. On the open source side, a very similar situation occurs when a security vulnerability is reported and you have to push out a new version that mitigates it. It's the same sort of all-hands-on-deck, push-a-new-release-as-quickly-as-possible scenario that happens when you respond to an incident in the enterprise.

00:09:26

So to answer this faster-is-better question, we can look at these analogous attributes on the open source side, compare release frequency and popularity, and see what sort of connection there is. Going into this research, this was the first hypothesis we wanted to explore: do projects that release frequently have better outcomes? And we did find supporting evidence for this in the data. If you look at the top 20% of projects by release frequency, they are five times more popular on average, they have 79% more developers, and they're supported at greater rates by open source foundations. These are all statistically significant differences in those attributes. Note that I'm stating correlations here; we haven't looked into the causation aspect to see whether these projects are more popular because they have more developers, or attract more developers because they're more popular.

00:10:24

We don't know which is the leading or lagging indicator, but that's one connection that we found. We also looked at more security-relevant metrics, things like mean time to restore and time to remediate vulnerabilities, and I'm going to spend most of my time talking about those security-, update-, and responsiveness-relevant metrics. But first I want to say a little more, following on to what Derek said, about how we constructed our data set. We started with Java components that were published to Maven Central; that was the starting set. We then filtered out components that we didn't have enough data about to analyze in a productive way. For example, we're looking at things like update frequency, so a component has to have published an update, right?

00:11:12

If it only ever released one version, and that's the only version of that component, that's not helpful to us, so we filter those out. We also look just at components active over the last five years, because development trends, technologies, and tools have all changed over time, and we wanted to find correlations in the data that hold for the current development environment. We also filtered out components that just aren't part of the software supply chain: they don't use any open source libraries, and they're not themselves used by any other components, so they're isolated, not part of a dependency tree. We did all of that, selecting for what we needed to collect the attributes we wanted to analyze.
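The filtering steps just described can be sketched in code. This is a minimal illustration, not the study's actual pipeline; the `Component` record and its field names are invented for the example.

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record type; the field names are illustrative, not the report's schema.
@dataclass
class Component:
    name: str
    release_dates: list                              # dates of published versions
    dependencies: set = field(default_factory=set)   # components it uses
    dependents: set = field(default_factory=set)     # components that use it

def in_scope(c: Component, today: date = date(2019, 1, 1)) -> bool:
    """Apply the three filters described above."""
    # 1. Must have published at least one update (i.e., more than one version).
    if len(c.release_dates) < 2:
        return False
    # 2. Must have release activity within the last five years.
    if max(c.release_dates) < date(today.year - 5, today.month, today.day):
        return False
    # 3. Must participate in the supply chain: use or be used by something.
    if not c.dependencies and not c.dependents:
        return False
    return True
```

With a population in hand, the core set is then just `[c for c in population if in_scope(c)]`.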

00:11:58

And we get down to this core set of 36,000 components, and that really is what we focused on. For those 36,000 we had enough information to compute a variety of attributes. We looked at popularity, measured as the average number of downloads from Maven Central for that component each day. We looked at the size of the development team; development speed, meaning how often code was committed to these repositories; release speed; presence of CI; and so forth. And then a couple of security- and update-related metrics: security speed and update speed. Most of these we have for all 36,000. There are a couple that are based on GitHub metadata: size of team and development speed are both based on the GitHub statistics for a project.

00:12:49

So those we only have for projects that were hosted on GitHub; there were about 10,000 of those. All right, these last two metrics, security speed and update speed, are a little more complicated, but they're really the core attributes we studied here, so I want to spend a little time talking about how they're defined. Basically, they're both measures of how quickly a project responds, how responsive a group of developers is in various scenarios. One is more security oriented, and one is oriented toward just staying up to date. To demonstrate these, I've got a graph here of a typical set of software releases. We have three components, A, B, and C, and you can view time as marching along from left to right. Dependencies here are solid lines.

00:13:35

So C depends on A and B, and in particular version 2.2 of C depends on version 2.2 of A and version 2.2 of B. We have a couple of interesting events here, like this "vulnerability in B" event. This indicates that at some point in time, someone discovers a security vulnerability in component B. So there's some period of time where B is vulnerable: someone knows about a vulnerability there, and it hasn't been patched. Then we're assuming that version 2.3 of B actually mitigates the security vulnerability; they patch it and put out a new version that doesn't contain that problem. And C, because it depends on B, also inherits this vulnerability and has a certain vulnerability time. But if we think about things from the perspective of C's development team, really the core time period to focus on is this one here.

00:14:28

Because there's this time period where B has released a new version, that version patches a security vulnerability, and the clock is now ticking for C to incorporate it. That's the first point at which they can fix their upstream vulnerability. So we're going to call this the remediation time for C. We can compute this remediation time for individual updates, average it per component, and talk about this TTR metric: time to remediate. That's the security-oriented metric. We then have a general update-oriented metric, which is the time to update. When B puts out a release, it takes some time for C to adopt that release. That release happened to be security relevant, but there are other releases that aren't security relevant, like A here: A publishes version 2.4, and again the clock is ticking to see how long it takes C to adopt this new version of A.

00:15:20

When we consider that time period for all updates, whether security relevant or not, we call that time to update, TTU. And then we also look at what we call stale dependencies. The idea is that a component might release a new version but not update all of its dependencies, and those dependencies that are lagging behind we call stale, and we account for those as well. So here A released version 2.3; it was out, it was published, when C released version 2.2, but C didn't update to that newer version. All right, so those are the key metrics: time to remediate, time to update, stale dependencies. And then we're going to explore a number of questions about those metrics and how various open source projects behave with respect to update hygiene, security hygiene, et cetera.
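To make the definitions concrete, here is a small sketch of how TTR, TTU, and staleness could be computed from release dates. The dates and version numbers are invented to mirror the A/B/C example; this is not the report's actual tooling.

```python
from datetime import date

def days_to_adopt(upstream_released: date, downstream_adopted: date) -> int:
    """The clock starts when the upstream release appears and stops
    when the downstream ships a release that incorporates it."""
    return (downstream_adopted - upstream_released).days

# Time to remediate (TTR): B 2.3 patches a vulnerability; C picks it up later.
ttr_c = days_to_adopt(upstream_released=date(2019, 3, 1),
                      downstream_adopted=date(2019, 9, 1))

# Time to update (TTU): A 2.4 is a routine, non-security release; same clock.
ttu_c = days_to_adopt(upstream_released=date(2019, 4, 1),
                      downstream_adopted=date(2019, 5, 15))

# A dependency is stale if a newer version already existed when the
# downstream component cut its release.
def is_stale(latest_available: str, version_used: str) -> bool:
    return latest_available != version_used

stale = is_stale(latest_available="2.3", version_used="2.2")  # A 2.3 vs C using 2.2
```

Per-component TTR and TTU are then just averages of these per-update durations.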

00:16:08

I want to start with the security-relevant one, time to remediate, that security metric I was discussing before. This is a graph of the TTR behavior of the entire population. First, I want to focus on the median TTR. The median, down there at the far left, shows a 180-day median remediation time, which means that 50% of the population takes more than six months to remediate a security vulnerability in a dependency. That's already not great. And it gets even worse if you look at the far right: the top 5% of projects take over three and a half years to adopt security-relevant changes. And those aren't projects where a security patch came out and they just never adopted it.

00:16:58

It came out, and they did eventually adopt it; it just took them three and a half years. So clearly, if you're selecting dependencies to fold into your software supply chain, you want to be down here at the left. You want to be identifying these projects that are attending to this, that are keeping on top of updates, not just to their direct dependencies but to everything in their supply chain. One question we had: this is the section of the population that's good from a security perspective, but are they just good about updating in general? Are they paying attention specifically to security, or are they just keeping up to date? Which behavior dominates? What we find is that the two are actually closely connected. This is a graph of the time to adopt security-relevant updates versus the time to adopt non-security-relevant updates.

00:17:48

You can see there's a correlation here; the correlation coefficient is 0.6 in this case. So there is a sense in which the two track each other. That's one way to view it. We can also slice it differently and find that if we look at MTTRs and MTTUs that are within 20% of each other, trying to characterize similar behavior in terms of security- and non-security-relevant updates, 55% of the population falls within that. If you imagine a cone, a slice near that diagonal line, those are the projects we're talking about there. And in particular, think about the opposite of this: are there projects that don't update frequently but still manage to stay secure?
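As a rough illustration of the analysis described here, the following sketch computes a Pearson correlation between per-project MTTU and MTTR values, plus the share of projects falling within that 20% cone around the diagonal. The numbers are toy data, not the study's data set.

```python
import math

# Toy per-project (mttu_days, mttr_days) pairs; values are invented.
projects = [(30, 40), (60, 55), (90, 100), (200, 180), (400, 900), (120, 130)]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

r = pearson([p[0] for p in projects], [p[1] for p in projects])

# The "cone around the diagonal": projects whose MTTR is within 20% of their MTTU.
within = [p for p in projects if abs(p[1] - p[0]) <= 0.2 * p[0]]
share = len(within) / len(projects)
```

With real data, `r` would land near the 0.6 mentioned above and `share` near 55%; the toy values here are only meant to show the mechanics.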

00:18:36

We don't see a lot of that behavior. If you look at the projects that manage to maintain better-than-average security behavior while having worse-than-average update behavior, only 15% of the population falls into that category. So this is interesting. That's hypothesis two: projects that update dependencies more frequently are generally more secure, and we found a variety of evidence for that. It's an interesting finding because it gives you another way to evaluate the quality of projects. We're all interested in security; that's the top-line concern. We can evaluate that by looking at security update behavior, but a lot of projects never have a security vulnerability reported against them, so we have no data for those projects and no way to evaluate them. Whereas we have update behavior data across the board.

00:19:25

For every project, we know how frequently they're updating. So if we can say that updating frequently is a good proxy for quality and security and so forth, that gives us another thing to latch onto, to include in our set of criteria for evaluating these projects. All right. Hypothesis 3a was that projects with fewer dependencies would stay more up to date. This seems intuitive, right? You have less to keep on top of, so you'll be better at getting it done. We actually found the opposite: components with more dependencies are better at keeping them up to date. This was so not what we expected that we dug a little deeper and found what's going on: projects with more dependencies tend to have larger development teams, and projects with larger development teams tend to be better at keeping things up to date and attending to these project hygiene concerns. You can see this relationship here; I've graphed the number of dependencies versus the average size of the development team, and you can see team size increases as dependencies increase. So the more dependencies you bring on, the more pizza you have to buy.

00:20:42

All right, hypothesis four. This is, I think, the most interesting finding of the report; if you focus on one thing or take one thing away from this talk, think about this. We went into this thinking that more popular projects would probably be better about staying up to date, better about adopting security updates, et cetera. Again, we found no evidence for this. First of all, there are plenty of popular projects that have poor update hygiene, but that's not really surprising; there are always outliers. But if you dig a little deeper, you find that popularity doesn't correlate in any sense with update hygiene. In fact, even if you focus just on the most popular projects, say the top 10% of the population by popularity, there's no statistical difference between the update behavior of those projects and the update behavior of the rest.
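One way to check a "no statistical difference" claim like this is a permutation test on the difference of means between the two groups. This sketch uses invented numbers; it illustrates the kind of test involved, not the report's actual methodology.

```python
import random

# Toy data: average days between releases for a "popular" group and the rest.
# All values are invented for illustration.
popular = [45, 60, 120, 30, 400, 90, 75]
rest    = [50, 55, 110, 35, 380, 95, 70, 200, 20, 60]

def perm_test(a, b, trials=10_000, seed=0):
    """Two-sided permutation test on the difference of means: how often does
    a random relabeling of the pooled data produce a gap at least as large
    as the one observed?"""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(a) - sum(pb) / len(b)) >= observed:
            hits += 1
    return hits / trials

p = perm_test(popular, rest)
# A large p-value means no detectable difference in update behavior between
# the groups, which is the kind of result the study reports here.
```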

00:21:39

So popularity is not a good proxy for update hygiene, security, and so forth. Like I said, if there's one takeaway: don't base your decisions just on popularity. Look at other things; consider other factors. To see some of the behavior behind that, and to get more of a sense of how different groups of project maintainers behave, we broke the data down further and looked at various clusters to see what behaviors were common between different types of projects. We broke it down into five different categories: two that have exemplary update behavior, meaning they're in the top 20% of update hygiene, and three that are not. One interesting thing is that among the exemplars, there's representation from both small and large teams.

00:22:29

There's a sizable set that has small development teams, an average of 1.6 developers per team, and is still staying very much on top of updates. And then there are the large exemplars, which have on average almost nine developers on their team. They have exemplary update behavior, they're very likely to be foundation supported, and they're high on popularity. This is sort of the open source industrial complex, or open source foundation complex, I guess: the Apache Foundation, the Linux Foundation, those sorts of supported projects, the big projects that are really setting a high standard for quality. Then we have the laggards, which are behind on MTTU and high on stale dependency count, just not keeping up. And one of the most interesting clusters is this "features first" class.

00:23:18

I talked about these projects that are popular but are not keeping up from a security perspective. Some of those are in this category, where they're doing frequent releases, so they essentially have the release bandwidth to stay up to date from a security perspective, but they're not spending their effort there. They may be prioritizing features or something else; whatever they're doing, they're not attending to security. And then there's the cautious group, which has good update hygiene, so they generally keep up to date, but they're not at the latest version. You see them adopting updates basically a version or two behind. So they're not falling behind and going completely out of date.

00:24:07

They're generally staying secure, but they take a more cautious approach. All right, this is that data represented graphically; I've got the different groups tagged with different colors. You can see, first of all, that the exemplar category is all the way here at the left. As I said, these release quickly and tend to be more popular, so if you're sourcing projects for your open source supply chain, you should try to draw them from here. This is also a representation of what I was saying about popularity not being a good guide; you can see that in how far this box spreads to the right. On the x-axis is average days between releases: how quickly do you release updates, how up to date are you staying? And on top is popularity. You can see there are some popular projects that are not great from an update hygiene perspective. All right, now I'm going to turn it back over to Derek. I've been talking about the supply side of the open source equation; Derek's going to say more about the consumer side and what's happening in the enterprise when we look at how teams deal with open source dependencies.

00:25:15

Cool. Thanks, Stephen. So one of the things we found as we were going through the research: we saw all these open source projects updating frequently, and we saw this exemplary behavior, but we wondered, okay, what are developers doing in your enterprises? How are you behaving with dependencies? So in the middle of doing this research, we said, let's go out and survey a bunch of developers, which we did; we surveyed 658 of them, I believe that's the number. And we asked them: how are you managing dependencies? Do you have a process for managing your dependencies? Do you have any automation that you're using to manage dependencies? Do you have a process in place for removing troublesome dependencies when problems arise? Are you using the latest versions of dependencies in the projects in your environment?

00:26:14

The surprising thing about the survey was how many organizations and developers said they had these practices in place. I think we all expected about nine or 10% of these organizations to say yes, we're doing this. We all felt these answers might be more aspirational versus what they're actually doing, or that they represent having only part of a process in place: it might not be the most mature process, but some part of a process exists, or some piece of automation exists to support them, even if it's not comprehensive. But when we looked at these practices and began to identify the clusters of exemplary behavior versus the non-exemplars, what we found is that the exemplars were 10 times more likely to schedule updates of dependencies.

00:27:13

They were 11 times more likely to have a process in place to update dependencies. They were 12 times more likely to have automation in place to support these practices. But the cool thing, the finding at the top of the mountain here, is that when it came to updating dependencies, whether there was a security vulnerability involved or it was just the routine practice of updating dependencies, it was a lot easier for these organizations; they were less likely to consider that activity painful if they were doing it frequently. When you're climbing the mountain every day, when you're updating your dependencies every day and you have the practices in place, it's pretty easy, or at least it feels easier. If you climb the mountain once a year, if you're trying to update your dependencies once a year or once every other year, it's going to be a difficult trudge up the mountain.

00:28:06

And that's just like deployments: if you're doing multiple deploys a day, deployments get a lot easier, while if you're doing one deploy every six months or one deploy every year, it's a lot harder. We're seeing the same behavior within enterprises in how they're managing their open source components and dependencies. The other thing that we found through the research, and this was part of the survey: we went out and surveyed over 5,500 developers, and this time we asked the organizations to identify, are you using DevOps practices? Do you consider yourself to be mature in your DevOps practices, versus having no DevOps practice? And of those organizations, we asked: do you have an open source policy in place?

00:29:06

And if you do, do you follow it? We found, and this was also part of the survey, that where more automation exists in these environments, the policy is more difficult to ignore. Developers are aided by information about the components: what's a good component, what's a bad component. They were two and a half times more likely to apply those governance policies in organizations where automation was more present. The other thing that we wanted to understand is which open source components everyone is using out there, and what the quality of those components is. One quality that we looked at, across the 68,000 applications, was age. You can see within this chart that 51% of the components are three years old or younger; they've been developed or released in the last three years. But that means nearly half of the code within the applications you are all building is three years old or more.

00:30:18

For one, that means you're relying on parts from suppliers that have been out for a long time, and this makes a difference, because when we look at the vulnerability defect ratio within these components, you see that the components younger than three years old have a 9.3% defect density, and those older than three years have a 15% defect density. So if you just had a rule in place that says your developers can use any open source components they want, as long as they're three years old or younger, then you can reduce your security defect density by 65% on that practice alone. You don't have to say "use more secure components"; you just have to say "use the latest, newest versions of these components," and you will, by consequence, remain more secure. As part of this, we also saw in the survey that exemplar DevOps teams were relying more on tooling to tell them about security information and security issues within their applications, versus the laggards, or those without DevOps practices, who don't rely on tools as much.
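A rule like that is easy to automate. Here is a minimal sketch of such an age-policy check; the inventory entries, versions, and release dates are illustrative, not drawn from the study.

```python
from datetime import date

# Hypothetical inventory entries: (component, version, release date of that version).
inventory = [
    ("commons-collections", "3.2.1", date(2008, 4, 15)),
    ("jackson-databind",    "2.9.9", date(2019, 5, 16)),
    ("struts2-core",        "2.3.8", date(2012, 12, 5)),
]

MAX_AGE_YEARS = 3

def violations(components, today=date(2019, 11, 1)):
    """Flag components whose chosen version is older than the age policy allows."""
    cutoff = date(today.year - MAX_AGE_YEARS, today.month, today.day)
    return [(name, ver) for name, ver, released in components if released < cutoff]
```

Running `violations(inventory)` flags the two versions released before the three-year cutoff, exactly the kind of components the defect-density data argues against.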

00:31:34

So when tools are present, one, they're alerting developers to more information, and as we saw in one of the previous slides, developers are more likely to follow the information provided by their tools and by that automation, and therefore they stay more secure as a result. And at the end, the part of the research that we showed as well is that in managed software supply chains, where you're looking at the quality of the components and attributes being consumed across the enterprise, teams are staying 55% more secure, that is, they have a lower security defect density, than those in unmanaged supply chains, where 20% of the components in the applications had known vulnerabilities when the applications were built. So we went from about 8% of the downloads being known vulnerable to 20% of the components used in the applications being vulnerable.

00:32:35

So as we wrap up the, um, we wanted to offer, uh, some quick takeaways from like, okay, you got this data, what do you do? There's more data, uh, even in, in the report. Um, but the first thing, uh, first take away that all offer is you have to start with observability. Um, you have to know what you're using. Uh, if you don't know what open source you're using and what you're consuming, you can not do anything to change using, you know, what picking the right quality from the right suppliers. You have to have an active, uh, view on what you're consuming within the enterprise and where it is within your applications. So,

00:33:11

And then, pay attention to the criteria that you're using to select these components. Don't just use popularity; as we were saying, a better proxy is maybe release frequency, things like that. And then I'd ask everyone to be good open source stewards. If you're making contributions to open source, think about updating dependencies. I was surprised in the research by how many popular components have very out-of-date dependencies and are transitively importing security risk, so pay attention to fixing that up. And as you contribute to open source, if you put a project out there, at a minimum aim for four releases and, you know, 80% of your dependencies up to date. That will put you in that exemplar category and ensure that you're one of these better-performing open source projects, and not one introducing vulnerability into the supply chain.
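The 80% up-to-date target mentioned above is easy to measure once you have an inventory. Here is a minimal sketch, assuming you can obtain the latest released version for each package (the package names and versions below are hypothetical):

```python
# Hypothetical data: versions you are using vs. latest available releases.
current = {"requests": "2.22.0", "flask": "1.1.1", "urllib3": "1.24.0",
           "jinja2": "2.10.1", "click": "6.7"}
latest  = {"requests": "2.22.0", "flask": "1.1.1", "urllib3": "1.25.6",
           "jinja2": "2.10.3", "click": "7.0"}

def up_to_date_ratio(current, latest):
    """Fraction of dependencies pinned to their latest known release."""
    fresh = sum(1 for pkg, ver in current.items() if latest.get(pkg) == ver)
    return fresh / len(current)

ratio = up_to_date_ratio(current, latest)
print(f"{ratio:.0%} of dependencies are up to date")
if ratio < 0.8:
    print("Below the 80% exemplar target; schedule dependency updates")
```

In this sketch only 2 of 5 dependencies are current (40%), so the project would fall short of the exemplar threshold and the check would flag it.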

00:34:03

So, quick and easy: if you want a copy of the slides, or the State of the Software Supply Chain report, or the 2019 DevSecOps Community Survey, my out-of-office message is on. So if you email weeks@sonatype.com today, and it's only on today, it has links to the report, so you don't have to register to download it or anything like that. I tested it, so it does work, and you can find those, download those, and read the research yourself. We didn't cover the complete body of the research in this presentation today. But thank you very much for attending; we really appreciate it. I know the other sessions are coming in, but we'll be available for questions. The other thing I would say, just because this is being recorded: my out-of-office message is not on all the time. So if you're watching the video at some later date, please say, "Hey, I was watching your presentation at DOES in Las Vegas, could you send me those slides?" so I know what you're referring to. There's always someone who just sends me a blank email with no reference, and I'm like, what is this about? So, just a note for those watching the video. Thank you. Thank you.