VendorDome: What does it mean to be a developer in the era of cyber crime?

We’re at an inflection point. With cyber criminals increasingly attacking development ecosystems and environments, the role of the software engineer is changing. And, whether you're ready or not, developers now need to take responsibility for security, in some cases becoming the front line of this new battlefield. But what does that mean practically? How can you balance faster innovation with improved security? What resources are available to take charge of your increasingly complex role without burning out? How does this change your relationship with security teams? Join Sonatype and CloudBees for a discussion on what you as a developer can do to protect your innovation, protect your software, and protect your sanity. Moderated by Hope Lynch and Stephen Magill.


This session is presented by Sonatype and CloudBees.

Brian Fox

Co-Founder & CTO, Sonatype

Stephen Magill

Vice President, Product Innovation, Sonatype

Sacha Labourey

Chief Strategy Officer, CloudBees

Hope Lynch

Sr. Director, Platform, CloudBees

Transcript

00:00:13

Hi, and welcome to the Sonatype and CloudBees VendorDome. I'm Stephen Magill, VP of Product Innovation at Sonatype, and I'm here with Hope Lynch, Senior Director of Platform at CloudBees, who will be moderating the session, which also includes Sacha Labourey, Chief Strategy Officer at CloudBees, and Brian Fox, CTO at Sonatype. What we're going to be talking about today is being a developer in the age of cybercrime. The incidence of cybercrime has been increasing year over year. We've seen increases in all the attack types we're familiar with, things like ransomware and data breaches, and we've also seen new attack types emerge. Several of these new attack types target the software development practice itself, either the tooling that we use in development or the software supply chain, and this has really increased the set of risks that developers have to consider as they set up pipelines, write code, and deploy applications. What was already a pretty big security burden for developers has been growing and becoming more complex. That's the general topic we'll be discussing today. We'd like to keep this interactive, so if questions come to mind as we're discussing these topics, please post them in the track's Slack channel and we'll bring them up to the panelists. So with that, I'd like to start with a round of introductions. We'll go around and everyone can spend a couple of minutes introducing themselves. Why don't we start with you, Sacha?

00:01:42

Hello, everybody. My name is Sacha Labourey, and I'm joining you from Switzerland. I am the co-founder of CloudBees; I was the CEO for a decade, and I'm now Chief Strategy Officer. You might know CloudBees as the enterprise Jenkins company. We've done that, and we're still doing that, but we're doing a lot more as well: we're delivering an end-to-end solution for software delivery. We're focused on helping enterprises, which means we want to simplify the life of organizations while at the same time recognizing their specificities and the complexity they typically, quote-unquote, enjoy, with lots of different tools and processes. So I'm very happy to be here with everybody today.

00:02:28

Great. Brian?

00:02:30

Hi, I'm Brian Fox, co-founder and CTO here at Sonatype. My background is in software development; in the early days I was working on Apache Maven, and I'm still involved in Apache and those types of projects. At Sonatype we've been on a journey, starting from build tools around Maven to helping people modernize their supply chains and really manage the dependencies going into them and the code quality of the custom code going into them. And for the last four or five years I've been spending a lot of time trying to evangelize, focusing on the supply chain and some of the novel attacks that have been going on there. It's an interesting topic and one that not enough people are really paying attention to. So thanks, everybody. I think this will be a good conversation.

00:03:21

Great, thank you. And Hope, why don't you tell us what a Senior Director of Platform is all about?

00:03:27

Senior Director of Platform is about the story of the unified platform. What does that mean? How do we connect with our customers and make sure we're helping them get what they need so they can seize their opportunities? I've been in technology for many, many years; I've spent time in the server room, as a developer, in agile work transformation, you name it. So this topic is near and dear to my heart as well.

00:03:59

Great. And then, as I said, I'm VP of Product Innovation at Sonatype. I was CEO and co-founder of MuseDev, which was acquired by Sonatype in March, so I've been focused on products in the code quality space and on being able to deliver results about code quality to developers and bring more insight into the development process. I'm really excited to be talking about these topics with everyone here. I want to start with what we led with, which is the expansion of the developer domain and the fact that developer responsibilities have been growing. Developers have significant new responsibilities that started with ops, right, with the DevOps movement and developers taking ownership of a lot of those operational aspects. But then it advanced into security: secure development practices are now expected to be part of everyday work, something developers are aware of.

00:04:51

And then we talked at the beginning about the new types of attacks that developers are facing and have to be aware of. And, you know, humans only have so much attention and capacity for productive work, right? How do you split your day and split your attention among all of these concerns? I'll put it out to the audience: if you have any stories about the difficulty of balancing these things, we'd love to hear them, or questions that come to mind there. So I guess the first question on this topic is just: what practices and tools can developers adopt to help them better cope with this huge array of responsibilities?

00:05:30

So maybe I can start. Obviously you need tools, right? You're not going to guess what goes wrong without any tools, so tools are always great support for that. But what we're seeing is that there is no magical tool that's going to solve every problem. And especially in bigger organizations, you also end up with some history. So the problem a lot of organizations face is not "we don't have a tool"; they have a lot of tools—too many tools. And even so, you find differences between those tools, you find overlap among those tools, and so you're at risk of getting a plethora of information on top of the false positives that we always enjoy.

00:06:19

You also get the actual true positives, but multiplied by maybe five tools. So all of this is being shifted your way as a developer, and you get that quote-unquote tsunami of feedback. Sometimes we joke about it—I'm not sure if you know how you feed mushrooms, right? You throw, well, shit at them to feed them. So sometimes developers feel like they're the mushrooms: they're kept in the dark, we open the box, we throw in a list of false and non-false positives, we close it, and say go deal with it. So that's, I think, one thing we might see. The other topic I think we should cover is how you categorize that work, right? Because in some cases it's more important than features—it's urgent to fix. In other cases it's perceived as technical debt, so we'll get to it when we get to it. The classification of this is handled very differently across organizations. And obviously, if you got hacked in a big way, you're more likely to consider it a higher priority than if you never got hacked, or never knew you got hacked.

00:07:36

Yeah, I think that's a good point: sometimes tools just add more to your plate, and you have to think about the value you're getting and whether it's really worth it on balance. And there are a lot of approaches to trying to fix that balance. Brian, do you have thoughts on how this plays out, especially with respect to the supply chain?

00:07:59

Yeah, I think Joe in Slack just kind of summed it up. Part of the challenge is finding tools that work with the style of modern development and aren't just bolted on. What I mean by that is that the way we develop in a continuous-everything DevOps, DevSecOps world is different than it was not really that long ago, and tools that are designed just to produce findings and throw them at developers—or throw them into the mushroom box—don't really work. You can apply more pressure and try to force developers to do something about it, but it's not the right way to solve the problem. You need to think about finding tools that are built natively to work in that modern environment, that are intended to be just part of the flow that's going on.

00:08:50

That's the first part. The second part, sort of speaking to what Joe is saying here, is that you need to empower developers with the visibility to see these things, and organizationally you need to care about them. If the only thing you care about is shipping code on time, and you're not caring about security too—you need to say, well, we have to ship secure, good-quality code on time, and enable development to be able to do that, and measure it, because you get what you measure at the end of the day, right? So if you're just bolting on security at the end of the process, you shouldn't be surprised that you're not going to get very good results. I think that's what it comes down to in this modern world.

00:09:29

Yeah, I've heard some people talk about the mushrooms thing, but also that security is sometimes not treated as a first-class citizen of DevOps. You have a separate security team, you have the DevOps team, and sometimes they talk, sometimes they don't. But that brings its own challenges. So if there were an ideal relationship between security organizations and the DevOps team, do you have thoughts on how they can collaborate? Because we still need separate security teams for some things.

00:10:12

Yeah, and I agree. I'm glad, too, that quality was mentioned in there. It strikes me that not having time, like Joe mentioned in Slack—not having time to attend to security alongside the future development being asked of dev teams—is not so different from not having time to go address tech debt or handle architecture mindfully. So when you look at the sorts of practices and tools that can be helpful here, it's good to think beyond security, because the practices that promote low tech debt and promote code quality are the same ones that will help you have a process for attending to security as well. It's really about all of those non-functional requirements, right? The software features have to get out there, but there are a bunch of other things that are important besides just that functionality.

00:11:12

Yeah. And I think empathy comes in—there's a notion of empathy for customers, and maybe more than empathy, emotion is important here, maybe more than in other areas. You know, I was saying before: if you have never been hacked, maybe you behave differently than after you got hacked; there is an emotional learning process that burns it into your brain in some way. I'm not suggesting you all try to get hacked once, but I do think that building this perception of what it means and what type of impact it has is pretty important. It doesn't mean you need to get hacked; it means that part of learning what it means is to hear from companies, to hear from engineers who had to go through that and share their experience and say, well, we thought we were maybe invincible, or maybe we didn't care, because—yeah,

00:12:12

why us, right? And then we realized we got caught by surprise, and here is what it meant afterward. There is kind of a trauma that goes with it, and a good way to create that empathy and those stories is also to talk with people about that trauma and understand that you don't want to get anywhere close to that place. And so, you know, Joe was talking about it before on Slack, right: we never have the time, or the incentive, or not enough of it, to write secure software. It's essentially like being a fireman or a firewoman: you're interesting only when something bad happens, but the rest of the time it's "why are we paying those guys again? Do we really need those people sitting around here? Not really." So to change that, I think a lot of it has to do with emotion, with understanding what it means to be on the wrong side.

00:13:17

Yeah, nothing helps focus like a failure, right? It can be very motivating. I wanted to return to what Hope was saying about dedicated security teams. I think you're right, Hope, when you said there will always be a role for dedicated security: whatever tools you pull in, they don't replace the security team. But they maybe change the nature of what that team can focus on. I was curious if the other panelists had thoughts on how the role of security teams might shift with modern tooling.

00:13:53

I think I can answer that and also answer Joe's question here on Slack. Joe's saying that if we integrate tools into development that put the brakes on when there are problems, we get pushback about why we didn't ship on time. That's kind of the same thing I was saying before: if the organization doesn't value that, then you've got a misalignment. Someone followed up and said they did that and false positives bogged them down. I've spent a lot of time thinking about how we can approach that solution for users, and being able to have precise data is key. False positives are going to kill everybody—it becomes a broken smoke alarm. But it's not only that simple. You need to have everybody in agreement—and security comes into this as well—about what's important and how important it is.

00:14:45

So you can't block builds on everything—not unless you have no problems, and no application we've ever analyzed started out on day one with no problems. It's just not a realistic starting point. So you need to think about the solution in ways that include security risk, quality risk, and legal risk, because at the end of the day developers have to deal with all of these things. You can't satisfy just one side of the equation; you need to be able to prioritize and set the different types of violations at different levels, so that the level tens—you know, AGPL and security level 10—are showstoppers, and other things are warnings, right? That is the approach that leads to success. And as you start to eliminate those, you can bring the bar down a little bit.

00:15:33

So now you're dealing with fives and above, and things like that, but it requires precise data. It requires the ability to have policies that are very contextual, because the policy for one application may not apply to all the others. You might have code that runs as a service versus something that's shipped, and something that's literally buried in a bunker is different from a web-facing application. So you need to be able to nuance all of those things, so that at the end of the day, when you present findings to developers, they're accurate and relevant for their context; otherwise it's easy to rationalize them away. And if you're blocking on everything, that's not going to work—the tool is just going to get unhooked and then ignored, right? So that's how I view how you approach the kinds of problems they're pointing out in Slack: helping to define the policy, not necessarily being the cop. If you have the right tools and everybody agrees on the policy, it can become more blameless.
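To make that tiering concrete, here is a minimal sketch of severity-tiered, context-aware policy evaluation. It is illustrative only: the thresholds, application profiles, and the AGPL showstopper rule are assumptions drawn from the discussion, not any vendor's actual policy model.

```python
# Hypothetical policy model: severity-tiered, context-aware gating.
# Thresholds and profiles are illustrative, not a real product's schema.
from dataclasses import dataclass

@dataclass
class Violation:
    kind: str        # e.g. "security", "license", "quality"
    severity: int    # 0 (info) .. 10 (critical)
    detail: str

# Different applications get different policies: an internet-facing service
# is held to a stricter bar than something buried in a bunker.
POLICIES = {
    "web-facing": {"fail_at": 8, "warn_at": 5},
    "internal":   {"fail_at": 10, "warn_at": 7},
}

def evaluate(app_profile: str, violations: list[Violation]) -> tuple[bool, list[str]]:
    """Return (release_blocked, messages) for a given application context."""
    policy = POLICIES[app_profile]
    blocked, messages = False, []
    for v in violations:
        if v.kind == "license" and "AGPL" in v.detail:
            blocked = True  # showstopper regardless of numeric severity
            messages.append(f"BLOCK: {v.detail}")
        elif v.severity >= policy["fail_at"]:
            blocked = True
            messages.append(f"BLOCK: {v.kind} severity {v.severity}: {v.detail}")
        elif v.severity >= policy["warn_at"]:
            messages.append(f"WARN: {v.kind} severity {v.severity}: {v.detail}")
    return blocked, messages

if __name__ == "__main__":
    findings = [Violation("security", 9, "CVE in transitive dependency"),
                Violation("license", 3, "AGPL-3.0 component")]
    print(evaluate("web-facing", findings))  # both findings block
    print(evaluate("internal", findings))    # only the AGPL finding blocks
```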

00:16:38

Yeah, I a hundred percent agree with you, Brian. I think we're also, in a good way, in a culture where the pipeline drives a lot of behavior, right? We see everything as this movement, and so we think about gates. But I agree with you: it's about the whole risk assessment. Some of it needs to be a go/no-go—you talked about AGPL, or, you know, you have an S3 bucket that's open to everybody—you might need those big trigger points. But the rest of the time it's more important, I think, to have that kind of brain that's able to look at everything you're doing at all times and give you an assessment: this is kind of good, this is not good, this is actually fine—and prioritize your work. Because not everything is an emergency; not everything has to block your flow. So a hundred percent agree with you, Brian.

00:17:35

Yeah. I think there's also flexibility and leverage to be gained from targeting different points in the development process for different sorts of warnings, right? You have the ability to report certain things in the IDE, have pre-commit hooks for other sorts of things, have CI fail on certain types of errors, and other things you might just monitor for in production. So taking advantage of all of those integration points and surfacing the right sorts of things at the right points can really help to broaden coverage.
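As a rough sketch of that idea, the snippet below routes classes of findings to the earliest point in the workflow where they are cheap to act on. The stage names, finding types, and mapping are assumptions for illustration, not a prescription from any particular tool.

```python
# Illustrative sketch only: each class of finding is routed to the earliest
# point in the workflow where it is cheap to act on. Stage names, finding
# types, and the mapping itself are assumptions, not a specific tool's rules.

ROUTING = {
    "style-issue":            "ide",           # instant feedback while typing
    "secret-in-diff":         "pre-commit",    # cheap, high-signal, block early
    "vulnerable-dependency":  "ci",            # surface during the build
    "critical-vulnerability": "release-gate",  # block the release, not every build
    "anomalous-behavior":     "production",    # monitor and alert, never a build gate
}

def earliest_stage(finding_type: str) -> str:
    """Return the earliest stage where this class of finding should surface."""
    return ROUTING.get(finding_type, "ci")

def surfaced_at(stage: str) -> list[str]:
    """List the finding types a given stage is responsible for surfacing."""
    return [kind for kind, s in ROUTING.items() if s == stage]

if __name__ == "__main__":
    print(earliest_stage("secret-in-diff"))   # -> pre-commit
    print(surfaced_at("ci"))                  # -> ['vulnerable-dependency']
```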

00:18:07

Yeah. So many people I hear, especially coming from security, are like, "I just want to break the build," and it's like, hmm, is that really what you mean? Because if we're talking about a vulnerable dependency, you've got probably dozens or hundreds of developers who need that dependency to continue working on their branch and run the tests, and you block that for all of them while one or two people are actually making the update. Is that what you meant? You want 90% of your team twiddling their thumbs while one person fixes it? I don't think that's what you meant, but that's what you ask for when you say "I just want to block the build." What you really mean is: I want them to know about it during development so they don't ignore it, and maybe for some of those cases you want to block the release because they ignored it, right? So it's the carrot and the stick. And that's back to my first point about tools that are native in this environment—they consider these types of things. Tools that look at it in a very stovepiped way don't consider what actually happens if you're failing all the things all the time. It's just not going to work.

00:19:13

And as a result of that—you talked at the start, Stephen, about how developers first got exposed to ops more, right, so you have this increased scope they need to adapt to. But if you think about security, security is not just about code security. We're covering code here—we're talking, as Brian just said, about libraries and dependencies being a problem—but you have data, you have runtime environments, you have identity, you have a bunch of things that might come from source code, but might not. We talked about infrastructure as code: you codify the environment in your code, so you also need to validate that it's okay. If you operate a SaaS, maybe you want to make sure that data from specific customers—or, if the company is a European company, maybe data can't leave the EU data center, and so on. All of these things participate in this risk assessment, and they all come back to the developers in some way. So it's getting wider. It's not just risk in the code, I think.

00:20:33

I wanted to move on now—we might come back to this topic in a bit—but another thing we wanted to discuss was the government regulations coming out in this space. There was the cybersecurity executive order that came out a few months ago that was, in effect, asking agencies to develop minimal standards for things like software bills of materials and software testing. So I wondered if anyone had thoughts about the role those might play, how that guidance fits into what you see as current industry practice, and what your recommended minimal standards would be. Maybe we can spend a few minutes on that topic.

00:21:22

I think it's a necessary shove in the right direction. I think there's going to be a lot more, but certainly the federal government buys a lot of software, and if you include the turtles all the way down—the people selling to the people who sell the software—they're ultimately going to be asked for a bill of materials. So I think it has a nice amplifying effect at some level, and it's causing parts of the industry to care about bills of materials where they didn't previously. Ecosystems that were component-centric, like Maven, Java, JavaScript, NuGet, these kinds of things, were naturally thinking about that, because how they assembled software was front and center. In embedded systems and C and C++, while there was reuse of code, I think the mentality was often more "we're using source code" than thinking about modules,

00:22:15

and therefore thinking about it as a bill of materials was less front and center. So I'm seeing some of those ecosystems starting to ask these types of questions where previously it was not top of mind—they had other concerns. So I think it will help move the needle. There's certainly going to be more, and the standards wars are already on, with several different types of standards and no real consensus. I think that's always going to be the case, so it's going to be challenging, but at least I think it's fired the starting gun.
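For readers who haven't seen one, here is a hand-written sketch of roughly what a software bill of materials entry carries, loosely modeled on the CycloneDX JSON shape. It is illustrative only, not a spec-complete or validated document, and the component shown is just an example.

```python
import json

# Loosely modeled on the CycloneDX JSON shape; illustrative, not spec-complete.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "components": [
        {
            "type": "library",
            "name": "jackson-databind",
            "version": "2.13.4.2",
            # A package URL (purl) identifies the exact component and ecosystem.
            "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.13.4.2",
        }
    ],
}

print(json.dumps(sbom, indent=2))
```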

00:22:53

Yeah. We're facing, I think, a radical shift in how we perceive software, much like any physical asset. The army defends countries, right? You have military personnel who defend a country. But if you look at a lot of countries around the globe, there is nobody to defend organizations—or, at the country level, companies—against this type of attack; the same thing just isn't true for cyber. If I look at Switzerland alone, for example: we just acquired a bunch of trucks in case of, I guess, an enemy attack by road or something like that, but we have absolutely zero cybersecurity units in the country—zero. So it's going to take a while, but the realization is that what used to be a physical threat, handled as such at the country level through the police or the army, now needs to be expanded to also include cybercrime. And in lots of countries,

00:24:08

we're very, very far from even thinking about those problems.

00:24:17

Okay. Do you think we can backtrack for one moment? Joe has a good question, and it was something I thought of as well. He asked: how do you balance the speed you get from the open source ecosystem—Node, npm, Yarn, et cetera—with the new requirements to understand your dependencies and the risk that comes from them? You have all of these wonderful tools you bring in; how are you monitoring all of it? How are you making sure you're staying secure against risks in dependencies? It's a challenge.

00:24:52

I feel like that's what we've been doing at Sonatype for more than 10 years, trying to solve that problem. It's all the things I talked about before: making sure you have the precise data and the contextual types of policies. You need tools that can give you visibility all the way down that dependency stack—even, by the way, if the manifest is incorrect. We see a lot of cases where things get into applications that weren't listed in the POM or in the package.json because they were included by copying and pasting. That's important also if somebody were to insert malware into your supply chain: your manifest doesn't say "give me the hacked version of this component," and yet that's what you've got.

00:25:39

Right. And so the analogy I always use is that it's like judging a recipe for a cake as safe to eat because it doesn't say there's poison in it—but the cake might still have poison in it. How do you know? How do you test it? You need tools that can actually look at that, in addition to what the manifest says is there. That's a part a lot of people miss. But once you have that accurate bill of materials, then you can start doing the things I was talking about before with the pipeline: managing it, managing violations against whatever your policy is, saying what things are good and what things are bad, and putting those gates in place.
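A minimal sketch of that "don't trust the recipe" check, assuming a hypothetical build step that can list what actually got packaged: the function names and the name parsing are simplifications (a real tool would fingerprint binaries rather than trust file names).

```python
# Hypothetical check comparing what the manifest declares with what actually
# ships in the build output. Function names and the name parsing are
# simplifications; a real tool would fingerprint binaries, not trust names.

def declared_dependencies(manifest: dict) -> set[str]:
    """Dependencies listed in a package.json-style manifest."""
    return set(manifest.get("dependencies", {}))

def observed_components(artifact_listing: list[str]) -> set[str]:
    """Component names actually observed inside the packaged artifact."""
    return {name.rsplit("-", 1)[0] for name in artifact_listing}

def undeclared(manifest: dict, artifact_listing: list[str]) -> set[str]:
    """Components present in the build but never declared, e.g. copy-pasted code."""
    return observed_components(artifact_listing) - declared_dependencies(manifest)

if __name__ == "__main__":
    manifest = {"dependencies": {"left-pad": "^1.3.0"}}
    shipped = ["left-pad-1.3.0", "event-stream-3.3.6"]  # second one was never declared
    print(undeclared(manifest, shipped))  # -> {'event-stream'}
```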

00:26:16

Yeah, we're obviously working with companies like Sonatype, which is clearly leading the pace on that front when it comes to managing dependencies and making sure you get the best out of the open source ecosystem, not the worst. And we also launched at CloudBees, not long ago, a compliance tool, and the idea is really to look at a number of aspects. That includes taking the input of tools like Sonatype's, but also looking at things that happen—and Craig was talking about that on Slack, right—post-production, once you deliver. How do you check that the right keys have been used, that you don't have containers that are suddenly two months old still operating in production, and so on?

00:27:06

So again, moving beyond the pure pipeline view of the world—that's a great, an amazing step—to also having that kind of analysis running at all times. You're seeing a bunch of solutions based, for example, on OPA or policy agents, and that makes it possible to scan a very wide array of constraints and feed all of that information back to the teams. And obviously that's part of the constraints. I think Joe was talking about balancing, right? How do you balance that so you don't just add five times more stuff for developers, but instead filter things out, categorize things, prioritize things, so you get a sense of what's urgent versus less so?
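To make the post-production checks Sacha describes concrete, here is a minimal sketch that flags running containers built from images older than a policy window. The inventory format, field names, and 30-day threshold are assumptions for illustration.

```python
# Hypothetical post-deployment check: flag containers whose image is older than
# a policy window. The inventory format, field names, and 30-day threshold are
# assumptions for illustration.
from datetime import datetime, timedelta, timezone

MAX_IMAGE_AGE = timedelta(days=30)

def stale_containers(inventory: list[dict], now=None) -> list[str]:
    """Return names of running containers built from images older than allowed."""
    now = now or datetime.now(timezone.utc)
    return [c["name"] for c in inventory if now - c["image_built_at"] > MAX_IMAGE_AGE]

if __name__ == "__main__":
    inventory = [
        {"name": "checkout-api", "image_built_at": datetime(2021, 5, 1, tzinfo=timezone.utc)},
        {"name": "payments",     "image_built_at": datetime.now(timezone.utc)},
    ]
    print(stale_containers(inventory))  # -> ['checkout-api']
```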

00:28:06

Yeah, I think that's right. There was another question in Slack I wanted to bring up; it's a little higher up, on education and the fact that there's a lot expected of developers in the security space but not so much training on what it means to have secure development practices. Thoughts on what could be effective there?

00:28:28

Well, I don't think we need less training in anything, right? But again, it's all about balancing things. We talk about security—that's cool—but think about compliance in a wider sense: what's okay or not? If you're a bank, for example, some things might not be allowed even though they're not really security issues from a coding standpoint. You might be writing perfectly okay code; the bank just doesn't want those kinds of things being done. So you have all of this compliance that's constantly evolving, and what we see right now is a lot of teams being burned out, because you have compliance teams constantly trying to retrain teams and say, oh, by the way, you can't do that, that's new, or you can do this now because we fixed some other problem. So you have this constant stream of noise. And I think it's also part of the tooling to make it possible not just to say "you did wrong," but "here is something we need to fix, and here is why," and to learn along the way. The tools can act as a medium to learn as you go, rather than getting a PhD in secure programming.

00:29:44

Yeah, I agree that there's a sense in which the right tool is the one that lets you do the right thing without all that training. In the security space you have the security team—they're the experts, they can be a guiding voice and look over your shoulder and offer advice on various things. But there are a lot of things that are just formulaic, right? And you want automation around those, so that you don't have to spend manual effort first learning to recognize them and then recognizing things that could have been automated.

00:30:22

So another one came in from Gene—hi, Gene, long time no see: what is the most optimistic thing you've seen to help developers rise to the level that the outside threat demands? He said he's in awe of the carnage from the SolarWinds and Codecov attacks, and how proper supply chains and CI/CD have been put in the spotlight.

00:30:46

It's hard to find optimism in this topic, but I would say the fact that developers generally seem to be aware of and care about this problem, as evidenced by the usage of at least free and open source tools to help solve it. For me, that's in stark contrast to around 2011, when we tried creating some Eclipse plugins to surface this information and nobody was really interested. I'd talk to people, and well-informed developers would say things like, "Well, I just have to worry about the AGPL; we have a security team and a firewall for everything else." Right? So that's where we came from. That was 2011; now, in 2021, we at least have developers scrambling to try to find tools to solve the problem. We still have a long way to go, for sure, but we're getting there. The awareness is there, at least.

00:31:46

Yeah. I think it's one of those things where, once you know it, it seems very obvious, but before that you had to spend a lot of cycles trying to even make sense of it. When you were telling people that you need to embed quality, security, and so on into the process—so you inspect the process rather than inspect the output—it seems pretty obvious, right? Gene has been writing about this for more than a decade now, so it makes total sense. However, if you look at how things work in the field, you still see a lot of behavior where people compile source code and then upload the bits from their computer to the artifact repository, which completely bypasses any embedded security you might have. So yeah, as was said, it puts a spotlight on it. Maybe that was the cost to pay as an industry to get a change. We were talking about empathy and emotion and learning through emotion—maybe that's what it took.

00:33:03

Do you think the risk of the development process itself—so what you just described, Sacha, about bits that can be uploaded in non-trackable ways, the dev environment or release-artifact issues like what happened with Codecov, other supply chain attacks that manifest as corrupting something about the development process itself—do you think the security team ends up owning those? Or do you think there's going to be a separate team, or maybe a sub-team within security, that's focused on the development process itself and securing that?

00:33:44

Well, I think, to me, it's really part of the DNA of an organization. It's not that every single team needs to suddenly get smart and learn how to do things; there needs to be a shared DNA, things that get built as an organization, that we all agree are a baseline we're going to follow. That's how we do things here, right? That's part of your values, part of everything you do. And any team that comes on board needs to be onboarded this way. It actually needs to be hard not to do it right, and you need to make it easy, make it desirable to go this way versus the other way. Because, quite frankly, if you're in an environment where everything has been set up so that it just goes with the flow, it's fantastic, right? Why would you want to do something else? So yeah, I think it needs to be a key tenet of the DNA of an organization.

00:34:47

I think you basically echoed a question from Christopher, but maybe you answered it at the same time. He asked: how can we make this area simpler for developers? Simpler is more secure, right? So, yeah.

00:35:03

Yeah, and some great examples of that come to mind. We spend a lot of time—or I spent a lot of time—doing research for the Software Supply Chain Report, which is something we publish every year at Sonatype, looking at dependency management practices and so forth. One of the things that comes up when we do surveys is that it's much faster and easier to do the right thing in organizations where there is some sort of official process for dependencies. If you have a workflow, if you have tooling around evaluating a dependency to decide whether it's okay to pull in or not, and picking something that's approved by that process—pre-approved, or automatically approved by whatever automation is in place—and if that gets you unstuck so you can keep developing within, say, an hour instead of waiting a month for approval, then you've just made the easy path the more secure path. It's also, I think, the case that the easiest dependency to manage is the one you don't have in your code base. So think carefully about everything you pull in. And I think that goes for code as well, right? The only bug-free code is the code you don't write. So keep things simple, keep things small.
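As a sketch of what "pre-approved or automatically approved" might look like, assuming a hypothetical allow-list kept by the organization: the license rules and decision values are illustrative, not a specific product's workflow.

```python
# Minimal sketch of a dependency pre-approval workflow, assuming a hypothetical
# organization-wide allow-list. Names, licenses, and rules are illustrative.

APPROVED = {"requests", "jackson-databind"}      # components already vetted
AUTO_APPROVE_LICENSES = {"Apache-2.0", "MIT"}    # licenses accepted by default

def request_dependency(name: str, license_id: str, known_vulns: int) -> str:
    """Return 'approved', 'auto-approved', or 'needs-review' for a new dependency."""
    if name in APPROVED:
        return "approved"
    if known_vulns == 0 and license_id in AUTO_APPROVE_LICENSES:
        APPROVED.add(name)        # fast path keeps developers unblocked
        return "auto-approved"
    return "needs-review"         # routed to the security team for manual review

if __name__ == "__main__":
    print(request_dependency("lodash", "MIT", known_vulns=0))        # -> auto-approved
    print(request_dependency("some-copyleft-lib", "AGPL-3.0", 0))    # -> needs-review
```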

00:36:27

You know, Gene's poking at an interesting thing here. Since around 2017 I've been trying to raise awareness of the supply chain stuff, so he's asking about my reaction when I first heard about the scale of SolarWinds and some of this stuff. I have a lot of flip answers, but I feel like, yeah, it was inevitable. One of the things I've been trying to drive awareness of lately is that so many of these modern attacks are actually focused on the supply chain and your developers. Around 2017 we started to see some trends where some of the attacks were focused on stealing the keys of open source publishers, which was new and novel at that time. It continued to happen, and it has evolved now into these supply chain attacks trying to get into your developers—not so much trying to get stuff into your code to ship to your end users, although that happens; certainly that was kind of what happened with SolarWinds.

00:37:24

But some of those attacks have used those entry points into development to launch attacks on the rest of the company itself, right? The development infrastructure often has the keys to the kingdom, especially in a cloud native world, and production credentials that can be used to transition into other parts of the org. So I try to remind people that you need to think holistically about your developers, your development infrastructure, all of that, and not just focus on trying to secure the product that's coming out of it. If you take the factory analogy: all of Deming's principles are really focused on making better cars, and you should do that, but just doing those things does not make the factory safe from somebody who's trying to blow it up. And that's kind of what we're really dealing with right now. Many of these supply chain attacks don't care if they get into the end-user product; they don't care if they pass the unit tests and actually get shipped, because all they're trying to do is run on your computer to plant a back door. And so many traditional security practices are not designed to deal with that—they're trying to inspect quality into the car.

00:38:37

Yeah, I think that's super important—the idea that staging is an important environment too, right? It can't be all about production. All of these are entry points into the process, and with Codecov and some of these other attacks we've seen the vulnerability of the development process itself. And I think it's interesting, too—dependency confusion attacks are in this space as well—that the automation that enables us to be more efficient and more productive from a development standpoint is being exploited to surreptitiously bring things in. With automation there's less awareness: if there's automation without proper monitoring, there's no awareness of what's happening, and you need those checks.

00:39:22

So we have a question from Laura with American Airlines: how do you get developers to care about security vulnerabilities even if their application isn't critical to the operation, but it's still exposed to the internet?

00:39:37

Well, I used to remind people that in a world with crypto and those kinds of things, any of that stuff is directly monetizable, right? If you can get a CI server to just run mining for you, that produces value. So the old adage of "my application has nothing of value" is no longer true. Your visitors have CPU cycles if you can stick a JavaScript crypto miner in there; your build servers have CPU cycles. All of these things can be directly monetized in a crypto world, even if you have nothing to steal. And in some cases it's easier, because when thieves get caught, it's often when they're trying to sell the thing they stole—well, if you steal cash directly, you don't have to sell it, right? That's the analogy for what's happening in crypto.

00:40:29

Yeah, I agree. If you want to carry out an attack, you need a set of proxies—at least one, often multiple—along the way; proxy not in the technical sense, but in the sense of getting closer to your target. And in some ways those are the easiest way to achieve things, because people don't feel they're a target. If you think about traditional hacking, most of the time it's social hacking—it's about making a phone call; it's extremely low-tech: "give me your password, I need to reset something." So I understand we're having a discussion here around the factory and how we can build the process and so on, but let's not forget it starts with something extremely low-tech, and everybody needs to be made aware of that. Actually, you find some interesting companies building training around this. The other day I was looking at a solution that's almost like a 360-degree VR environment: you go through a scenario and things happen, a bit like a pilot being given an alarm in a simulator, and then they see how you react—because you need to be trained to react when the moment comes, since those attacks can happen anytime.

00:42:06

I'm curious whether you think these sorts of threats—this sort of risk—have changed in the current work-from-home era, and how that complicates the situation.

00:42:25

Certainly for the stuff I was talking about: if developers are under attack and you were in a building with traditional perimeter defenses, those back doors might have been blocked by something else; if one got installed, it might have been limited in its scope. How many people are working at home who no longer have those basic perimeter-defense kinds of things in play? And how would a company find out that one of its developers' machines was compromised? I think it becomes much harder than it was before. We tend to take those perimeter defenses for granted because they've been around for so long. They're not perfect for everything, but suddenly, when you take them away, a lot of these attacks become a lot more likely to land successfully.

00:43:14

Yeah. I think there was a lot of implicit embedding of security based on the fact that the data center was king, right? Everything was funneled, and that's essentially what Brian said: it relied on perimeter security. People were in the office, traffic went through the office network and the VPN and so on, cloud or not cloud—that's how you were shielded. And what COVID and remote work showed us is that the cloud is the data center, because it's as close, or as remote, for everybody—it is the best data center. And it has forced organizations to rethink what the right perimeter is. Hope is talking about zero trust on Slack, and yes, that's right: you have to redefine those boundaries, because we had a lot of implicit assumptions in how things were secured.

00:44:14

So do you think just adopting cloud is the way forward for mitigating a lot of these endpoint threats?

00:44:23

Oh, no, I don't think so—in some ways it doesn't change it. If anything, it offers so much power that it offers ways to blow up your world in any number of ways, right? We see those environments created for developers—preview environments, test environments—that get created and shut down and so on. It's fantastic. But at the same time, if you have a threat in any of those, there are that many more entry points into your world.

00:44:58

That's interesting—that's not what I expected. Because on the one hand, yeah, there's more going on behind the scenes; on the other hand, there's maybe less living locally on your laptop, right? There are cloud development environments available now—you can have your IDE entirely cloud-hosted, and it's like Google Docs: nothing's living on your laptop. I'm curious: does that help shift the perimeter back to something more defensible, or is it sort of a wash?

00:45:31

I think one of the things—and it's been a thread throughout the conversation when you talk about training and giving developers what they need—is that you can have the best tools and everything, but if someone just configures something the wrong way, then suddenly you have a problem. I think that just doubles down on: buy all the best tools in the world, but take care of your developers and integrate it into the process, the way a lot of our commenters have been saying here in Slack while we've been talking.

00:46:09

Yeah, it's tough, because there's no single solution, right? Everyone's environment is completely different. We were just talking about whether you shift everything to the cloud, but nobody actually does that—everyone's in some sort of hybrid environment. And so you need customized solutions as well, which is why it's great to have these conversations and to get to take questions from the audience about the particular things they face in their own unique circumstances.

00:46:38

Yeah. And by the way, Stephen, you talked about openly discussing these things. I think it's a great point, because it feels a bit like when a topic becomes a topic, like, you know, we—

00:46:54

We lost you for a few seconds, Sacha—your audio just cut out.

00:47:00

Yeah, I'm back, sorry. Yeah, I think it's good to openly talk about those things, as you were saying, because in some way, as soon as we talk about a topic—and we talked about supply chain attacks, for example—it feels like it should now be solved everywhere and shouldn't happen anymore, right? But we know the velocity at which things happen, and we know the reasons: the backlogs, the business constraints, and so on. So realistically we're going to keep seeing issues like that for the next decade. Obviously we don't want to see that—we want to see fewer of them—but I just think we need empathy and a safe place where we get to speak and exchange about those things and say—maybe anonymously in some cases, because it could expose your company—"you know, we're struggling with that," or "we haven't solved those issues." Having those discussions, I think, is very important. You don't just snap your fingers and you're done with it.

00:47:54

Yeah. Speaking of that—of how you go deeper on these topics and learn and discuss—does anyone have favorite places they go to look for guidance or have conversations in this space? I mean, these sorts of conferences, I guess, are one answer; that's where I've learned most of the insightful things I've come across over the last few years.

00:48:28

Okay. So, oh, go ahead.

00:48:31

Go ahead. No, yeah.

00:48:33

Oh, I was just going to say that threat modeling has come up a couple of times—supply chain and pipeline threat modeling. I think there's some pent-up interest around that. So, Sacha or Brian, do you have any comments on that?

00:48:54

It feels like a whole webinar. Based on the conversations I've had with people over many years, I would say no, most people are not modeling that threat. Some of the stuff I've brought up here is probably new to a bunch of people who hadn't really thought about that door in the back corner of the factory. In some ways, the number of possible ways the supply chain can get totally hosed is incalculable, especially when you start thinking about transitive dependencies and upstream repositories—how do you know what's on the repository? How do you know who put the stuff there? And so an element of this is making sure you can detect and respond as quickly as possible to the inevitable problem that's going to happen.

00:49:49

That is kind of key. But then really focus on some of these things and lock them down, like we've talked about: having the policies to control them, having ways to detect in near real time when malware is published, or when one of your developers has maybe pulled one of these things down—what's the blast impact? I think you've got to model those out and start working through them, but assume that one of these holes you didn't think was going to happen will happen, and ask yourself how quickly you could turn it around even if you knew. So many companies still don't know what's in their software today, and even if they did, they don't have mechanisms in place to turn it around quickly enough to respond. So in some ways it's really scary to talk about all these other things, and many companies need to just focus on the basics—but then don't stop there; keep moving forward, keep moving down the stack, if you will, to understand the rest of the implications.

00:50:51

Yeah, I'd like to share some numbers, because we recently did a global survey on this topic with senior executives, and it's pretty interesting what comes out of it, so I'll give three or four numbers. Ninety-five percent of the executives claim that their software supply chains are secure. Okay, so that's amazing—I would not expect it to be that high, right? But then when you double-click and ask other, more pointed questions, a significant share say it would take them more than four days to fix a problem, because they'd actually had an experience of that nature. Four days—that's huge, right? A lot can happen in four days. And about 58% say that if they had that experience, they would have no idea what their company would do. So it just shows that at a high level there is this perception that you're handling the problem, that you're tackling the problem, but when you ask very specific questions—okay, what does it mean if this happens, how do you measure things—it ends up being a much, much less mature situation.

00:52:14

Yeah, I think the first step is just awareness. And then, like Brian was saying, even once you become aware of some things and address them, there's more to discover, more to explore about the organization. So I guess it's an ad hoc threat modeling process, but it's something to do.

00:52:33

Yeah. So Christopher has another good question: what if magic happens and everyone knows how to secure the software supply chain—what's coming up next? What are the big security threats that you forecast?

00:52:58

I reject the assumption—we're not going to fix it this year. It's going to get deeper; I think that's the trend we've seen over the past, what, four years now, at least since this started in 2017. I think this whole thing is just a giant cat-and-mouse game. Every time we get good at closing one hole—whether it's network ports, or application coding errors, or the dependency problems—the attackers look at the next soft target. So I would be looking for the places where we're seeing weird one-off things, kind of like when we saw the start of the supply chain stuff in 2017. I don't know what they are, and if I did, I'm not sure I would tell everybody, because it seems like as soon as these things are announced, the bad guys—and certainly the less sophisticated copycats—all pile in right away. We saw that with dependency confusion. So I certainly wouldn't want to be on the record telling them where to go look.

00:53:55

Yeah, it is incredible how the threats continue to evolve. I mean, I remember when we just had to fix memory errors—everything was due to C and C++ memory overflows. Well, JavaScript doesn't have buffer overflows, but it's still got plenty of problems.

00:54:24

Cool. Well, Brian, you're constantly being accused of not being able to be optimistic. So I think you need to say something optimistic, even if it doesn't relate to security, just for Gene.

00:54:40

Just for Gene: he didn't find my last piece of optimism—"it's not as bad as it used to be"—to be actual optimism, so I'm going to struggle with this. I think the fact that everybody's talking about it, the fact that we're starting to see standards evolve around it, that we're seeing lots of open source projects solving it, tells me that it's a problem everybody recognizes, and that's the first step to fixing it. I feel like for so long there were so few of us actually pointing it out, with people dismissing it—that's where we came from, again. So it's not as bad as it was before—but that's optimism. At least we're talking about it, and we're moving in the right direction. Is that good enough?

00:55:19

Yeah—"things are less shit." That's a...

00:55:27

Yeah. I mean, right—I think the practices that have taken off, things like CI/CD, tooling integrated into pipelines, and just multiple points of monitoring, really do make a difference. Attacks have had to evolve, which means that we're doing something right. Another thing we saw in this year's Software Supply Chain Report: we did an analysis of how projects have evolved with respect to a particular metric, mean time to update—basically, how quickly open source projects update their dependencies when a vulnerability is released against them. That metric has been improving steadily over the past 10 years, at least in the Maven ecosystem that we looked at. So I think the community as a whole is getting better: the modern tooling and practices do help, they're being adopted, and there's focus here. So yeah, I think that's a...

00:56:31

Yeah, I guess I'll add to that and try to double down on the optimism. I've seen companies that have really started to focus on what I would call the leading edge of this whole problem, starting to look at it as a developer experience problem, as opposed to just throwing more tools at it, and thinking holistically about it. That's kind of how I've been trying to think about it for a long time, but now I've seen actual companies doing that. There aren't enough of them, but the fact that it's happening now dovetails with the open source part of it. So there's a double one for you, Gene—some more good news. All of this leads us toward finally starting to solve this problem.

00:57:19

I think we should all end on an optimistic note. Is there anything that inspires your confidence, Sacha?

00:57:27

Actually, I'm a born optimist, and I think it's great that we're talking about it, that we're seeing innovation—crazy innovation—taking place in security: lots of solutions geared toward developers, geared toward ops, runtime analysis, and so on. So it's there, right? It's a complicated topic; it takes time, it takes energy, it takes money, so it's not going to happen with a snap of the fingers. But it's becoming a real topic, so we're on the cusp of a change, really. So yeah, it's amazing to see.

00:58:05

And we're close to time. Hope, do you want to give a final comment and close us out?

00:58:10

Yes. Thanks, everybody, for participating so much in the Slack channel—it made it a lot of fun.

00:58:16

Yeah. So great discussion.