The Application Security Podcast

Kyle Kelly -- The Dumpster Fire of Software Supply Chain Security

Chris Romeo Season 11 Episode 1

Kyle Kelly joins Chris to explore the wild west of software supply chain security. Kyle, author of the CramHacks newsletter, sheds light on the complicated and often misunderstood world of software supply chain security. He brings unique insights into the challenges, issues, and potential solutions in this constantly growing field. Drawing on his experience in cybersecurity and security research, he adopts a critical perspective on the state of the software supply chain, suggesting it is in a 'dumpster fire' state. We'll dissect that incendiary claim and discuss the influence of open-source policies, the role of GRC, and the importance of build reproducibility. From beginners to experts, anyone with even a mild interest in software security and its future will find this conversation enlightening.

Links:
CramHacks - https://www.cramhacks.com/

Solve for Happy by Mo Gawdat - https://www.panmacmillan.com/authors/mo-gawdat/solve-for-happy/9781509809950

FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @AppSecPodcast
➜LinkedIn: The Application Security Podcast
➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast

Thanks for Listening!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Chris Romeo:

Kyle Kelly is the founder of CramHacks, a supply chain security newsletter. Kyle advocates for awareness of unique risks like coffee in supply chains. He'll explain more. As an executive consultant at BankSec, he directs penetration testing and incident response for financial institutions. Kyle also applies his expertise at Semgrep as a security researcher, focusing on static analysis to pinpoint bugs and vulns in third-party dependencies. Kyle joins us to explain why he considers the current state of software supply chain security a dumpster fire. We hope you enjoy this conversation with Kyle Kelly. Hey folks, welcome to another episode of the Application Security Podcast. This is Chris Romeo. I am the CEO of Devici, a threat modeling company, and also co-host of the Application Security Podcast. But Robert is traveling somewhere around these great United States of America in a car somewhere, so if you see him, say hi. But obviously he's not here today. I am excited to be joined by Kyle, and Kyle is going to talk to us a lot about things in the software supply chain. But Kyle, before we start talking about serious topics such as software supply chain, our listeners always like to hear where people are coming from. So, if you could share with us your security origin story, or how you got started in security, we'd love to understand where you're coming from.

Kyle Kelly:

Yeah, sure thing. It's not all that different from most people, at least from what I seem to hear: video games, obviously, a big one. I played video games my whole life, but I guess where it really started, you know, I got my first PC when I was four years old. Windows XP. I'm only 26 years old, so I had that pleasure. My mom was a teenager, and at that point in time, teenagers loved to have a dedicated dial-up line. So I was fortunate enough to have a mom who was a teenager with a dedicated dial-up line, who now had a newborn child to take care of, which meant she had to work a lot, which meant that little Kyle had a dedicated dial-up line and a computer. This sounds pretty dark, but I was probably one of those little kids getting preyed upon in the AOL chat rooms at a very young age. At least my mom had the awareness to educate me not to share things like personal information, where I live, things of that nature. But yeah, I started playing RuneScape when I was maybe six years old, I think, when I had my first account. MMORPGs, for anybody who's not familiar, will suck up as much time as you will give them, which meant I got interested in bot development and just, how can I play RuneScape, World of Warcraft, and all these other multiplayer games at a top level without actually spending 24 hours a day playing each and every one of them? And eventually that just led to: well, I'm growing up, I need money, how can I make money? At the time, cybersecurity didn't actually pay all that well compared to computer science or computer engineering, so I went down that route and then wound up working in cybersecurity as a penetration tester, and I've just been going down that path ever since. Now I have a role at BankSec where I do penetration testing and incident response type of stuff for financial institutions, and I'm also a security researcher at Semgrep, where I get to do my AppSec. I call it my fun job because it keeps me engaged on the technical side.

Chris Romeo:

Very cool. Very cool. So let's engage on this topic of the current state of the software supply chain and software supply chain security. I'm not going to put words in your mouth; I'm going to use the direct quote, by the way, that you sent to me. You referred to it as a dumpster fire. So I'm curious if you can unpack for us: why do you think the current state of software supply chain security is a dumpster fire? Let's start there.

Kyle Kelly:

I really don't think it's that bold of a claim. So, what is software supply chain security? If you ask ten people that question, you'll get ten different responses. I had a rant on LinkedIn recently. I believe it was Cisco who had a top ten software supply chain security incidents list. And reviewing them, I was just thinking in my head, I would probably classify three, maybe, of the ten to actually be software supply chain incidents. Like, is SolarWinds a software supply chain security incident? What is the software supply chain? We still haven't even really defined that. If you Google the definition, they more or less say it's anything that is involved in the development or deployment of an application or software. But that's where my joke usually comes in: is coffee maybe the most serious software supply chain security concern? If a developer does not have his coffee, is he more likely to deploy vulnerabilities into his code? Is that an actual supply chain risk? I would kind of argue it is. Sleep, too. One of my first employers actually did a study for the US government where they determined how many more vulnerabilities are introduced based on lack of sleep. Surprise, surprise: the less sleep you have, the more likely you are to make mistakes. It cost the government a couple million dollars to figure that out. But, yeah. So...

Chris Romeo:

Seems like something we could have concluded without a few million dollars of study money going in. But that's okay. That's the way these things work. So let's take a step backward then and say, how do you define software supply chain security? Because I get it, it's one of these things where there are potentially different definitions depending on who you talk to. If you talk to somebody in the U.S. government, they're going to give you a different answer than somebody who works for a startup. But you and I are sitting here chatting, so let's get your definition on the books: what do you consider to be included within software supply chain security?

Kyle Kelly:

Honestly, I just think it's too broad of a term to really have a definition for it that I would feel is adequate. I think container supply chain security makes sense. Does the container that you are using have known vulnerabilities? Okay, that makes sense. Is there any assurance for that container? Do we know where it came from? Or maybe build-time supply chain security, or component supply chain security. There need to be defined subsets for this so that we can really dig into what the problem is. I bring this up in OpenSSF meetings on almost a weekly basis, because they're talking about supply chain security issues, and I'm just like, well, we still don't have a definition of what we should specifically be targeting. Because if you read about software supply chain security, most commonly you're going to be looking at vulnerabilities in open source software packages. That's a good subcategory: just open source software package vulnerabilities. Why do we have to generalize it into software supply chain security and conjoin it with topics like SolarWinds when they're totally irrelevant to one another?

Chris Romeo:

Yeah, I mean, when you think about SolarWinds then, and I feel like we talk about SolarWinds too much in our industry, it's the Target and Home Depot of the 2010s and beyond, right? It's the one that we keep going back to. But SolarWinds then, what would you classify that as, categorically, as an issue? Is it a build pipeline vulnerability that ended up resulting in a piece of software, which had been compromised, being deployed through the proper channels? What category would you use for SolarWinds then?

Kyle Kelly:

Well, I guess I just don't understand why it's not just an application security vulnerability. The most commonly cited supply chain security attacks are vulnerabilities in software that consumers use. Well, that's like the whole point of software. So does that mean every incident is a software supply chain security incident? Or are they just application security vulnerabilities that are exploited?

Chris Romeo:

Yeah, I think at the end of the day, many of them are application security vulnerabilities, but that's an even broader category than software supply chain, right? So if there's a SQL injection in a library, then yes, technically that is an application security vulnerability. It also would be called a software supply chain issue or vulnerability as a result of the library. I would say that I tend to differentiate between the two, just because, to your earlier point, we use too much of an umbrella term when describing these things. Words do mean things, and it is important to qualify exactly what it is that we're talking about. And so, for me, application security vulnerability becomes an even more generic term.

Kyle Kelly:

Yeah, I would agree. But so, I guess a poster child for software supply chain security, or an open source package vulnerability, would be Log4j. It's not something you independently use. It is a package created for the purpose of being a supply chain component, you know, something that's ingested for use in your software, not an independent application. So if it were an independent application, I would no longer consider it to be a supply chain issue.

Chris Romeo:

Yeah. Yeah. So there's a differentiator then, when we're trying to hone in on what software supply chain is. If it doesn't stand alone, then it is a... do you use the word component? Can we use the word component in the conversation about the supply chain?

Kyle Kelly:

Yeah, so I mean, I've used a dozen different terms. Package, dependency, component, module. Pretty much anything that falls into that realm.

Chris Romeo:

Okay. So let's come back around, and let me read back to you what I think I heard. I want to understand the dumpster fire and maybe dive deeper into the dumpster fire as well, because that's a pretty big accusation, right? To say that something is a dumpster fire. I don't have any problem with big accusations. I'm actually quite a fan of pushing the boundaries in our industry to cause people to have to think more than just parrot back what they heard at a conference or in an interview or something like that. But basically, if I were to summarize what I think your conclusion or your argument is: software supply chain security is a dumpster fire because we don't really even have a good definition of what it is. Is that a fair assessment?

Kyle Kelly:

Yeah, there's no reasonable definition that makes it actionable to have a market around it. I mean, at Black Hat, for example, there were maybe a dozen or more booths that say software supply chain security. Well, you go up to one and they say, we identify firmware vulnerabilities. We identify open source package vulnerabilities. We do ASPM, you know, like if your application is internet-facing and consumers are using it and there are potential vulnerabilities there. There are so many different avenues, but everybody's using the same term, just software supply chain. And it's driving me crazy.

Chris Romeo:

And I mean, it's not like this isn't a common occurrence in our industry, right, as we've gone through... I mean, blockchain. Right? Three or four years ago, you'd go to a security conference and you'd walk the vendor area and everything was blockchain. We're gonna do blockchain this and blockchain that. Now almost nobody's talking about blockchain. It's not even a thing anymore. AI is the thing right now. AI is everything. And that's a bit of a misnomer: people in the industry, and people that are building technology products, will tell you that AI is the answer and the solution, and you don't even know what the problem is, but it's the solution for it. But I think this is just a pattern that has happened in our industry. We go through these kinds of hype cycles, and it feels like software supply chain security may be at the end of that hype cycle. I don't know, what's your thought on that? Given that it feels like this has been around for a number of years, are we in the burning-out phase, where the hype is going to stop and this is going to just become part of how we build software? Or what are your thoughts?

Kyle Kelly:

I think we're just getting started, honestly. It's largely due to regulation. So I guess my other hot take, although I'm not a huge fan of this, is that supply chain security is inevitably going to be a GRC problem. I think a great startup idea for any listeners: you take any vendor risk platform, RiskRecon, et cetera, all those third-, fourth-, fifth-party risk solutions, and you just clone it, and you have it ingest SBOMs, and you have a million-dollar product. So the real rush seems to obviously be due to the executive orders and the regulatory guidance that's mostly in draft form. And so you see things like Dependency-Track, which, I forget exactly, I think has been around for over 10 years now. But if you look at 2018, 2019, they had fewer than 10 orgs using it. And now they're saying that there are over 10,000 orgs using Dependency-Track. So clearly some exponential growth in recent years.
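
For readers who want to see what "ingest SBOMs" means in practice, here is a minimal sketch (not any vendor's actual product) that reads a CycloneDX JSON SBOM and produces a simple component inventory that a GRC-style review could work from. The file name and fields shown are illustrative.

import json

def load_cyclonedx_components(path):
    """Yield (name, version, purl) for each component in a CycloneDX JSON SBOM."""
    with open(path) as f:
        bom = json.load(f)
    for component in bom.get("components", []):
        yield (
            component.get("name", "unknown"),
            component.get("version", "unknown"),
            component.get("purl", ""),  # package URL, e.g. pkg:npm/lodash@4.17.21
        )

if __name__ == "__main__":
    # "vendor-sbom.cdx.json" is a placeholder for whatever SBOM a vendor hands you.
    for name, version, purl in load_cyclonedx_components("vendor-sbom.cdx.json"):
        print(f"{name}=={version}  {purl}")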

Chris Romeo:

Let's dive deeper, or as other people might say, let's double-click on this idea of supply chain being a GRC problem, because I'd love to get some more context on that. I'm sitting here kind of going, hmm, kind of shaking my head maybe a little bit on supply chain as a GRC challenge, and GRC being the right place to solve this. But I'd love to understand more about why you've concluded that.

Kyle Kelly:

I wish it was just a straight AppSec issue. The reason I don't categorize it into that space is because AppSec or security engineers are expensive and time-constrained, and really, for the best bang for your buck, it's likely to be more of a policy-as-code solution. You take your open source policy or your SDLC policy, you put it into an SCA tool or an ASPM, and you say our developers can only use open source dependencies that have no known critical vulnerabilities, or that have been maintained in the last six months, or that have a Scorecard rating of seven or better, whatever it might be. Those things can be built into your policy, and through a GRC platform you can govern a lot.
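
As a concrete illustration of the policy-as-code idea Kyle describes, here is a minimal sketch that checks dependencies against the example rules he lists (no known critical vulnerabilities, maintained in the last six months, Scorecard of seven or better). The Dependency record and the thresholds are illustrative; in practice an SCA tool or ASPM would supply this data.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Dependency:
    name: str
    critical_vulns: int    # known critical vulnerabilities (e.g. from an SCA scan)
    last_commit: datetime  # most recent upstream activity
    scorecard: float       # OpenSSF Scorecard score, 0-10

def violations(dep: Dependency) -> list[str]:
    """Return human-readable policy violations for one dependency."""
    problems = []
    if dep.critical_vulns > 0:
        problems.append(f"{dep.name}: {dep.critical_vulns} known critical vulnerability(ies)")
    if dep.last_commit < datetime.now() - timedelta(days=180):
        problems.append(f"{dep.name}: no upstream activity in the last six months")
    if dep.scorecard < 7.0:
        problems.append(f"{dep.name}: Scorecard {dep.scorecard} is below 7.0")
    return problems

# Illustrative data; a real pipeline would pull these facts from its SCA/ASPM tooling.
for dep in [Dependency("left-pad", 0, datetime(2018, 1, 1), 3.2)]:
    for problem in violations(dep):
        print("POLICY VIOLATION:", problem)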

Chris Romeo:

Whenever I think about giving more things to the GRC team, more policy-related things, my innovation index for cool new things that we're going to build as part of our company just tends to go in the wrong direction. I think innovation tends to go down. The more policy, the more governance, the more compliance-type activities and things that we do, the less innovation we have. And maybe that works for some companies that have reached a certain maturity level, but I couldn't imagine a hundred-person startup, which is probably barely touching GRC. Let's say a 200-person startup; they probably have a GRC person at that point, or someone who's carrying that responsibility. But the thought of them being involved in the innovation engine and able to determine what we use in building our software just seems like it's really going to slow us down and take away our ability to innovate.

Kyle Kelly:

Yeah, so you bring up a good point there, because there are really two perspectives on supply chain security. There's the consumer and then there's the developer. I think more so on the consumer side, it's bound to be a GRC problem. Maybe for larger orgs, like, I mean, there are orgs that, even if you use an SCA tool with reachability and all these other prioritization capabilities, might still have hundreds and hundreds of thousands of vulnerabilities. If you're a government contractor working with the DoD, whatever it might be, and they come to tell you that you can't have any critical vulnerabilities in any of your projects, how do you combat that?

Chris Romeo:

I think the reality is nobody actually does, because it's not possible. Well, it used to be possible, right? I grew up in the world of government security certification, going back to the Orange Book, and then ultimately the Common Criteria. And part of the Orange Book, and this is a history lesson for people that aren't as old as me, the Orange Book was a set of standards that the government put forth, and there were levels that had formal models created of how a system would function. Now, those systems were a whole lot simpler than what we're dealing with today, but there were ways you could apply formal modeling to demonstrate that a system was secure. The challenge is that it's just not practical with anything bigger than a thousand lines of code, and, you know, you have more than a thousand lines of comments in one code file in modern systems today. And so this idea that you can't have any critical vulnerabilities, well, that's great, but it's not reasonable. That's something else I spent a lot of time talking about last year: reasonable AppSec, reasonable security in general. What's reasonable? Is it reasonable to tell someone they can't have critical vulnerabilities in their project or their product? It's just not, because back on the software supply chain, they use a component that's critical to the functionality of whatever it is they're selling, whatever their product is doing, and it has a critical vulnerability in it. What do you do? Just shut down? Declare bankruptcy and shut down the company? Like, well, we had a critical vulnerability, we're just going to go declare bankruptcy and shut this thing down and try again. Right? That's that type of policy, and that's my fear with GRC: we start having policies that really don't work in the real world. And I'll give you an example of this; it's kind of a funny example. At the first company that I started, we were building a platform, and as a security professional, my advice to people had always been: if you have a critical vulnerability in a component, you should just break the build, right? Break the build and have the developers figure out how to patch it, how to upgrade it, how to bring it forward and make it secure, so that you can continue building and deploying that production application. But then one day we had a critical vulnerability occur in a component that was crucial to the functioning of the platform, and guess what? They didn't have a patch for it, and there was no way to patch it. And so I had to eat my words, but it also really changed my perspective when I was sitting in the developer's seat, working with the developers and saying, there is no way to patch this. If I had continued forward with my staunch security policy of, well, if you have a critical vulnerability, you just need to break the build, and you can't push any software, any features, or any bug fixes until you get that thing resolved, well, once I was sitting in the developer chair and it wasn't possible, guess what? My approach and my policy changed, because I knew it wasn't realistic. It wasn't even possible.
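
Here is a minimal sketch of the build gate Chris describes, with the escape hatch his story argues for: fail the build on critical findings unless there is an explicitly approved, documented exception (for example, when no patch exists yet). The finding fields and file names are hypothetical, not any particular scanner's output format.

import json
import sys

def gate(findings, exceptions):
    """Fail (return 1) if any critical finding lacks an approved exception.

    findings:   [{"id": "CVE-...", "package": "...", "severity": "critical"}, ...]
    exceptions: {"CVE-...": "reason / approver / expiry"}
    """
    blocking = [
        f for f in findings
        if f["severity"] == "critical" and f["id"] not in exceptions
    ]
    for f in blocking:
        print(f"BLOCKING: {f['id']} in {f['package']} (no approved exception)")
    return 1 if blocking else 0

if __name__ == "__main__":
    # Placeholder file names; your scanner output and exception register will differ.
    findings = json.load(open("sca-findings.json"))
    exceptions = json.load(open("approved-exceptions.json"))
    sys.exit(gate(findings, exceptions))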

Kyle Kelly:

Yeah, so for listeners who might have a manager, supervisor, or whoever it might be that has Chris's old philosophy, my advice to you is to take that component, fork it, and use the forked version, and you no longer have a vulnerability, at least according to SCA tools. It's that simple.

Chris Romeo:

I think that's a good way to get fired as well, over a period of time. That's a good way to be terminated from employment, though, right? Because you're masking some amount of risk from a particular component at that...

Kyle Kelly:

Yeah, no, I definitely don't advise that if you care about your job. But it is a real problem, because people do fork their dependencies or components and unknowingly devalue any SCA tooling that may be incorporated into the environment. So...

Chris Romeo:

I never even thought of that before. That is not something I would recommend as a best practice or a design pattern for how you build the most secure platform on earth: well, you fork a whole bunch of your third-party components and have your own little orphan versions of them running, and you lose all that traceability.

Kyle Kelly:

Yeah, not to speak for Google, but my understanding of their practice is that they actually do this extremely well. They have a thorough review process for open source packages that they'll be using. They fork them, they maintain them themselves internally, at least when necessary, maybe when there's a minor security patch or something that needs to be changed, and they build in assurance throughout that whole process.

Chris Romeo:

I mean, that's a great practice if you have the resources and capability to do it. The challenge is how many people have those resources and capabilities to do what you just described. A very tiny sliver of our industry. And I'm not just talking about technology; I'm talking about, you know, all companies on earth. What percentage of companies could actually do what you just described? One percent?

Kyle Kelly:

I think JFrog Artifactory is in a great position to really build on what they already have, where people are using Artifactory to host their known dependencies, where they can say, hey, you know, your organization has never used this dependency before. Are you sure? Have you checked it out? Does everything look good? And now they have Xray and these SCA components that can identify vulnerabilities org-wide, as opposed to just per project, and things like that.

Chris Romeo:

Yeah, I think of that as a best practice as well, and I'll even genericize it a little bit more and say proxy: you know, a third-party dependency proxy is really the capability. JFrog is one product that could do it, but that doesn't mean there aren't other ways you can do it. It's really having that proxy so that developers aren't just going and grabbing software from wherever they want and including it in the application. You have some amount of control over what things are vetted and what things require a security review. So I've described that in the past, and I still believe that is a best practice, to be able to have that type of visibility into what components you're using, so it's not just a Wild West scenario where developers grab whatever component they want. And I'll give you an example of how dangerous the component lifecycle and component repositories are. This was an experiment we did at a previous company, that one of the people who worked with me did. We were looking at packages in the Ruby ecosystem, and there was a package, I think the package was called iseven. And it did something as simple as give you a function that you could call to determine if an integer was even or not. Seems like something you could have coded yourself in five milliseconds without needing a package to do it for you. And so this guy that was working with me said, I'm going to create one called isodd, and I'm going to see how many times people download it. And so he created this package, and within the first day it had been downloaded hundreds of times. And all he did was mimic the original, and nobody knew who we were. He could have put anything into it that he wanted. He could have put some type of malicious agent or whatever. And then over time, I think there were thousands of downloads over the first couple of months of this thing. Now, he had no reputational score. He had nothing in the repository of libraries that said he should be somebody that should be trusted, that you should just grab this package and use it because it comes from a trusted entity. It wasn't signed; there wasn't anything being done to it. And it just goes to show, that's another example of the brittleness of the software supply chain: it could fall apart that easily.
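
To make the vetting idea concrete, here is a minimal sketch that pulls a few basic reputation signals for an npm package from the public registry endpoints (creation date, maintainers, weekly downloads) before anyone decides to trust it. The package name is only an example, and any thresholds you apply on top of this are up to you.

import json
from urllib.request import urlopen

def npm_reputation(pkg: str) -> dict:
    """Collect a few basic reputation signals from the public npm endpoints."""
    meta = json.load(urlopen(f"https://registry.npmjs.org/{pkg}"))
    downloads = json.load(urlopen(f"https://api.npmjs.org/downloads/point/last-week/{pkg}"))
    return {
        "created": meta.get("time", {}).get("created"),
        "maintainers": [m.get("name") for m in meta.get("maintainers", [])],
        "weekly_downloads": downloads.get("downloads", 0),
    }

if __name__ == "__main__":
    # A brand-new package with one unknown maintainer and few downloads is a reason to pause.
    print(npm_reputation("is-odd"))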

Kyle Kelly:

I think you brought up the package manager ecosystem: npm, Maven, RubyGems, all those guys. I get that they don't have tons and tons of funding; a lot of them are community-based or community-operated. But really, I think there needs to be a new standard for responsibility. I think it was PyPI that just recently started to require MFA, starting in the new year. So that's great to see. npm, I would say, right after this, go to npm and just search for Roblox, and I bet you the first thousand hits will be malware. There's no evaluation process. There are things like StarJacking, where you can misrepresent how many stars the package has when you're querying it on npm, so you can look like a very recognized and reputable package even though you're not. They don't do any validation of the source, like a manifest comparison. So I would never trust a package manager with anything, which is sad to say, because you're installing software from them and you can't trust them.

Chris Romeo:

Tell me more about this StarJacking. I've never heard of StarJacking; this is the first time I've heard that term. Now I'm curious: how does an attacker pull off StarJacking to make it appear that the package I've put into npm is five stars when it actually has no stars?

Kyle Kelly:

Yeah, so my very basic understanding of it, and I might have this off, is that when you're publishing to npm, you can pretty much just point it at another project that might have 10,000 stars. And when your package gets published, it'll show that it has 10,000 stars, because it's not validating the source of the information...

Chris Romeo:

Hmm.

Kyle Kelly:

...or cross-referencing it with what you're actually publishing.
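
Here is a rough sketch of the cross-check Kyle says registries don't perform: does the GitHub repository a package claims as its source actually contain a package.json with the same name? A mismatch hints that the repository link, and the stars displayed for it, may not belong to the package. This assumes the public npm registry and raw.githubusercontent.com layouts, uses an illustrative package name, and omits error handling.

import json
import re
from urllib.request import urlopen

def declared_repo(pkg: str):
    """Return 'owner/repo' for the GitHub repository the npm metadata points at, if any."""
    meta = json.load(urlopen(f"https://registry.npmjs.org/{pkg}"))
    repo_field = meta.get("repository")
    url = repo_field.get("url", "") if isinstance(repo_field, dict) else (repo_field or "")
    match = re.search(r"github\.com[/:]([^/]+)/([^/.]+)", url)
    return f"{match.group(1)}/{match.group(2)}" if match else None

def repo_package_name(owner_repo: str):
    """Name declared in package.json on the repository's default branch."""
    raw = f"https://raw.githubusercontent.com/{owner_repo}/HEAD/package.json"
    return json.load(urlopen(raw)).get("name")

if __name__ == "__main__":
    pkg = "lodash"  # illustrative
    repo = declared_repo(pkg)
    print(pkg, "declares repository", repo)
    if repo and repo_package_name(repo) != pkg:
        print("WARNING: the declared repository does not appear to publish this package")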

Chris Romeo:

Yeah. And that just gets to the issue of trust amongst those that are building software. That's half the battle, half of the challenge that I see existing in the software supply chain: trusting the people that are building this. And then, for me, you've got everything else, the inherent vulnerabilities in libraries, things like that. I bucketize those all off to the side in a separate bucket. But think about the trust of even something that we're downloading, and that's not even talking about further back in the process, where the package gets injected into the repository and then somebody's messing with the stars. For me, it's really: how do we even trust the people that are creating open source today? And I haven't found anybody that has a really great answer to how we could actually do that.

Kyle Kelly:

There was recently a specification around a scoring system specifically for open source reputation, which I thought was an interesting concept. I think it was just based on things like the number of maintainers, number of commits, basic heuristic data, but it was better than nothing. I can't remember what it was called off the top of my head, but it's out there.

Chris Romeo:

Yeah. And then you've got the Scorecard, right? Is it OpenSSF that does the Scorecard? But that's just a collection of some of the top open source projects. It's not like the Scorecard covers everything in npm; it's, you know, the most important things.

Kyle Kelly:

Yeah. And similarly, Google with osv.dev. For certain projects, they'll use their fuzzing solution and include that in the OpenSSF Scorecard. So there are companies nowadays that are starting to be more transparent about what open source projects they're using, which gives those projects quite a bit of credibility. You know, if Google's using it, it must be good enough for me, that type of deal. So that's nice to see. But it's still really just mind-blowing to me, coming from more of an information security background. Application security feels like the Wild West, feels like 10 years backwards. Just imagine going to any organization and seeing that users can install whatever they want on their devices, which actually, apparently, is really common at startups, which I didn't know until I started working at a startup. And so it's crazy to me. I have friends at all types of major organizations who are like, no, I don't have to run my code through a SAST solution. I'm like, you guys have, what, a hundred-billion or trillion-dollar market cap, and you don't use SAST? It's not a requirement? And they're just like, no, why would we waste our time with that?
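
For anyone who wants to poke at the data sources mentioned here, this is a minimal sketch that queries osv.dev for known vulnerabilities in a specific package version and fetches a repository's OpenSSF Scorecard score. Both endpoints are public APIs as I understand them; the package and repository names are only examples.

import json
from urllib.request import Request, urlopen

def osv_vulns(name: str, ecosystem: str, version: str) -> list[str]:
    """Return OSV vulnerability IDs known for this package version."""
    body = json.dumps({"package": {"name": name, "ecosystem": ecosystem}, "version": version})
    req = Request("https://api.osv.dev/v1/query", data=body.encode(),
                  headers={"Content-Type": "application/json"})
    return [v["id"] for v in json.load(urlopen(req)).get("vulns", [])]

def scorecard(owner_repo: str):
    """Fetch the aggregate OpenSSF Scorecard score for a GitHub repository."""
    url = f"https://api.securityscorecards.dev/projects/github.com/{owner_repo}"
    return json.load(urlopen(url)).get("score")

if __name__ == "__main__":
    print("OSV IDs:", osv_vulns("lodash", "npm", "4.17.20"))
    print("Scorecard score:", scorecard("lodash/lodash"))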

Chris Romeo:

Yeah. It's definitely a challenge to think that there are organizations that are still so immature from an AppSec practice perspective. But I mean, there's a lot more immaturity than I think we realize across our industry, because very seldom does someone submit a conference talk on their immature AppSec program and get up and talk about it, so that we can listen to them describe how badly they're actually doing. SAST? We don't do SAST. Next slide. Nobody's going to stand in front of a group of people and make that statement.

Kyle Kelly:

I still have that as a talk idea. So if any CISOs out there want to contribute and talk about how immature your org is, I'm happy to document it and talk about it.

Chris Romeo:

Good luck with that. I think you will get zero total responses to that. Okay, so just to wrap and bring this back around now: from your perspective, Kyle, what do you think are some of the best practices that companies can adopt to help get away from this dumpster fire status that you described for supply chain security?

Kyle Kelly:

It kind of starts off with an open source policy, or just an SDLC policy, whatever you want to call it, if you have one with that incorporated in there. And some very basic SCA tool. I mean, Dependabot doesn't have much for prioritization, but if you're a smaller org, it's fairly effective, and it's not too hard to deploy and actually keep up with if it's built into your CI/CD. Where things get messy is trying to deploy it after the fact. And so build reproducibility is like the first thing. When somebody says, Kyle, how can I start doing or caring about software supply chain security, I say, you need build reproducibility. If you are not using lock files and it's not glaringly obvious what it takes to build your code in the first place, you're going to have a really hard time integrating software supply chain security, because you don't even know what components make up your supply chain. So I think that's why I do appreciate SBOMs, as much as I don't see them as a critical security solution today. I think SBOMs, and just the idea of knowing what the heck makes up your software project, are really important.
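
As a small illustration of "know what makes up your software," here is a sketch that reads an npm package-lock.json (lockfile v2/v3 layout) and lists every pinned component and version, which is essentially the raw material for an SBOM. The path and output format are illustrative.

import json

def lockfile_components(path="package-lock.json"):
    """Yield (name, version) for every package pinned by an npm v2/v3 lock file."""
    with open(path) as f:
        lock = json.load(f)
    for pkg_path, info in lock.get("packages", {}).items():
        if not pkg_path:  # the empty key is the root project itself
            continue
        name = pkg_path.split("node_modules/")[-1]  # handles nested node_modules paths
        yield name, info.get("version", "unknown")

if __name__ == "__main__":
    components = sorted(set(lockfile_components()))
    print(f"{len(components)} unique components pinned by the lock file")
    for name, version in components:
        print(f"  {name}@{version}")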

Chris Romeo:

Let's go into our lightning round here, because I think you kind of answered our first question, so you're going to have to use a different one. Our first question in the lightning round is always the controversial take: what's your most controversial opinion on application security, and why do you hold this view?

Kyle Kelly:

I guess my controversial take is that if I were building a startup, or really any product-focused company, I probably really wouldn't care about AppSec much at all. There are certain things that, I guess just through practice, I would probably ingrain into developing the product, but I wouldn't necessarily care too much about having a SAST solution, SCA, and all these things, just because every day is about survival. You know, if you're not building, you're dying. At least that's how I look at it.

Chris Romeo:

Yeah, I mean, there's some truth to that in the startup world. I'm on my second one now, and I would say that my general approach, being a security professional, is to apply more security than probably most other startups do, but that's just my nature; that's just how I see the world. Some of those things are non-negotiable, but there's also the pressure to deliver features and deliver product, and if you don't do that, then you go out of business. It doesn't matter how great your AppSec approach was when you're out of business. So there's always that trade-off that has to happen. Once again, it comes back for me to reasonable security. What's reasonable? What's reasonable for a startup? I don't think reasonable is nothing, but it's certainly not what I would expect from a 20-year-old technology company and the maturity that they have. How about a billboard message? What would it say if you could display a single message on a billboard at the RSA or Black Hat conference?

Kyle Kelly:

That's a tough one, just because I really liked it and it's stuck in my brain since this past Vegas trip for Black Hat and BSides and all that stuff. ReversingLabs had a really cool sticker about SBOMs. It was a sticker made to look like the ingredients on the back of a can of soda, your software dependencies as ingredients, which I thought was pretty cool. Just knowing what the heck is going into your code. That's all I care about. On the information security side, people always ask, what is the first thing you would do if you were coming into an org that had no information security practice? And my response is always, I would need to learn what the makeup of the organization is. What infrastructure is there? What devices are deployed? How many devices? Things of that nature. You can't protect something you don't know exists is really the...

Chris Romeo:

How about a top book recommendation? Any books that you've recommended to people, and if so, what did you find valuable about those books?

Kyle Kelly:

It's not technology related, but I really enjoyed Solve for Happy. I'm really into psychology; I'm really fascinated by how humans think, and why the little voice in your head says certain things. If you have a little voice, that is; if you don't, that might be more concerning than if you do. Yeah, I just thought it was a great book. It's kind of helped me understand the difference between the past, the present, and the future. The future leads to a lot of anxiety; with the past, usually there's a lot of grief. But if you just live in the present and the now, you're generally going to be happy.

Chris Romeo:

Alright. How about a key takeaway or a call to action based on our conversation here? Is there anything you want our audience to do as a result of this conversation?

Kyle Kelly:

You know, a lot of this software supply chain security stuff kind of goes out the window if you advocate for developers to pick the right dependencies or components in the first place. So if you're looking to build a startup, maybe find a way to make it easy for developers to pick the right dependencies. Socket does a good job of this, specifically for npm, and I think there's a lot of room for improvement in that space.

Chris Romeo:

Okay. Very cool. Well, Kyle, thank you for joining us for this episode of the Application Security Podcast. We enjoyed talking through the software supply chain and, I feel like, so many other things as well that came out in the conversation. I truly enjoyed having you here as a guest, and thanks for taking the time.

Kyle Kelly:

Yeah, thank you. It's a pleasure being here.

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

The Security Table

Izar Tarandach, Matt Coles, and Chris Romeo