The Application Security Podcast

Erik Cabetas -- Cracking Codes on Screen and in Contests: An Expert's View on Hacking, Vulnerabilities, and the Evolution of Cybersecurity Language

February 17, 2024 · Chris Romeo · Season 11, Episode 3

Erik Cabetas joins Robert and Chris for a thought-provoking discussion about modern software security. They talk about the current state of vulnerabilities, the role of memory-safe languages in AppSec, and why IncludeSec takes a highly systematic approach to security assessments and bans OWASP language. Along the way, Erik shares his entry into cybersecurity and his experience consulting about hacking for TV shows and movies. Before the conversation ends, they also dig into threat modeling, software engineering architecture, and the nuances of running security programs.

Helpful Links:
Security Engineering by Ross Anderson - https://www.wiley.com/en-us/Security+Engineering%3A+A+Guide+to+Building+Dependable+Distributed+Systems%2C+3rd+Edition-p-9781119642817

The New School of Information Security by Adam Shostack and Andrew Stewart - https://www.informit.com/store/new-school-of-information-security-9780132800280

FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @AppSecPodcast
➜LinkedIn: The Application Security Podcast
➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast

Thanks for Listening!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Chris Romeo:

Erik Cabetas has more than two decades of experience in InfoSec. His previous posts include senior roles at Ernst & Young, Fortify Software, and TheLadders.com. In January 2011, Erik started Include Security and brought together a team of experienced InfoSec specialists. Since then, Include Security has become a sought-after consultancy that has performed security assessments for various industries, including technology startups, e-commerce businesses, software vendors, and financial services firms. Outside of his day job, Erik has won several ethical hacking contests, including DEF CON CTF. He's also served as a hacking consultant for TV shows and movies produced by MGM Studios. Erik joins us to explore why we have so many vulnerabilities. Simple question, right? We cover his experience as a hacking consultant for TV and movies, thoughts on memory-safe languages, and why IncludeSec bans OWASP language from their reports. We hope you enjoy this conversation with Erik Cabetas.

Robert Hurlbut:

Hey folks, welcome to another episode of the Application Security Podcast. This is Robert Hurlbut, and I'm a Principal Application Security Architect and Threat Modeling Lead at Aquia, and I'm joined by my co-host, Chris Romeo. Hey, Chris!

Chris Romeo:

Hey, Robert. Chris Romeo, CEO of Devici and thinker of all things AppSec. You guys know I'm working on my tagline and I'm just kind of doing it live because I like the pressure of having to create something on the fly. Um, none of them have stuck yet as people know, but I'll be here all week and there is not a two drink minimum.

Robert Hurlbut:

Excellent, excellent. And today, we're joined by our guest, Erik Cabetas. Erik, welcome. Thanks for joining us.

Erik Cabetas:

Uh, thanks for having me. And my tagline will be, this is the only black shirt I own,

Robert Hurlbut:

Excellent. All right. Well, as we get started, usually when we have a guest on our podcast, we start with a security origin story. So Erik, what's your security origin story?

Erik Cabetas:

Sure. Okay, I'll try and be brief. I went to school for polymer engineering and materials science, not anything related to hacking. My roommate took over my computer, backdoored it. I was like, what is going on here? This is very strange. This is very interesting. I wanted to learn about computers, so I signed up for a class called Exploring the Internet. That class was pretty cutting edge at the time, because this was probably around 1998, and they gave us access to a Telnet terminal on an Alpha 264 Unix box. I was coding HTML and teaching myself shell scripting and stuff like that. Taught myself C, found the whole wonderful world of underground IRC security research, wrote some exploits, learned how to find vulnerabilities, met some wonderful people in my old hacking research groups, and got into the industry in 2001. I started out at Microsoft on the IIS 6 server team; you might recall IIS 5 was the one that had like 25 RCEs. IIS 6 only had one, so we improved a bit. After that I worked at Ernst & Young doing big financial pen testing, on a team called the Advanced Security Center. My company, Include Security, came out of there. Gotham Digital Science, which is now Stroz Consulting, came out of there. Bishop Fox came out of there. So we were all a really fun team of pen testers. But after some years of high travel in that world, I decided to go more corporate and ran a security team for three years here in New York City, where I'm based out of. That was a great experience because it put me on the blue team side of the house. My shoes were regularly stepped on by pen testers and bug bounty people and ethical and not-so-ethical hackers that were trying to extort us, so I had a lot of interesting experiences on the defensive side. But one thing that I noticed throughout that experience was the people I was hiring to do security assessments; I just was not thrilled with the whole process, because I had worked at EY and I knew how that could work at the big level, and I was wondering why nobody did it better at the smaller level. So I started Include Security about 12 years ago now, and that's what we do. We focus on hardcore technical assessments of everything, from the silicon up to every network tier and layer that you can think of. So that's myself. Along the way I had some lucky opportunities where I got to play a lot of CTFs, won DEF CON CTF, judged the CSAW CTF for many years, which is the largest academic CTF, and, yeah, all sorts of fun in this industry after 25 or so DEF CONs.

Chris Romeo:

Very cool. So whenever I meet a DEF CON CTF winner, I always want to know more about that story, because I know what that means in our industry. That's a big deal. I'd love it if you could tell us a little bit more of the story about that experience and winning, and any details you can share along the way.

Erik Cabetas:

Sure, absolutely. So when I first started finding these war games and CTFs online, there were only really two in the world. One was DEF CON CTF and the other one was based out of the Middle East; I think it was in the United Arab Emirates. You would just go to the website and play war games and level up, and it was kind of cool. Maybe 1999 timeframe. Come to find out many years later, hanging out at ShmooCon, that that was actually a website run by the U.S. intelligence community to see what the current level of skill set in the hacking community was. So, yeah, I'm sure I was profiled at some point by them. But the way I found DEF CON CTF was, I was doing work at Microsoft, and I was put in this room called the dungeon. The whole room was dark, and there were just Christmas lights lighting it. There's this guy in a big druid robe, blasting psytrance in the corner of the room, and he's like, well, what do you hack on? And I'm like, well, I'm trying to learn buffer overflows. I'm trying to really learn these exploits. And he's like, you should go play DEF CON CTF. Just trust me, learn that stuff and play DEF CON CTF. I was like, okay. I read up about it and I was like, oh, you need a whole team. I don't know anybody in this industry. And he goes, just show up and I'll introduce you to some people. The guy was named Michael Eddington, co-founder of Leviathan Security and Deja Vu Security. The guy in the druid robe. When we got to DEF CON, he introduced me to a bunch of the different tables, and I got the same reaction from every single table. They're like, yeah, we've been practicing. Don't just walk up to us and ask, can I join your team? You're probably some plant. So I just told everybody, I wrote down my phone number. I was like, hey, here's my number. If you change your mind, I'd love to play. And after day one, the last-place team was like, listen, you seem like a nice person, come join our team, because we can't really do anything from here. Even if you are a plant, you can't do anything about it; we're already in last place. I was like, all right, fair enough. That team was called Team Anomaly. And, yeah, by the end of day one we were in eighth place, and by the end of day three, we won. Hence the name. Well, they had named the team before we actually pulled off the anomaly.

Robert Hurlbut:

Oh,

Erik Cabetas:

But my contribution was, I wrote a couple of exploits. One was a web app exploit, and one was a buffer overflow exploit on a CGI. And the PHP exploit that I wrote: at the time, the technique, local file inclusion, had not really been researched or discussed publicly. So I used an LFI attack to get RCE on the system, but at the time we were doing that, there was no public research about it. Nobody said, hey, here's a new attack style or whatever. We were just figuring it out on the fly. And, you know, in the subsequent years since then, that's become a standard technique. But when you're in the mix and in the heat of CTF battle, you're just like, what is this? I don't know. How's this work? I don't know. Let's just figure it out. Okay. Exploit works. Great. And like a year later, people are like, oh, here's a whole presentation about this technique and what it is. Not based on my work; somebody just published it like a year later, coincidentally.

Robert Hurlbut:

Oh, very cool. Very cool. So, just to dive in today to some of the things we're going to be talking about: one thing we noticed when we were talking to you about being on the podcast is that you mentioned you were a hacking consultant for TV shows and movies, and we're very curious about that. What does a person who is a hacking consultant for TV shows and movies do, and are there any stories you can share from those experiences?

Erik Cabetas:

Sure. So, this was, when was this, 2013-ish? I had heard that they were going to make a TV show about hacking, starring Christian Slater. And I'm sure every single person listening to this is going to be like, oh, Mr. Robot. No, there was actually another hacking TV show starring Christian Slater before Mr. Robot, called Breaking In. It aired on Fox for two years. They were trying to make The Office with hackers, and it turned out to be this sort of hybrid of The A-Team, that old show from the 80s, with the comedy of The Office, with a ragtag group of people. It didn't quite work out, so after two years they canceled it. But I had heard that the show was in very early stages of planning, so I did some OSINT analysis to find the email addresses of the development staff at MGM, the director and the writer. I just emailed them all directly, and I think the subject was something like, Please don't make your hacking show suck, or something like that. And I wrote them and said, listen, I'm a hacker, we've seen too many bad shows, and I gave them the profiles of like eight of my friends. Hey, here's Alex Sotirov, Dino Dai Zovi, Stephen Ridley. Here's a bunch of different people who are really, really good at this. Talk to any of them, but please don't go get somebody that doesn't know what they're doing. So they wrote me back and they're like, well, what about you? And I got on the phone with them, and that ended up working out. Now, what does it mean to be a consultant for these? For the most part, 95 percent of your work is pre-production. The script for each episode, they usually work on it in a day. The writers' room on this particular show had maybe eight or nine writers. One person takes the lead writing the script, and then everybody else starts throwing out ideas, and the main writer is kind of the one that, like, does the commits. He's like, yeah, add that, add that, add that. So you've got the scribe person, the main writer, and then everybody else verbally spitballing ideas and turning the episode into something. And it starts out from an outline, usually a half-page to a page outline from the main writer, so they've got the main points of the episode, and then they fill in all the rest. Where I helped out is, after they got that first draft of the episode, I would take a fine-tooth comb to it and look for everywhere technology might play into it. So it'd be like, okay, they say this and then they do this, so you could have them holding, you know, a Raspberry Pi while he says that, and that would add technical accuracy and correctness to a statement. I would add in little notes like that. So after I get the first draft, I submit my notes. And if there's technical dialogue that I saw was incorrect, I usually give them three options. One, this makes it a tiny bit more correct, but fits the story. Then this one makes it a lot more correct, but kind of deviates from the narrative a bit.
And then the third is, this makes it 100 percent correct, but you're going to have to rewrite the section around it, or maybe this entire scene. So I'd always give them three options of, kind of, levels they could correct it to. Sometimes they choose option one, sometimes they choose option three. It just depends on what the lead writer feels at the moment. That way it gives them the flexibility to work in their narrative and what they want the episode to be, but also lets them know, hey, this is how you could get it 100 percent right if you wanted to. And then every once in a while, I would give them little things to add, like, hey, if you mention this here and use this line with these words, it would reference Anonymous, the hacking group, and people would get that, you know. So it's just a back and forth like that. I give them notes, and they incorporate some of them into the scripts.

Chris Romeo:

Did you ever get to visit the set while they were shooting or was it all remote?

Erik Cabetas:

So not the set; it was all remote. I did get to visit the writers' room and work with the pre-production staff. So I worked with the producers, the directors, and the writers. Most of my work was with them, although I did hang out at a barbecue at the director's house, where I got to meet a lot of the, um, talent, if you will.

Chris Romeo:

Okay, that's very cool. It's good that you had that perspective of, let's make something that is actually realistic for the people who understand these concepts. Because there are so many shows we've seen, I'm not going to name any names, where it's like, really? Come on, you can't do that. Yeah, there's one about a particular law enforcement branch, I won't mention which, where a character will literally type about a hundred keystrokes and have access to anything on earth. I'd love that. Whatever programs they're using there, I'd love to have access to them, but I don't think it's real. It takes 200 keystrokes. We all know that. 200 keystrokes in 12 seconds if you're going to actually do something, so.

Erik Cabetas:

Just load up Hacker Typer, start typing. The magic kind of happens.

Robert Hurlbut:

magically happens.

Chris Romeo:

That's, I mean, listen, I'm old enough to remember, was it Superman III with Richard Pryor? I can never remember if it was 2 or 3.

Erik Cabetas:

The salami...

Robert Hurlbut:

Theory.

Chris Romeo:

Yeah, yeah, come on. I mean, he just sat down and typed away, and satellites started changing orbits and...

Robert Hurlbut:

And he didn't know why, if you remember. He said, I don't know. I don't know why. It just happens.

Chris Romeo:

What a great movie. I've got to find a way to watch that again. All right, we should get serious though, because, Erik, we've got a number of technical topics we want to dive into here with you. One of the things you shared with us as we were preparing for this interview was kind of a big-picture idea, and this is where I want us to start. You offered some commentary on the current state of vulnerabilities. So I'm curious to get your take: why do we still have so many vulnerabilities? We still have CVEs popping up all over the place. We've got this find-it-but-don't-fix-it kind of culture. There are frameworks in play. There are all of these things, but are we getting any better? Or are we not? Or are we staying the same? I'd love to get your take on that and see where you go with it.

Erik Cabetas:

Sure. It's a big topic, right? I think we could write a book on this topic, but at a high level, I'd say the answer is a cautiously optimistic yes, we're getting better, but the rate of improvement is nowhere near what we'd like. It's a linear slope that's slightly positive; it's not the exponential improvement we all would love to see. And the reason it's getting better is because we now have the full picture in place. Five years ago, we weren't thinking of the dependencies of our systems, and people weren't attacking those as much. But the attackers kept expanding, and the defenders kept expanding their understanding of their whole software ecosystem, from CI/CD and dev builds all the way to the final deployment and operation. We now really, truly consider the entire SDLC, whereas before I think a lot of people just gave lip service to that concept. So we are going to get better now that we have that holistic mindset, and there are like a million SCA and SBOM tools out there. That's going to need to distill down to a couple of really effective products and tools; currently, I think we're still a little bit Wild West in that space. But now we have all the third-party trust well considered, whether it's web, like JavaScript SDKs, or just straight binary blobs being put into your mobile apps and things like that. I think we've got a holistic understanding of that. So these things will get better; they're just a little slow to get better. And how could we amplify that? In my opinion, it comes down to all of the standards in security, whether they be compliance, contractual obligations with your partners, peers, and customers, standards that you declare yourselves as meeting, or regulatory requirements. I would say 99 percent of our clients come to us for one of those four reasons, at least on the B2B side. On the B2C side, maybe you're some social network or an e-tailer, and maybe there are other things you're concerned about, like the Federal Trade Commission, not so much those four points. But on the B2B side, 99 percent of my clients come to me for one of those four reasons. Now, all of those kind of set their standard 10 to 20 years ago. They're like, hey, you want PCI DSS certification, you've got to do a pen test. Okay, what does that mean? Oh, well, we didn't define that, and we know there are a million levels of maturity in that. So I think a big part of it is evolving our standards and our needs. If you said, okay, it's 2024, you've got to have some static analysis in place, you've got to have an expert pen test team that uses a good methodology; there's more to this game than just saying, I did a pen test. One thing I'm seeing the last couple of years, primarily in the startups, is just the race to zero. Like, okay, what is the cheapest possible thing I can find that gives me zero results and costs zero dollars? That's what they're trying to optimize for: zero results and zero dollars.
They want zero results because that gets them the least amount of work on their side as far as getting through a procurement process and selling their software. Great, we got no high-risk findings at all; we're going to blow through this procurement cycle with this customer. And that's a real thing that's happening. People are purposely choosing companies, and AI-powered security or whatever you want to call it, so that they actually get fewer results. I've had clients, and I've talked to some of my peers, I won't name them, but some of my peer competitors at other companies have had clients tell them, I can't use you guys this year because you kind of killed us last year; there were too many findings. So that's a real thing that's happening. And unless we change the standards and the compliance requirements and all of that, it's going to continue to happen. And the second...

Chris Romeo:

That is a sad, sad state of affairs, though, that people would really race to the bottom like that. I've started companies and built products, and maybe it's because I'm a security engineer at the core, with all the other things kind of in the outer layers, but I couldn't imagine doing that. Part of building a platform and a product for customers is that I want to do the reasonable amount of security for where I am as a company at that point, and a race to the bottom is not reasonable. You could also ask, do I need a weekly pen test of each release that comes out to truly generate enough assurance? I mean, if I had a customer paying me 100 million a year for the software, yeah, we could do that, that would make sense. But there is something more reasonable between the race to the bottom and the 100-million contract that requires continuous pen testing.

Erik Cabetas:

And I don't think having the same high standards applied across the entire industry is the solution; that's not what I'm advocating for. Because if your software is "let me help you pick your outfit of the day" versus "let me control the Boeing plane," there are different levels of maturity and applied assurance needs that I would expect. How this could happen is, I think, the compliance, the standards, everything has got to level up, and then organizations like CISA, coming out of the U.S. government, step in. And I hear a rumor that perhaps there may be some presidential executive orders stating, hey, things need to get more mature. Just a rumor, nothing confirmed, but that there might...

Chris Romeo:

The challenge there, though, is, and I had this same problem with the initial executive orders, right? They only apply to people that are providing software to the government. Which sounds like it should move the industry, but at the end of the day, it doesn't really. Yes, there are some big companies that sell software to the government, and I get it, they're finding ways to fit their software into those requirements. But for the bulk of the market, executive orders just don't drive them to action at the one- or two-year mark. Yes, there will be some carryover; there's always some carryover from standards applied to the government over a number of years afterward. I'm with you, I want to see us do something better, but I don't know that the executive orders are going to do anything to move the needle before I retire. And I've probably got 10 or 15 years left in this industry.

Erik Cabetas:

So I think it would be a trickle-down effect, right? The US government is not buying the latest CI/CD from some startup that just started last year; that's not happening. But they are buying all the major software from Adobe, Microsoft, SAP, all these big independent software vendors. So if they can just tell all these ISVs, hey, meet this minimum level of security, and that is actually well defined, then I think it's a trickle-down effect, because all those ISVs will then tell their partners and the people they procure software from, hey, we can't work with you unless you meet our requirements, our level of need. So it's a trickle-down effect in that way. But even if CISA today releases something that says you've got to do this, it takes years to get to enforcement. They'd put it out for request for comment, then it actually gets written in and people have to meet it two years from now, then there's a grace period. And then after it gets enacted, years after that, the big companies trickle it down to the small companies through those transitive requirements...

Chris Romeo:

Yeah. And my hope was that the market, I don't know why I'm such a pie-in-the-sky thinker, I guess, but I had this dream that the market was going to drive this. That consumers were going to get smarter. I started in security in 1997, so I've watched it, and we're all basically from the same vintage here, right, as far as our backgrounds in security. Over the last number of decades I thought, the consumers are going to catch up, and they're going to drive the companies to care for their data, both the security and the privacy of that data, better, and the market is going to weed out people that don't do it. I'm sad to say I was completely wrong, because it hasn't happened. We remember the beginning of the data breach wars, where it was a big deal, like, ooh, I got a letter in the mail, and I can't believe my financial world's falling apart, I have to go get a police report because somebody stole my credit card, right? But that's been accelerated so much that, when's the last time you cared about a data breach? We don't, because it doesn't really impact us at all.

Erik Cabetas:

Exactly, because all the things that could result from that, the identity theft, the credit card thievery, all of these things have now been mitigated. There's cyber insurance that covers it. There are all sorts of ways to deal with fraud. We've essentially baked fraud into the standard course of business. Instead of making everything more secure, we just accepted that things are not going to be secure. It's kind of the consumer equivalent of assumed breach: we're just going to assume everything's always going to get hacked, and let's deal with that reality.

Robert Hurlbut:

Huh.

Erik Cabetas:

But yeah, it is sad, but it is reality. So we kind of have to align with what the consumers say. And the motivations and incentives are something I think about a lot. There's this really great academic conference, the economics of information security conference, I think it's called, put on every year; I think it's in Boston. I love the papers from that.

Robert Hurlbut:

Very cool. So, just changing a little bit, uh, another topic is around memory safe languages, and just curious about your thoughts on those types of, um, languages and the current initiatives towards their adoption.

Erik Cabetas:

Sure, and you know where I'm going with this Robert, I'm going to ask you to define what a memory safe language is.

Robert Hurlbut:

Well, I probably have a longer definition based on just experience. Many years ago I started with C and C++, which were always dangerous with pointers and all kinds of things you could do with memory; you had to be very careful with memory. And then after that we had what we called memory safe at the time, Java and .NET, which were more non-deterministic in terms of memory handling. You would have garbage collection, all kinds of things like that to help you, but at the same time you still had to do things to take care of it. So my understanding is that memory safe means taking some of that onto the platform, onto the language itself, so that as a C programmer you're not having to think about all those things: what do I put on, what do I take off, did I get the order right? Did I still free up what I'm using so that the garbage collection can take over? That kind of stuff is just gone.

Chris Romeo:

Alright, I'm going to do a ChatGPT summary. I'm not using ChatGPT, but I'm going to use my built-in summarization to say everything Robert just said: abstracting memory management away from the programmer, away from the developer.

Erik Cabetas:

Yeah, and then you have to end with, does that sound like a good solution to you? Thanks, ChatGPT. So I think we kind of understand: Assembly, C, sometimes C++, these languages with manual management of memory create a dangerous situation, which yields the buffer overflow type of attacks; some people call it memory corruption, some people call it memory trespass. But for this whole family of attacks, if you look at the numbers put out by Google and by Microsoft, they've analyzed all of the vulns found in their browsers, both publicly and privately, and they both come up with around the same number. I think Mozilla also did this. All three of the major browser vendors did the same exercise and got very close numbers: about 70 percent of their vulnerabilities are memory corruption related. So that tells you a lot. I think at Microsoft it was like 74 percent, at Mozilla it was like 69 percent, but if you average them all out, it's about 70 percent. This tells you a lot about how important that concept is. And the whole reason people are using these more quote-unquote dangerous languages is to be performant and fast, right? We're trying to squeeze out that little bit. When you have software as complicated as a browser, everything that's one cycle longer builds up, because you've got so many threads and so much JavaScript running, and it ends up being a lot slower for the end user. So how do we figure that out? We have to use these more modern languages that are fully featured but manage the memory for you a bit. Rust and Go are kind of my go-to examples of memory-safe languages. But we do know from those surveys from the browser companies that if everybody was on Rust and Go today, we'd still have 30 percent of the vulns remaining, and those can be worked out with good layered security. You know, Robert, I know you do a lot of threat modeling: really understanding the threat model of where things can go wrong between layers, and ensuring the access controls and tightly defined abstractions across those layers. That's a whole other type of software engineering. And frankly, it's a type of software engineering where I've not seen a ton of innovation. I see a lot of code-level innovation, and sandboxes and memory protections and exploit mitigations, tons and tons. But tell me, when's the last time you saw somebody say, all right, I'm going to integrate the software engineering architecture and the threat model into something together, either from the security side or the development side? I've not seen anything that really helps push that well. It's more manual. The people that I know that do threat modeling, nobody's done something integrated like that, really. What are your thoughts, Robert?
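[Sidebar, added in editing: a minimal sketch, not from the episode, of the class of bug behind the roughly 70 percent figure Erik cites. In C, a copy that runs past the end of a buffer silently corrupts adjacent memory; in safe Rust the equivalent mistake is either rejected by an explicit length check or stopped by a bounds-check panic, so it never becomes memory corruption. The function and buffer names here are hypothetical.]

```rust
// Copy bytes into a fixed-size buffer the way a C memcpy/strcpy caller might,
// but with the length handling that safe Rust forces you to confront.
fn copy_into(dest: &mut [u8], src: &[u8]) {
    // The "buffer too small" case a C memcpy would silently overflow past.
    if src.len() > dest.len() {
        eprintln!(
            "refusing to copy: source ({} bytes) larger than destination ({} bytes)",
            src.len(),
            dest.len()
        );
        return;
    }
    dest[..src.len()].copy_from_slice(src);
}

fn main() {
    let mut buf = [0u8; 8];
    copy_into(&mut buf, b"hello");            // fits, copies normally
    copy_into(&mut buf, b"way too long!!!!"); // rejected instead of overflowing
    println!("{:?}", buf);

    // Even if the length check above were forgotten, an out-of-bounds index
    // such as buf[42] panics with a clear error instead of trespassing into
    // neighboring memory.
}
```

The worst case in the safe version is a refused copy or a controlled panic, neither of which hands an attacker a write primitive; that is the distinction between the 70 percent that memory safety removes and the 30 percent of logic and design flaws that remain.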

Robert Hurlbut:

Yeah. Overall, I see more ad hoc types of solutions: those who have good experience and background in both areas being able to put them together. I think the future is bright, though. I see new tools coming along, various aspects that are going to help, some of the AI solutions and so forth, that are going to try to help people think about those kinds of things and put them together. So we'll see, but that's what I've seen so far: more ad hoc, those that have the experience being able to pull it together.

Erik Cabetas:

So, you know, the adoption of Go and Rust: these languages have been around a while now. Go has probably been commercially viable on the order of eight years, and Rust probably on the order of four years. So they're out there. You know, 1Password, and this is not a secret, you can reverse engineer it and figure it out, and also see on their job site what programmers they're hiring: they're hiring Rust programmers. So they're using a lot of memory-safe languages, and you see the results. 1Password, compared to all of the other companies out there that do similar types of protections, is much more secure. They have far fewer vulnerabilities; the other guys all have tons more. And that's a core part of it: starting with the language choice. Then it's design on top of that, and further maintenance and deployment and all that, but it starts with language choice. So is that going to be something that comes from CISA? If we saw something from CISA or from the US government that said, hey, every vendor that sells to us has to, within four years, create all new software in a memory-safe language, something like that would change a lot. You could say something like "all new software that you're creating," because a lot of these languages like Go and Rust can interoperate with C and C++. I can make a DLL on Windows in Go or Rust that I then call from the other components, right? So you can interoperate with these things. It's not just, it's got to be all Rust or it's got to be all C.
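[Sidebar, added in editing: a hypothetical sketch, not code from the episode, of the interoperation Erik describes. A Rust library can expose a C ABI function and be built as a shared library (a DLL on Windows, a .so elsewhere) that existing C or C++ components call, so a codebase can migrate one piece at a time. The function name byte_len and the Cargo settings in the comments are illustrative assumptions.]

```rust
// Built with crate-type = ["cdylib"] in Cargo.toml so the output is a
// shared library the existing C/C++ build can link against.

use std::ffi::CStr;
use std::os::raw::c_char;

/// Returns the number of bytes in a NUL-terminated C string handed over
/// from C or C++ code. Exposed with the C ABI so legacy callers need no
/// knowledge of Rust.
#[no_mangle]
pub extern "C" fn byte_len(s: *const c_char) -> usize {
    if s.is_null() {
        return 0;
    }
    // SAFETY: the caller promises `s` points to a valid NUL-terminated string
    // that stays alive for the duration of this call.
    let c_str = unsafe { CStr::from_ptr(s) };
    c_str.to_bytes().len()
}
```

On the C or C++ side the declaration is just `size_t byte_len(const char *s);`, and the rest of the program keeps building exactly as before; only the component behind that boundary has been rewritten in the memory-safe language.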

Chris Romeo:

I'm just imagining people

Erik Cabetas:

crazy.

Chris Romeo:

after that statement comes out, writing a wrapper in Go or Rust around their C and C++ infrastructure. It's written in Go. I'm telling you, it's written in Go. And then everything is like a transport layer toward their internal C and C++ code.

Erik Cabetas:

Putting a band-aid around the bazooka pointed at your foot.

Chris Romeo:

Yep, yep. It's proxy as a service. It's a new, uh, it's a new type of system that I'll be releasing if this becomes law.

Erik Cabetas:

Yeah. And you know what, even within the Rust and Go world, there are ways to use them unsafely. Even other, more mature languages, like C#, have an unsafe keyword. But the going understanding is, if you're using the unsafe keyword, you probably screwed up; there's got to be a very, very good reason you're using that keyword in C# or Rust. Still, it's better to have just one dangerous section of your code that's, you know, 50 lines or less, hopefully, than having all of your code be that.
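[Sidebar, added in editing: a small, assumed illustration of Erik's point about confining the dangerous section, not Include Security code. The raw-pointer work sits in one commented unsafe block behind a safe function, so a reviewer only has to audit those few lines; it mirrors the split_at_mut example from the standard Rust documentation.]

```rust
/// Splits a mutable slice into two non-overlapping halves, e.g. so each half
/// can be handed to a different worker. Callers only ever see a safe API.
fn split_halves(data: &mut [u32], mid: usize) -> (&mut [u32], &mut [u32]) {
    assert!(mid <= data.len(), "split point out of range");
    let len = data.len();
    let ptr = data.as_mut_ptr();
    // SAFETY: the ranges [0, mid) and [mid, len) do not overlap and both stay
    // inside the original allocation, so handing out two &mut slices is sound.
    // This block is the only place reviewers need to reason about raw pointers.
    unsafe {
        (
            std::slice::from_raw_parts_mut(ptr, mid),
            std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
        )
    }
}

fn main() {
    let mut v: [u32; 6] = [1, 2, 3, 4, 5, 6];
    let (left, right) = split_halves(&mut v, 3);
    left[0] = 10;
    right[0] = 40;
    println!("{:?}", v); // [10, 2, 3, 40, 5, 6]
}
```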

Chris Romeo:

100 percent. All right, one more issue I want us to tackle here before we get to Robert's famous, or infamous, I don't know which word to use there, lightning round. When we were looking at potential topics, you had this concept of OWASP language being banned in IncludeSec reporting. And of course, Robert and I are both big OWASP fans. So I was like, oh, interesting, I want to unpack and understand more about this, because maybe there's something I don't know. So yeah, if you could enlighten us on what OWASP language is banned and why you've landed on that stance.

Erik Cabetas:

Absolutely. So first, let's take a step back: what is Include Security? We are a team of all expert assessment providers. Assessments are usually called pen tests by our clients, but they're not always, because we may write custom fuzzer rules, we may write a custom fuzzer, we might write static analysis rules on our own with Semgrep, something open source, or we may use our client's commercial off-the-shelf static analysis tool. There are all variations and flavors of security assessment; it's not just a pen test. We might do dynamic tracing of system binaries. There's all sorts of stuff going on. And I thought about it: what we do is basically an investigation. We're doing an investigation into the concept of assurance for our clients, software assurance. In our investigation, we have our methodology that we execute, and then we have different levels of confidence in our findings. But the way we have to approach this is as if we're investigators. So I started reading some tech writing books and looking at how forensics investigators write their reports, and the overarching theme is the term objective, right? You have to be objective in the way you write about these things. If you start using subjective language, you get away from that investigative mindset, and things start creeping in like judgment and preference, things that, in my opinion, shouldn't be in an expert team's reports. And once you start thinking about these types of things, then you go to OWASP and you start reading the OWASP Top 10 and the guidance from OWASP and their testing guide, and you're just like, this thing is riddled with subjectivity. Every layer here, there's judgment, there's personal bias, there are preferences in all of this. So at IncludeSec, in our internal style guide, I said, okay, these subjective terms are banned. We don't use terms like sensitive. Nothing is sensitive in our assessments. It is security critical, or confidential to customers, or something else, right? There's some objective statement we can always replace these subjective words with. Insecure is another one. That's the number one problem with OWASP: they use the word insecure everywhere. What does that mean? Does it mean it's encrypted or not encrypted? Does it mean the access control is broken? There's always a way to explicitly define what you mean, and OWASP doesn't do that. I was just looking at the 2021 OWASP Top 10, and right off the bat we have broken access control. What is broken? It's just the most subjective word. Cryptographic failures. What is failure? Injection. It's so vague. Insecure design. Subjective, right? I could go all day on this, but I think you get the point. It's treating what we do as an investigation, being objective, and treating the software we're reviewing as the assessment subject. That's how we write our reports: we work with an assessment subject. It might be multiple components, or it might be a network of components, it might be third party, but all of these are the assessment subject and components of the assessment subject. That's what I meant.

Chris Romeo:

So you end up, I mean, you end up writing a much more concise report that is focused, because you're...

Erik Cabetas:

I view it as defensible.

Chris Romeo:

Words that... yeah, defensible. There you go. I like that word too.

Robert Hurlbut:

Thank you.

Erik Cabetas:

Yeah, so like I want to have a finding where somebody can't argue, uh, based on a subjective word, right? Like, oh, that's not insecure. It's secure to me, right? Okay, within your business context, within your risk model, you're right. It is secure to you, right? And like, why are we even having this discussion? Let's just change all the terminology and now we don't have to have that discussion.

Chris Romeo:

Yeah. And I'm going to go off the script here for a second and say, for people that are looking to do testing engagements, the things Erik just described are not things most companies are able to do: create custom SAST rules, create fuzzers, all these types of things. So I'm giving you a plug here, Erik. I mean, IncludeSec, what you guys are doing, if those are the things you're doing as part of engagements, then you're in the top echelon. Because, like I said, I've been around security for a long time; I've seen the wave of pen testing vendors that came through running vulnerability scanners and that whole thing. But based on the things you're describing, that's next-level stuff right there. I just want the audience to know that.

Erik Cabetas:

I may be biased, but I agree with you a hundred percent. I created this team to be an all-expert team of hackers, and part of being that all-expert team is thinking about the business side as well as the industry side when we're doing our work. So we don't just put out a report, we put out a better report. We want to put out something that our clients have full faith in, and that their customers and business partners have full faith in. I could go on about branding and trust and all that stuff, but that's another episode.

Chris Romeo:

Yeah. Yeah. So, uh, Robert, take us into the lightning round.

Robert Hurlbut:

Okay. Yeah. So the lightning round is our three questions that we ask. Uh, the first one is, uh, just really your controversial take on this. Uh, what's your most controversial opinion on application security and why do you hold that view?

Erik Cabetas:

Okay. So you gave me this about an hour ago, and I thought about it for a minute, but here's the statement: FAANGs are utterly failing the industry in supporting secure development. When I say FAANGs, I'm talking top 10 websites, top 10 independent software vendors; that's who I'm thinking of when I'm making the statement. You saw companies like Microsoft and Google, like two years ago, something like that, come out with a joint press release with the U.S. government saying, we're going to spend 20 billion invested in security over the next five years. But that's all their own stuff and their own products. All they would have to do is carve out like 10 million of that and create a secure code training platform that the entire world could use. It would be free, and it would eliminate like a dozen different training companies. Now, is that good for those training companies? No, but it's good for the holistic security of the world, and it would be 10 million out of the billions they want to spend. You could create an expert training platform that would have all levels of knowledge from all sorts of practitioners. It's a real thing that could really be done. And if the FAANGs got together, the top 10 ISVs got together, and they just threw in a million apiece, that is nothing to them, and this could really get done. So I think things like that, initiatives like that, and the things we talked about earlier in this podcast, like, all right, what are the requirements for what a pen test is? What is software assurance? Really defining those. I see Google doing a little bit of this with OpenSSF, that organization they have. They're putting out some cool tools, some cool ideas. They're sponsoring some really cool open source assessments. But I feel like a lot more could be done on the proactive side. There's no reason why these companies can't get together and buy out awesome companies like Semgrep, or CodeQL from GitHub, and just make that open and free for everybody, and now the entire software industry is improved. This is nothing for these companies. If they're going to invest 20 billion dollars, 10, 20, 30 billion dollars, we're not talking a lot of money here compared to that scale, right? We're talking tens, if not maybe a hundred million. So, yeah, I think that is the core of why all of this is failing: those companies aren't taking the responsibility to really level up security where they could. And if they don't, then the government's going to have to mandate it. So that's my controversial take.

Robert Hurlbut:

Uh, the next is a billboard message. What would it say if you could display a single message on a billboard at the RSA or Black Hat conference?

Erik Cabetas:

Can you just limit me to the RSA conference?

Robert Hurlbut:

Sure. Okay.

Erik Cabetas:

Okay. Shut this conference down. The RSA conference needs to be completely separated, or maybe not shut down, but split in two. We need to be okay with the fact that there is a business conference in security, and that is what RSA is, but it's still pretending to dress up as a technical security conference. Just rename it the Business of Security Conference or something like that, and make it such that the decision makers are there, the vendors are there, and there's an actual good organization of those two networks interacting, so it's not the current shit show that it is. It's just so unorganized. The decision makers want to buy, the vendors want to sell, and that simple two-way ecosystem is not serviced well by RSA Conference in any way. I think it needs to go away. There you go, second hot take for you.

Robert Hurlbut:

I think we got the controversial in that one as well. All right. And the last one is, uh, what's your top book recommendation and why do you find it valuable?

Erik Cabetas:

Gotcha. Um, well, I have the bookshelf behind me, and, uh, if you can see, this is Security Engineering by Ross Anderson. Um, in my opinion, this is the book that changed a lot of the ways that I think about security. Um, and I think it's on its third revision or something like that.

Robert Hurlbut:

I think he just finished.

Erik Cabetas:

And my buddy Alex, who's head of security for Qualcomm, about a month ago gave a scathing review of the book on his LinkedIn profile. And listen, like 80 percent of his points were totally valid. There are some things in this book that are a bit too academic, and some things that are a bit of just storytelling without actual practical application. But if you can read between the lines on all those parts, there are lots of aspects of the book that talk about security versus usability, security as a business enabler, and lots of those concepts, and it got my head running. I read that book when I had just joined as director of security at that e-commerce company. So I'm there on a blue team, and I read that book exactly at the time I needed to get that mindset. Another book was The New School of Information Security by Shostack. That book and Security Engineering, I read those together right when I got that role, and I really was able to move my mindset from the "hack, write exploits, audit code" mindset to "here's how to run security programs." You really have to get out of the idea of, let me just show you how cool I am at finding vulns, and instead actually positively affect the way the business is run.

Chris Romeo:

Yeah, that's good advice. All right, so Erik, what's a key takeaway or call to action? How do you want to wrap up this conversation?

Erik Cabetas:

So I'll give you the high level and the low level. The low level is: hey, you're a security engineer or security manager listening to this. Think about everything you do in a business context, right? Don't think about things as black and white, like, this has more vulns or fewer vulns. Think about risk and revenue as bands of understanding, and once you start thinking about that, I think your mind will change a lot of the ways you do things. But two, on the macro level, harking back to what I said before: the industry's on a race to zero, and the only thing that's going to be able to stop it is companies digging their heels in and saying, okay, no, we're no longer accepting crap security assessments. I want to name some of my competitors, but I'm probably not going to go with that hot take right now. But, you know, there are competitors of mine that will do a pen test for 5,000 dollars and just give you a report, and that works, right? The more of that that gets allowed in this industry, the less actual security there is going to be. That's my macro. So whether it's CISA, or whether it's the FAANGs stepping up and saying, hey, we're going to stop this collectively, something's got to happen, because it's getting worse before it gets better, in my opinion.

Chris Romeo:

Erik, thanks for taking the time to be a part of the Application Security Podcast. I really did enjoy the depth that we went to, and the breadth, like we...

Erik Cabetas:

Listening to me rant.

Chris Romeo:

Yeah, we covered a lot of stuff. We covered a lot of ground in this, and it's good. I think these are all big issues that a lot of people are trying to figure out, so I think it's going to be helpful on multiple levels. Thanks for being a guest.

Erik Cabetas:

Absolutely. Have me on anytime in the future. It's a great show. Loved meeting you guys.

Robert Hurlbut:

Thanks!

Security Origin Story
Winning DEF CON CTF
Hollywood Hacking Consultant
The Current State of Vulnerabilities
Memory Safe Languages
Banning OWASP Language?
Lightning Round
