The Application Security Podcast

Arshan Dabirsiaghi -- Security Startups, AI Influencing AppSec, and Pixee/Codemodder.io

December 05, 2023 | Chris Romeo | Season 10, Episode 35

Arshan Dabirsiaghi of Pixee joins Robert and Chris to discuss startups, AI in appsec, and Pixee's Codemodder.io. The conversation begins with a focus on the unrealistic expectations placed on developers regarding security. Arshan points out that even with training, developers may not remember or apply security measures effectively, especially in complex areas like deserialization. This leads to a lengthy and convoluted process for fixing security issues, a problem that Arshan and his team have been working to address through their open-source tool, Codemodder.io.

Chris and Arshan discuss the dynamic nature of the startup world. Chris reflects on the highs and lows experienced in a single day, emphasizing the importance of having a resilient team that can handle these fluctuations. They touch upon the role of negativity in an organization and its potential to hinder progress. Arshan then delves into the history of Contrast Security and its pioneering work in defining RASP (Runtime Application Self-Protection) and IAST (Interactive Application Security Testing) as key concepts in appsec.

The group also explores the future of AI in application security. Arshan expresses his view that AI will serve more as a helper than a replacement in the short term. He believes that those who leverage AI will outperform those who don't. The conversation also covers the potential risks of relying too heavily on AI, such as the introduction of vulnerabilities and the loss of understanding in code development. Arshan emphasizes the importance of a feedback loop in the development process, where each change is communicated to the developer, fostering a learning environment. This approach aims to improve developers' understanding of security issues and promote better coding practices.

Links:
Pixee https://www.pixee.ai/
Pixee's Codemodder.io: https://codemodder.io/

Book Recommendation:
Hacking: The Art of Exploitation, 2nd Edition, by Jon Erickson: https://nostarch.com/hacking2.htm

Aleph One's "Smashing The Stack for Fun and Profit":
http://phrack.org/issues/49/14.html

Tim Newsham's "Format String Attacks": 
https://seclists.org/bugtraq/2000/Sep/214

Matt Conover's "w00w00 on Heap Overflows" (reposted):
https://www.cgsecurity.org/exploit/heaptut.txt

Rain Forest Puppy (rfp), aka Jeff Forristal

Jeremiah Grossman's writing:
https://www.jeremiahgrossman.com/#writing

Justin Rosenstein's original codemod on GitHub:
https://github.com/facebookarchive/codemod

FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @AppSecPodcast
➜LinkedIn: The Application Security Podcast
➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast

Thanks for Listening!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Chris Romeo:

We have the pleasure of hosting Arshan Dabirsiaghi, a visionary in the realm of application security. Arshan's journey from co founding Contrast with Jeff Williams to his current endeavors is filled with invaluable insights and lessons that have shaped the cybersecurity landscape. We'll delve into the genesis of Contrast, explore the transformative role of AI in application and product security, and discuss the ever important human element in the world of automation. We'll also dive deep into Arshan's decision making process behind Pixee and get a glimpse into the open source component of their technology, Codemodder.io. Gear up for an enlightening conversation with Arshan as we navigate the intricate world of application security and beyond. Hey folks, welcome to another episode of the Application Security Podcast. This is Chris Romeo. I am the CEO of Devici and also a general partner at Kerr Ventures, joined as always by my partner in crime, even though we don't do any crime, but my partner in crime, Robert Hurlbut.

Robert Hurlbut:

Hey Chris, yeah, Robert Hurlbut, I'm a Principal Application Security Architect and Threat Modeling Lead at Aquia and glad to be here today for another great podcast.

Chris Romeo:

And also not a purveyor of crime,

Robert Hurlbut:

And not a purveyor of crime.

Chris Romeo:

I mean, I mean, you know, we gotta, we gotta, we gotta set the record straight. There is no, no most wanted list with Robert on it for cyber crimes. No way. Um, cause we're the good guys. We're on the good side of, uh, of the equation, most definitely. So, um, we're joined today by Arshan and, uh, excited to jump in and learn. First of all, Arshan, we always jump right into security origin stories. So we want to know how you got into security. Tell us your background and go back as far as you want to go.

Arshan Dabirsiaghi:

Yeah, I was, I was, uh, I was fascinated with the idea of taking over your computer against your will from a very young age. I thought that was the coolest thing that could, anybody could ever do, and I was a soccer player and I like, you know, I compared the feeling of scoring a big goal with, uh, you know, with, with finding a vulnerability or, uh, finally coming up with like a really good weaponized exploit, and the, the exploit wins every time for me. Uh, I just, I just thought that was, uh, that was so awesome. And so, you know, as a kid, 13, 14, you, uh, obviously the outlets you have for that are not, you know, especially at that time, there weren't very many, uh, you know, authorized outlets to do that kind of stuff. And so, uh, we, you know, I, I was playing on BBSs. I read Aleph One's, uh, Smashing the Stack for Fun and Profit paper and, and it really altered the course of my life. Um, and so I, I, you know, uh, I played around and then, uh, when I got to college, I found out that, oh, you can do this for a job, which was, of course, you know, now looking back, that feels crazy, but at that time, you know, for everybody on IRC that you knew, it was, uh, sort of a side gig. Most people were network engineers, uh, or even maybe even security engineers, but it was really at the network level. Nobody was doing that, it was all about firewalls and, uh, IPs and all that kind of stuff. So to, to find a, you know, to find, uh, exploit development sort of as a job was, was really awesome. Uh, so that, that's how I got my, uh, start, was trying to recreate, uh, all the, all the great things in Aleph One's paper.

Chris Romeo:

Yeah, that definitely is a seminal work that, uh, doesn't get cited much anymore, like it's, it's not as well known anymore, but, uh, we'll put a link to it in the show notes, because I know you can still, I think it was in Phrack, didn't it get published in Phrack?

Arshan Dabirsiaghi:

I believe so. Um, well, let's, let's, let's recognize some other seminal papers too from that time. I'd love to, Phrack, first of all, Phrack as a, as a zine was, uh, really quite helpful, um, in, you know, cause everybody was kind of self-taught at that time. Also Tim Newsham's, uh, format string paper was, uh, mind-blowing. Uh, I think it was Matt Conover from w00w00 wrote, uh, the first heap exploitation, heap overflow exploitation paper I read. Uh, so, I don't know, I just want to throw those out there too

Chris Romeo:

Yeah, no, that's good,

Arshan Dabirsiaghi:

papers. Oh, and Rainforest Puppy. Rainforest Puppy was super, uh, influential in the web space with, with his research. So

Chris Romeo:

Yeah, I mean, that's, I think a lot of folks that are in this space now have been in it for a shorter period of time, and maybe they don't remember or they weren't around when some of those things were happening, and I can just remember Phrack. Phrack was such a great thing. Like, and it was always, okay, you know, truth, it's time for a truth session. I liked the letter, like the comments where they would, people would send in like, um, comments or letters to the editor or something. And it was the, the snarky responses that they would put into that, it was like, I know I should have been focused on the technical articles, but I always went to those snarky responses. Cause they always just made me laugh out loud, because they were just like, they would make fun of the person's like, whatever they sent in or whatever. It was just like, it was, I don't know. That was it. Once again, makin' me remember...

Arshan Dabirsiaghi:

It was, it was a great mix of like, you know, super technical content, bleeding edge, very obviously counterculture and, uh, you know, human, you know, a lot, lots of funny stuff in there. So for, for a rebel without a cause, it was like, you know, it was awesome for me.

Chris Romeo:

Yeah, these were our people. It's like, we found our people and the only way we can connect to them is through this zine that comes out, you know. Uh, who knows what the release cycle was? It was always like, when's the next one coming out? Ah, we have no idea. At some point in the future, so.

Robert Hurlbut:

So, uh, Arshan, you mentioned, uh, uh, some of the things that you've done and just your, your start. But one of the things you also did is that, uh, you helped, uh, found, uh, Contrast with Jeff. So could you also, uh, talk about or talk about some of the lessons that you learned in doing that?

Arshan Dabirsiaghi:

Yeah, sure. So that was, that's, uh, Jeff, for those of you who don't know, Jeff Williams, who was a chair of OWASP and, uh, was my, one of my two bosses, Dave Wichers, who's, uh, who's, who's been involved in OWASP since, uh, nearly the beginning, and Jeff, who was, uh, one of the very earliest people there and, uh, had a really formative, uh, experience there, I think as chairman of the board for a long time. Um, yeah. I learned a ton in that time. I mean, I just, on, you know, leadership, culture, uh, you know, some of that stuff, you know, I'm just a poor mimicry of Jeff. Um, Jeff is, uh, I described him as like effortlessly never a jerk. Um, and when you're, it's easy to fall into the trap of being, uh, insensitive, I think, when you're in charge. And so I, I learned from watching him that there's another way to, uh, to perceive, uh, you know, there's another way to deliver leadership. Um, so on the kind of people, personal skill set development, uh, you know, I was, uh, pretty technical at that time, of course, but, uh, he, you know, I, I learned a lot from him on, on that side of things. And then, you know, I was a security researcher, but to have a product company is, uh, obviously a whole different set of, uh, problems and, and, uh, skills to develop. So, you know, all the instruments of building a product organization, uh, venture capital, organizational structures, marketing and, uh, the sales cycle and all that kind of stuff. So look, you know, I spent my thirties learning all of that stuff. And, uh, at some point it became interesting to me. It's like, you know, it's another kind of hacking. You're trying, you're trying to, you're hacking the growth of a, uh, of a company, you're trying to teach the market about a new way of looking at the world. And so that was fun and, and uh, and rewarding. So, um, really, uh, too many lessons to remember.

Chris Romeo:

So did you work with those guys at Aspect?

Arshan Dabirsiaghi:

I did. Yeah, absolutely. So that's, I mean, that's really where I learned the trade of application security. Before that time, I really was focused on, uh, sort of the memory corruption kind of space. Um, and so, yeah, I learned AppSec from, from them, from working in that field. Uh, and, and that was a really great setup for Contrast. What we, you know, my, the first part of my career at Aspect was, you know, it was, it was all about me, it was ego-driven goals. Uh, and it was about like find the next big thing or, or release research, uh, you know, and develop my own career. And then when I put that stuff to the side a little bit, I tried to say, okay, well, let's help customers. Uh, and what I found was that the existing automation, which was primarily static and dynamic analysis, in my opinion, wasn't fully solving the problem that our customers needed solved. So, uh, having another, you know, Contrast was born out of that, uh, was born out of that need. We felt like we can do something better. We can deliver, uh, you know, customers, instead of giving them 5,000 things, we can deliver them 20 things, and those 20 things are going to be, you know, really refined. We observe them happen in the runtime of the application. So we feel very confident saying, all right, this is the preferred set of things you should work on. Maybe, you know, still something that we'll eventually talk about here today is, you know, the journey to get from one of those 20 things to something is fixed was still something that, uh, needed work, but it was very, uh, it was awesome to be able to bring that technology to market.

Chris Romeo:

Yeah, and uh, it's great to hear about Jeff's impact and Dave's impact on culture, company culture, and building companies, because I worked with them at Arca Systems before they started Aspect. Arca got acquired by Exodus Communications, and then they spun out of Exodus to start Aspect. But we all learned those cultural things at Arca, because we had a group of really smart people that started this company, this Arca Systems company, back in the early 90s, and people like Jeff and Dave and I grew up in that company watching these other people, um, that, that were running that company, and, and learned how to, how to have a company that was very much focused on the people and don't put up with attitudes from, you know, you don't have people that you don't want to work with. Like we learned all of those things from, from being a part of this other company. And I was able to bring that forward into Security Journey, the company that I founded. And it's great to hear that. And I assume that Jeff and Dave brought that with them into, uh, into all the things that they did. And so, um, there's, there's like some legacy. We think about legacy in our industry. Like it's not something we ever really talk about, but the people that we all worked with at Arca taught us how to, how to have excellent companies that people wanted to work at. Um, it wasn't, it's not like any of us came up with these ideas. It's like we watched other people show us how to do it. So it's fun now to bring that to the rest of the industry and encourage other companies. Hey, you can have a company without having any jerks that are part of it. Like, just don't allow people to come in that are like that, and you'll have a place that people really want to be.

Arshan Dabirsiaghi:

Yeah, absolutely. One of my, one of the things I told my hiring managers, I don't know where I got this from, I probably didn't come up with it, but it was, you know, when you're interviewing somebody, imagine what they're going to be like on their worst day. And if you just have a vibe that on their worst day, they're going to be unpleasant to work with, uh, because I'm trying to use safe language here, uh, then, you know, it's, it's not worth the risk. Especially in a startup culture. You're trying to do something very irrational, right? You're trying to, it's not rational to start a company and try to change, uh, change an industry. It's, it's a lot of work and there's going to be ups and downs. So, you know, you're going to be entering this high-pressure situation. And if you suspect that somebody, you know, under that pressure is going to hurt the culture, that's really bad, because the culture is everything. Uh, as it's been said, culture eats strategy for breakfast. So if you, if you can't get the culture right, uh, it's, it's not worth doing.

Chris Romeo:

Yeah, 100%. And, like, my, the, the practice that I've adopted, uh, for my previous startup and for my existing startup is, I have a wide group of people interview any candidate, and anybody can be thumbs down. As long as they have a reason, you have to be able to explain why, because there's certain things that you can't, that I won't accept as reasons to be thumbs down. But if, if anybody comes in and says, eh, uh, you know, we used red and yellow flags, that was the terminology that we used. And so somebody would say, ah, I've got a yellow flag. And it was like, and so it was designed, they were telling us, hey, this is a cautionary thing, let's talk about it. And what I found is when someone would describe a yellow flag situation, like, I didn't really know, when this is how they answered this, and it made me kind of wonder if they'd be able to, if there'd be somebody we want to work with down the road, somebody else sometimes would say, yeah, you know what? Like, I saw that slightly different in how the candidate approached that. And then if we had an all-out red flag, it was like, you know, we're just, we're going to pass. And so, for me as a leader, I had to give up some of that control and say, if somebody on my team raises a red flag, they just knew we were going to pass. Didn't matter, obviously we had to explain why you had a red flag. And if it was a viable reason, we had to pass. But we ended up building a company where we didn't have anybody we didn't want to work with. Because we were able to, to, to kind of, to your point, and you're kind of approaching it slightly differently about saying, hey, what's this person gonna be like on their, on their worst day? And I'm going to remember that. I'm going to add that to my, my, my little system here. But, uh, I mean, startups, like you said, startups are, are such, it's such a tight-knit group. It's such a, you're doing something completely off the wall. Usually you're trying to disrupt an industry or, or something, and you want to have people that you want to spend time with, that you want to work with, be a part of that. And one person can, the wrong, one wrong hire at an early startup can decimate the rest of the team. Cause you can have people leave that say, I just, I don't want to work with this person. I can't believe we hired this person. I'm out of here. And all of a sudden now you're losing the people that are really the glue. Um, and so that's why hiring is such an important piece.

Arshan Dabirsiaghi:

Yeah. I mean, this doesn't even have to be as dramatic as like causing people to leave. But even if people are negative, even just like negative, sort of the acceptable level of negativity we accept from people around us. Negativity sucks all the energy out of the room. So it's, you know, you have to, I mean, like it sounds crazy, but you know, I like when people are smiling and happy and then like, they just seem jazzed every day. Those, those people are, I put a lot of value on that.

Chris Romeo:

Yeah. Cause I mean, in the startup world, you're going to have the worst thing happen and the best thing happen. Sometimes in the same day. Sometimes in the morning, I mean, I know you've been there and like, and sometimes in the morning, you're like, we're done, we're finished. This is it. This is the end. And then in the afternoon, you're like, we're going to dominate the industry. And we can kind of joke around about it. Like, oh no, it's never that bad. Sometimes it is, like, sometimes something happens and you're like, it's just a gut punch. But then in the afternoon, something good happens. And yeah, I mean, it's important to have the right people around you who can, who can deal with that ebb and flow. But like you said, negativity is something that, if, it can be a, it can be a, a cancer inside of an organization, because then other people get negative, and yeah, so. Um, coming back around to Contrast now, I've always wondered this question and I've never asked anybody yet. Did con, was Contrast the first company to, did, did Contrast define RASP and IAST, as a, as a concept?

Arshan Dabirsiaghi:

Yeah. So I'll answer, but maybe one more, one more final bow on the previous

Chris Romeo:

yeah, sure.

Arshan Dabirsiaghi:

Just to make sure, because I think we all have humility on this call. Interviewing and hiring is such an imprecise art. An interview is such a sort of artificial evaluation of people that, you know, I certainly don't feel like I have everything figured out. And I know that some people just don't interview well, and some people, you know, are going to come off a different way than what they would at work. So anyway, just to say that, you know, I don't pretend that my model is perfect. And, uh, uh,

Chris Romeo:

I'm with you. Mine's not perfect either. I mean, this is, yeah, I mean, and some people like, and I always give people grace in that perspective. Like if somebody, I'm, I'm more talking about kind of, I'm on, we're on the lookout for more of the aggressive tendencies, where it's like, this is going to be somebody that's not going to be, not going to be fun to work with. I, I tend to give people more room, because like at the end of the day, you hire for attitude. I do. I hire for attitude. We can teach, I can teach somebody anything. I mean, okay, I can take that back. I can't teach you calculus, or there's a lot of things I can't teach you, because I just don't understand and don't know them. But attitude is everything. Like, if I get the right person with the right attitude that shows me that little fire of, they'll figure it out, that attitude will drive my decision there. You can learn the technical things if you got the right attitude to drive it. So, um, okay. So how about, so RASP and IAST, is this something,

Arshan Dabirsiaghi:

I failed Calc 2, by the way, that was the only class in college I failed, because it was 8am and there was no way I was going to learn anything at 8am, so it's funny you mention Calculus specifically.

Chris Romeo:

Well, listen, I, I got a degree in computer information systems. Because you didn't have to take calculus. I was staring at computer science, computer information systems, and like, I'm going to go over here where there's no calculus because that

Robert Hurlbut:

you and you know, guys, I'm, mine was math. My major was mathematics

Chris Romeo:

That's why you, you're a good balance. You're a good co-host for me. Cause if I, uh, you know, if, if you, you have skills that I can't even, I can't even grasp the ability to do a calculus problem, so.

Arshan Dabirsiaghi:

I eventually did get a math minor, I think, or at least I sort of qualified for it, but it was a tough journey for me to realize. That was very humbling, that I, okay, this is something you really gotta apply yourself on. Okay, so you asked about sort of like other people in IAST and RASP. Um, I think RASP, certainly in the modern, uh, appli..., the modern idea of, of hardening the runtimes of our application platforms, I don't think I know of anybody who was doing that before us. I mean, there were companies around the same time as us, uh, doing that, but I don't think, you know, that it's not like 5 or 10 years before that people were doing it. I, at least none jump up at the top of my head. Um, on the IAST side, which is the defining-the-vulnerability side, I, you know, one of, one of my advisors, uh, and one of the people I just reach out to all the time for help, is Brian Chess, who was, um, CTO at Fortify Software, one of the, so I'd say the company that won static analysis in the first generation of those tools. Um, and I think they had done some experiments with it. When I was Googling around for, like, you know, these super, uh, esoteric errors that I was getting while I was trying to instrument things, I would see Jacob West, who was also at Fortify, um, you know, trying to solve these same problems, you know, writing bug reports and stuff. So I, I had seen that people were playing around, but I think it's, it's a, it's not easy to prototype, but it is, it's more possible to prototype, uh, well, I should say, I think it's possible. You can make a demo and it'll be hard, but to actually get it to perform at the, at the levels of accuracy and, uh, performance that people need to actually deploy it, I mean, that's where, uh, Contrast has an absurd moat, um, because there's a ton of problems that have to be solved. So some, like, hard computer science kinds of things, and then also some things where you just have to know, uh, the runtime of, of that language, uh, better than anybody else to, to be able to understand how to accomplish your goals, uh, in the, in what's allowed on those runtimes. So, I, I think, uh, even if people had, had done some demos, uh, gotten some prototypes off the ground, it, it's a long journey from there to getting it done, uh, being able to ship something that people will use.
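For listeners newer to those acronyms, here is a toy sketch of the runtime-instrumentation idea behind IAST and RASP. It is a hypothetical illustration added for this write-up, not Contrast's implementation: a security-relevant "sink" is wrapped so each call is observed at runtime with the real values, where a finding can be reported (the IAST idea) or the call rejected (the RASP idea).

```python
# Toy sketch of runtime instrumentation (hypothetical illustration, not
# Contrast's implementation): wrap a "sink" so each call is observed at
# runtime with the real values in hand.
import functools

def render_template(template: str, **values) -> str:
    # Pretend sink: naive string substitution standing in for a real
    # template engine or SQL driver.
    return template.format(**values)

def instrument(sink):
    @functools.wraps(sink)
    def wrapper(template, **values):
        # Deliberately naive "taint" check, just to show where an agent
        # sits: between the application and the sink, at runtime.
        if any("<script" in str(v).lower() for v in values.values()):
            print("[agent] suspicious value reached the template sink:", values)
        return sink(template, **values)
    return wrapper

# An agent would swap the original sink for the instrumented one at startup.
render_template = instrument(render_template)

print(render_template("<p>Hello, {name}</p>", name="<script>alert(1)</script>"))
```

The hard part Arshan describes is not this wrapping idea but doing it accurately and cheaply across a whole language runtime, which is where the engineering effort goes.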

Chris Romeo:

Let's switch gears now and talk about something that is very popular in our industry right now. And that is the impact and influence of artificial intelligence on the world of application and product security. So I thought we'd start there and I'd love to understand your thoughts and kind of philosophies about how AI is going to impact what we think of as the AppSec market.

Arshan Dabirsiaghi:

Yeah, I've been, I've been waiting for this time period for a very long time. Actually in college, my track was not security, because security was not a track then; my track was AI, and that's what I did my master's research on. And, um, you know, of course everything I learned there is totally useless now. Um, but anyway, we've been waiting for some of the promise to be delivered, uh, in this space for forever. And, you know, even five years ago, I would have answered this question very, very differently. I would have said that there's opportunities on the margins for, you know, anomaly detection, uh, maybe some, you could have some predictive power, power over like what kinds of vulnerabilities, uh, you know, would be most interesting to prioritize. But I think now obviously we're seeing, you know, a lot of startups in the space, uh, trying to, you know, use generative AI as a foundational technology. And we're one of those, uh, people, uh, because I, I, you know, I, I think that a lot of the, uh, if you zoom out far enough, like, you know, we're trying to automate, uh, you know, all the AppSec research, all the AppSec experience that, yeah, that we've accumulated over the years. And so I think that the large language models give us a way to, to do that. They give us a way to scale, uh, using that consultative experience to help lots of people. So I think we're, we're seeing it happen now, and we're going to see more and more. And I'm super excited about it. I do think it's, uh, people are a little bit too trusting right now of the models. I think, uh, as I told somebody the other day, like, you can't let it have both hands on the wheel. You have to, you can, you can let it have like one hand and you can let it sort of lane assist, uh, but you can't absolutely let it drive. Uh, it's, it's still quite out of control, um, but you know, I think maybe if the foundational models help us solve some of the hallucination problems and, and allow more sort of programmatic control rather than only allowing natural language sort of interaction, I do think there will be some, uh, step function, you know, some, you know, next-generation sort of stuff capable from them, but right now you really have to be very careful about the problem space that you give it and how you interact with it.

Chris Romeo:

Yeah. And my conclusion so far, when it comes to generative AI and how I think it impacts us, let's say for the next five years, I see this as more of a, more of an assistant than a, than a replacement for any function, any problem we're trying to solve. So, in the classic case of development and writing code, a, an AI can provide the scaffolding faster than even the most senior security engineer or security developer, senior developer can. Just because scaffolding in a particular web language, web framework, it's just, it is what it is. Um, there, there's not, there's no creativity in scaffolding a particular framework. Um, so I really see AI as being that super assistant, and I'm, I'm caveating this heavily with the next five years, because I think we're going to see a lot of things move forward over that time. Uh, but when I hear people that are, oh, you know, the AI is going to replace the things that we do, I don't think AI is going to do that in the short term. But what are your thoughts on that? Do you think, do you think we can replace mission critical functionality and let two hands be on the wheel in the next five years? Or do you see this as really something that's going to be helping, just more of a helping function?

Arshan Dabirsiaghi:

Yeah, I mean, I definitely see it being more of a helper function. I, I, I agree that, you know, in the wider societal things, we could get into a whole conversation about, um, how it will affect lots of different, uh, jobs, but I think for, for this industry, I agree, it's going to be more that. And, and, and I think people say this a lot, people that use AI will just consume people that don't use AI, right? They'll, they'll outperform them, outcompete them. And so, uh, right, you, you picked a good example. Um, I think Sweep is doing a good job of, like, showing that, you know, you just tell me kind of the feature you want and I'll go give you a good start. Like, I'll go, I'll go make a pull request that has a really good start to that feature. Um, and so I think it's going to, you know, hopefully remove some of the toil, but I think there's also risk in, in this, uh, that I don't see people talking a lot about. So obviously Copilot, it's just a mirror held up to GitHub, right? Like it's trained on code that's found in GitHub. And so we know that all the code on GitHub, at least today, was written by humans, and so it has vulnerabilities in it. And so the code that Copilot is going to generate is also going to have vulnerabilities. And the studies have actually already borne this out. There's been multiple university-led studies on this. So not only that, like, it should have the same vulnerability density as humans, but I'd argue it's actually going to be even worse, because, look, all, all developers are lazy and impatient, and I, and I proudly count myself amongst that group. But if something works, if they just get a, cause Copilot now will write, I mean, it'll write whole functions for you if you let it, I think there will be a tendency for developers to understand less and less of their code. Because if it works, I'm not going to look at it again. Um, so I, I think that there's going to be, it's going to be harder to reason about code that you didn't write. And I think it's, it's going to be harder to understand the security implications of code that you didn't really pay attention to. So, you know, we're really betting a lot on pull request review and tools to help us, uh, you know, review and harden that code, because right now I think, you know, it's, it's, it's dangerous for those two reasons.
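To make the vulnerability-density point concrete, here is a small generic illustration (not from the episode, and not specific to any one assistant) of the kind of pattern that slips through when generated code is accepted simply because it works: string-built SQL versus a parameterized query.

```python
# Hypothetical illustration: the classic pattern behind the "same
# vulnerability density" concern, using only the standard library.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # String-built SQL: a quote in `username` changes the query's meaning,
    # which is the root of SQL injection.
    query = f"SELECT id, email FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized query: the driver keeps the data separate from the SQL.
    return conn.execute(
        "SELECT id, email FROM users WHERE username = ?", (username,)
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, username TEXT, email TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'alice', 'alice@example.com')")
    # Both return the same row for benign input; only the first misbehaves
    # for input like "alice' OR '1'='1".
    print(find_user_unsafe(conn, "alice"))
    print(find_user_safe(conn, "alice"))
```

Both versions pass a happy-path test, which is exactly why "it works" is not evidence that the generated code is safe.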

Chris Romeo:

Yeah, it makes me think of the, of the previous 10 years, the Stack Overflow problem, where there were a couple of different university studies that went and looked at Stack Overflow and analyzed the number of vulnerabilities in the top answers, and even found that many times the lower-voted answers were the more secure way of approaching something, but something else got the most votes. And that's the one that people are copying and pasting, unfortunately, right into their code. I mean, I'm guilty of it too. I've, I've gone to Stack Overflow and I can't figure out how to do this. Oh, look, someone else has already solved it for me. Copy, paste, run. Oh, look, it works. Oh, good. Um, like, I mean, I, I'm a little bit embarrassed to say this, but I wrote an XML processing function for something in Ruby, for a Ruby on Rails app. We got to the end and I'm like, I don't even know how this, I don't know why this works, but it does. It spits out the right output at the bottom across multiple test cases, so we just go with it. Like, and that's, I see what you're saying there. Like, that is, that is something people aren't really talking about right now, as far as the, the inherent risk in introducing more vulnerabilities than a senior software engineer would today, as a result of the training data that's coming out of GitHub. And, and I hadn't even thought about the fact that developers are going to do the same thing I did with my XML function. They're going to go, just works. It spits out the right output at the end. Let's go with it. Push. It's in production. Approve the PR. There it goes. And so, um, do you think there's going to be a backlash? Like in the, in the next couple of years, is there going to be a point where, like, all of a sudden it catches up with us and all of this code that, you know, we've lost the ability to understand fully? Like, is it, is there going to be an explosion as a result of this?

Arshan Dabirsiaghi:

I'm trying to create the future instead of predict it. Uh, so, you know, obviously we'll talk about Pixee, uh, which is my, my new company, and how I'm trying to solve that problem. But, you know, on your Stack Overflow thing, I did want to point out, I think this is, uh, this is a problem that, I, uh, you know, probably part of the reason why the secure answer is down low is that, uh, you know, the community has some idiomatic solution that they have always traditionally upvoted, or the docs say that this is how you should do it. If you look down and you see the more secure answer, because I've seen this as well, a lot of times the API difference between those two is that the more secure one forces the developer to, uh, address a security concern that they don't know exists. And so it doesn't feel like... it's confusing to them, because they don't know what, what is the problem, that, like, parameterized queries or external entities, like, I don't know what that is. So I'm not gonna, I'm, that code is confusing to me. So I'm just going to, whatever, just give me the three lines that do the thing that I know I want. So this is, I mean, it's such a good, um, I've been arguing for a long time that API design is, like, everything with, uh, you know, for security, because if we can make the developer face the question that they don't know they should be asking, that's how we get them to write more secure code. Uh, but those, those APIs don't usually exist, because people tend to write APIs that just do the one thing that they're asked to do, you know, and so they don't want to broaden the scope of it. But that difference is, is really the difference in, you know, whether a developer does something securely or insecurely.
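As one concrete, standard-library illustration of that API-design argument (a hypothetical example, not one given in the episode; it assumes a POSIX system with wc on the PATH): the shape of the call either forces or hides the security question.

```python
# Hypothetical illustration of API shape and security: the safer call form
# makes the developer keep data separate from the command, whether or not
# they know the phrase "shell injection".
import subprocess

filename = "notes.txt; rm -rf /tmp/scratch"  # attacker-influenced value

# Convenient but dangerous shape: the whole string is handed to a shell,
# so the ";" quietly turns one command into two.
#   subprocess.run(f"wc -l {filename}", shell=True)

# Safer shape: arguments are passed as a list, so `filename` can only ever
# be a single argument to `wc`, never a second command.
result = subprocess.run(["wc", "-l", filename], capture_output=True, text=True)
print(result.returncode, result.stderr.strip())
```

The same idea shows up in parameterized queries and in XML parsers that refuse external entities: the API that asks for structured input is the one that keeps the developer out of trouble without a lecture.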

Chris Romeo:

Yeah, that's, I would say, why Robert and I have both dedicated decades to the practice of threat modeling, moving that forward far on the design side. I think where, where you're kind of focusing us in on is closer to the code itself. And I think both of those things, I mean, what's old is new again. Defense in depth is still the ultimate way to secure something; the best way possible is to have the most things that we can do that everybody can get behind, and it starts in design, and then you've got some things that are happening, you know, when we think about the Contrast example again, in, in runtime. Like, you know, I hate this, I'm going to say this, you know, that people shift left, shift right, shift everywhere, but, unfortunately, those terms have been so, I'm trying to find the right word, overplayed, let's use overplayed as the word. That wasn't what I wanted to say, but overplayed. Um, but there is truth to it though. There is a kernel of truth behind it, in doing things early in the process and having a solid runtime solution, multiple layers of a defense in depth strategy, and that's, you know, that's, that's, there are, there are nuggets of truth inside of that.

Arshan Dabirsiaghi:

Yeah. I think that, you know, shift, I think Contrast has been saying now for a little while, I've been disconnected for a little bit, but I think Contrast is saying now, shift smart. And, and I think I've seen other people say variations of the same thing, which is, there are some points when it's super cost-effective to do a security process, like threat modeling, obviously. Sometimes you come in and the thing has already been built and deployed for three years and you're doing a threat model on it. Uh, but obviously the, the right time to do that is before the first line of code has been written. So I think that, but, but you have a design in place. So, obviously you can't, you know, shifting left for threat modeling is, yes, you want to shift it as far left as possible, but there's some things where it doesn't make sense to shift it lefter, because you might be missing context that you don't have until later in the process. So I agree that shifting left is generally, uh, directionally correct, but you definitely want to shift to the place where, uh, it makes the most sense. And I would argue that that's not always, you know, left at all costs. Yup.

Chris Romeo:

I mean, we have to approach the, the world of technology as realists. We've been saying, and Robert and I both have said this literally for decades, that people should threat model before they build something. Now the catch is, nobody listened to us, and a number of other people have been saying it, right, for decades. In general, nobody listened to us. So yes, there is, there is a need for both, but threat modeling still has value even for something that's been deployed for multiple years. It just has a, there's a different way of capturing that value. It's not going to be the same. You're not going to, we're not going to start rewriting, redesigning the whole thing from scratch. I mean, that might happen one out of a hundred times, but normally it's going to be, how do we, how do we make incremental improvements to the thing that we've, that we've built by understanding the design and, and getting closer to it. And kind of wrapping that all the way back around, that's one of the things you, you were mentioning that could be lost, is developers becoming more reliant on AI and they don't have the context. They don't even know. They don't even know how the architecture works, because they've relied on other components that have been built either through AI or by other people in the system. And that is, that is an ultimate danger too, that developers lose context of the thing that they're building and they just keep their heads down. Oh, I'm just responsible for this one little piece. How does that one little piece interact with the 17 other little pieces? Oh, I don't really know. I don't understand that, that context. Then we start getting to some really dangerous areas where unpredictable things will start to happen.

Robert Hurlbut:

So Arshan, you mentioned your company, Pixee, and some of the things that they're doing. But, uh, curious, how did you choose the product area that Pixee covers amongst all the other issues in application security?

Arshan Dabirsiaghi:

Wait, you know, that's an easy one. My co-founder also came from Contrast. We were both kind of, you know, uh, we were kind of leaving at the same time, and, uh, we just got to talking about the different problems, uh, you know, that were left in AppSec to solve. And we weren't immediately thinking, let's go start a company, but, um, he, uh, you know, my, my co-founder, who's Surag Patel, by the way, was Chief Strategy Officer at Contrast. Uh, he, uh, you know, he just, he challenged me, uh, because for the first time in a long time, I didn't have a job. And so he said, uh, how much, can you fix things? Like, instead of just reporting things, can you fix things? And I said, well, we've done hackathons before and, uh, we were able to solve some stuff. It didn't feel like, uh, we ever put enough time into it to really discover the depth of that, you know, how far we could go. And so, uh, two weeks, uh, after two weeks, I came back to him with a demo and I said, uh, this is really exciting. Um, so, you know, I had a GitHub app that you would subscribe to, you'd add the GitHub app to your GitHub repository, and it would start fixing things. Things that were trivial to identify and things that your static tools or IAST tools would identify. And so it was really, it was just a, uh, I wish I could say there was a lot more research put into finding the problem, uh, but really it was, you know, that challenge, uh, from my co-founder. Um, and we had seen this, we had seen this pain in customers before, where, you know, in the, in the application security testing tool space, we're trying to find things. But if you went to a meeting where there was the, you know, the, let's just say CISO and CTO or the CIO, we would say, hey, look, we did a great job. We found a, you know, a thousand things. And, uh, the, the CTO would ask the CISO, how many of those things did you fix? And the number was always way less than what we wanted. And, you know, and of course that doesn't, it doesn't say anything about our tool or, or about the development. It doesn't say anything about anybody. The problem is the journey from somebody's identified something to something is fixed. I mean, it's ridiculous when you, when you take a step back and say, okay, a tool finds something, it goes into a bug tracker, uh, it gets horse-traded by some product manager, somebody has to prioritize it, the engineer, uh, who works on it, probably not a security expert, they have to reverse engineer what a good fix would be, uh, they probably get it wrong the first time. And, uh, you know, we've had this mantra, uh, in security, I'm sure you, you all know, like, oh, don't write your own crypto, right? We always tell developers, don't roll your own crypto. And, uh, I've seen that before, but I haven't seen it a lot, but we do tell developers every day, hey, fix this deserialization vulnerability, fix this XXE, fix this, uh, you know, other esoteric vulnerability class, as if that's easy. Right? As if they understand, uh, you know, all the technical, uh, everything in the weeds about those vulnerabilities. They don't, and they almost shouldn't. Uh, it's, it's an unfair expectation for them. Even if they are being trained on security, uh, you know, how much are they retaining? Are they going to remember from their training, like, oh, deserialization, I have to be careful here?
What's the security control I should use? And so the journey to getting an accurate fix is actually absurdly long and winding, uh, so it's, it, it is a problem that, uh, I, we, we definitely knew about, uh, but how we arrived upon solving it was really very flippant, uh, but I'm so excited that, uh, that he asked me that question, because, um, I'm so excited to be here talking about it.
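For readers who have never had to make that kind of fix, here is a minimal sketch of the sort of control the developer is implicitly being asked to know about, using Python's pickle as a stand-in for the deserialization problem (a hypothetical illustration, not Pixee's fix): only an explicit allow-list of types may be resolved during deserialization.

```python
# Minimal sketch of hardened deserialization (hypothetical illustration):
# refusing to resolve anything outside an allow-list is what blocks
# gadget-style payloads that try to instantiate arbitrary classes.
import builtins
import io
import pickle

ALLOWED = {"dict", "list", "tuple", "str", "int", "float", "bool"}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if module == "builtins" and name in ALLOWED:
            return getattr(builtins, name)
        raise pickle.UnpicklingError(f"blocked during deserialization: {module}.{name}")

def safe_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

if __name__ == "__main__":
    payload = pickle.dumps({"user": "alice", "roles": ["admin"]})
    print(safe_loads(payload))  # plain data round-trips fine; arbitrary classes do not
```

Knowing that this control exists, and where it belongs, is exactly the kind of specialized knowledge Arshan argues is unfair to expect from every developer fixing a ticket.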

Chris Romeo:

And, and, uh, as we were talking before, you mentioned Codemodder. Codemodder.io, open source component that you're using, that you're building as a piece of this solution. So, let's talk about what the open source piece is here, and love to understand what it does, how it does it, go into some details here.

Arshan Dabirsiaghi:

Yeah. So there was, um, I should say, this is like the second or third generation of internal tooling we've created. And we're, we're very happy with where we ended up. Um, and so that was really why we've gone open source, is because we think that we've reached, you know, stability, at least in the, in the technical direction of how we're going to change code to make it more secure. Um, but codemods were, the term codemod came from an engineer, Justin something, I'm sorry, I'm forgetting his name, at Facebook. And it was a very simple, like, Python script that was like, give me a regex to find and give me a regex to, uh, fix. Like, it was just like, give me this and replace it with that. And so he was using that to perform, um, you know, just sort of reducing toil of doing refactoring of one API to another. And that idea kind of, like, I don't want to say died, but it didn't really reach escape velocity. And then the JavaScript community picked up this idea of codemods, and they were writing codemods for solving what I'd say are little problems. The React community, uh, is, is really good about, they write codemods to translate, like, from React 4 to React 5. And when I saw this tool, I was like, oh gosh, this is, uh, an engineer named swyx, a YouTuber, told me, like, oh, you're writing codemods for these other languages. And I said, what is a codemod? Um, but anyway, so these codemods, the problem with them, the reason we don't see them solving problems sort of at scale, is they're not very expressive. Uh, you can't say, like, if it's Tuesday and, you know, the code is this, and there's some data flow things, like, you know, it's just, it's, you can't do that. It's like, if you're using API A, change it to API B. Uh, and so there was a natural, uh, very low ceiling, I'd say, on, on what types of problems you could solve with that. And so Codemodder is a framework for developing codemods. Uh, right now we support Java and Python. Uh, whereby instead of trying to build a technology that tries to do all the complicated things in searching code and mutating code, it's really an orchestration framework where you can, you can plug in any tool you want. You can plug in PMD or FindBugs or Semgrep or Contrast. And so you have, you're stitching together tools that are good at finding things and connecting them with tools that are good at fixing things, tools that are good at mutating source code, like, uh, LibCST for Python and JavaParser and, and Spoon and, uh, jscodeshift. And so this idea of stitching together the community-loved, idiomatic tools for finding code and then changing code is, uh, is a new idea, and one that I'm super proud of, because, you know, it's a much quicker, uh, journey for me to build an expressive codemod if I can write the rule in PMD and then I can change the finding that that thing finds with, uh, with JavaParser; I can write a codemod, you know, really, really quickly. Uh, whereas, you know, it would be a very ocean-boiling approach if I, if I tried to write one new tool that did all of those things; you know, it just would never work. We'd never get there. Um, and so it was born out of necessity. You know, I was trying one of those ocean-boiling approaches, and, uh, that's, you know, how we ended up eventually with, with Codemodder.
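To make the "find with one tool, change with another" idea concrete, here is a minimal, self-contained codemod-style rewrite using LibCST, the Python library Arshan mentions. This is a toy example written for this write-up, not Codemodder's actual API; it assumes a hypothetical rule that rewrites yaml.load(...) calls to yaml.safe_load(...).

```python
# Toy codemod (hypothetical, not Codemodder's API): rewrite yaml.load(...)
# calls to yaml.safe_load(...) using LibCST's transformer visitor.
import libcst as cst

class UseSafeLoad(cst.CSTTransformer):
    def leave_Call(self, original_node: cst.Call, updated_node: cst.Call):
        func = updated_node.func
        # Match calls shaped like `yaml.load(...)`.
        if (isinstance(func, cst.Attribute)
                and isinstance(func.value, cst.Name)
                and func.value.value == "yaml"
                and func.attr.value == "load"):
            # Keep the arguments, swap only the attribute name.
            return updated_node.with_changes(
                func=func.with_changes(attr=cst.Name("safe_load"))
            )
        return updated_node

source = "config = yaml.load(open('settings.yml'))\n"
module = cst.parse_module(source)
print(module.visit(UseSafeLoad()).code)
# -> config = yaml.safe_load(open('settings.yml'))
```

In the framework Arshan describes, the "find" half of a rule like this would typically come from a tool such as Semgrep or PMD rather than the hand-written match above, with the mutation delegated to a library like LibCST or JavaParser.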

Chris Romeo:

So this is a framework that others can contribute to, given that it's open source. So is your vision that you're going to create a community, an open source community around people that are creating these things and sharing them in a way that others can, you know, I think about like Semgrep and their rule approach. I don't know why I'm using that as a, it's just what's coming to the top of my mind. Semgrep created the engine and then they had an open source component of it. And lots of people contribute rules. Um, to, to make the, to, to provide something that I, that I can go grab and add to my, it may not be a, you know, a Semgrep rule, it's something that somebody in the community made that I'm like, oh, that, okay, that's something that's finding something interesting. Is that kind of your vision for the community around Codemodder?

Arshan Dabirsiaghi:

Yeah, you know, we'd love for people to build their own codemods, and, you know, we don't have anything like, uh, like, you know, we're big fans of Semgrep. We love the registry that they have, where you can go and build stuff and there's a playground, uh, um, and people can sort of, like, you know, share rules with each other. Um, that's absolutely something we'd love to do. Uh, you know, we're, we're still pretty early, so we, we don't, unfortunately we don't have time for everything yet. But absolutely, that's one of the reasons, it's the primary reason we made it open source, is we want people to be able to build codemods and do whatever you want with them. You can, you can build codemods using our framework and then deploy them; you know, you don't have to buy our product, like, we don't, we actually don't sell it yet, but, uh, you, you could, you deploy it in a GitHub Action, and then you can just make sure that any code that, uh, you know, that you want to change gets changed in your CI process. I'll give you an example. If you, if you're giving the same PR comments over and over, that's a great opportunity for a codemod. So, you know, instead of, like, writing another comment, you'd build a codemod, add it to a GitHub Action, and now whenever somebody commits, you know, it'll just run that codemod on it, and, uh, your advice will be applied. Like, you know, you've automated yourself, uh, to make the code changes that you want to see in, in all, in all of your code.

Chris Romeo:

What's the, what's kind of the check and balance, I guess? As far as you're making, or I guess, where's the feedback loop? How do you get a feedback loop coming out of that? Because at the end of the day, I want the development population to get better. I don't want to have, I don't want to have to string together hundreds or thousands of codemods, which then get us to the state of, of secure as close as we can get to secure. So any thoughts on, is there a feedback loop that, that, that you see fitting into this?

Arshan Dabirsiaghi:

Absolutely. I mean, so that's why Pixeebot, which uses this technology, we are super, uh, we prioritize the storytelling of this. Like, every time you make a change, you should be communicating to the developer what you did, why you did it. Uh, and so, you know, the act of that feels like PR feedback, which we're used to getting, of course. Um, but what's really happening is there's a micro-training going on. Like, okay, got it. I learned about this issue, uh, onto the next thing. And so over time, developers will get exposure to, you know, lots of different security issues, lots of different anti-patterns that they see the right patterns for. And so, you know, it's, I, I did a book club, uh, on patterns, uh, at Contrast with my team. And the best thing about that book club is every week we took a chapter from a certain Java patterns book and we talked about it, and whoever was leading the session, you know, they, they took, uh, they took the pattern and they found the anti-patterns in our own code and led us through a discussion about that. And that felt so, that hit home so much more, because, you know, rather than an example from a book with a contrived piece of code in it, that means nothing to me, um, you know, having that being done on your actual code base, I think, uh, people are going to get way more invested way more quickly and see value much more quickly, because I'm getting the training, but also it's improving me at the same, you know, at the same time. And so, um, I think that will be, uh, a very interesting part of what we're doing.

Chris Romeo:

Yeah, and I've seen that in the wild as well. Like, when people are looking at security issues in their own code, they lean in a little bit closer. When you're using, when, versus looking at an example, I mean, my classic go to example was, if we look at this bug in OpenSSL, Uh, everybody kind of tunes out, but that's not my, I mean, I use it, but it's not my problem. Let's look at this little segment of code that this team put together. Then everybody kind of leans in a little bit, like, wait a second. He's talking about the thing that I'm responsible for. He's calling my project, you know, maybe not as perfect as I believe it is. So yeah, I think there's a lot of, uh, I think there's a lot of, of value in that approach. I'm glad to hear you've thought about that feedback loop already, because that's, that seems like a really powerful thing. Um. We gotta get to the lightning round, but, uh, super excited about where you're going with this, Arshan, we're gonna be, we're gonna be following along and, and, uh, cheering you on as well as you go forward, but, um, our listeners have become accustomed to a lightning round segment led by our very own Robert, so Robert, take it away.

Robert Hurlbut:

Yeah, so we have three questions. Uh, so first one is, uh, what's your most controversial opinion on application security and why do you hold that view?

Arshan Dabirsiaghi:

Uh, there's no buzzer, so I'm, I'm gonna, I don't know how long the, the lightning round questions can go. Um, I, I think I have a couple, but I think the one that, uh, I'll just be purposefully inflammatory here, but, uh, things that fix things, and, and paved-roads kind of APIs, offer so much more value than things that find things. Um, because what actually reduces risk at the end of the day is when code gets hardened, when code gets remediated, and, uh, when vulnerabilities can never have been created in the first place. So, I, I'm, I'm a root cause kind of person, like, so, I, I think we should be spending a lot of our focus on those things. Uh, and the way we spend money now is very much imbalanced in that way. I think we spend a ton of money on, uh, the things that CISOs and their groups get measured on, you know, the number of vulnerabilities found, uh, you know, there's a lot of value-proving in, um, in the processes; we measure ourselves on the processes, but not the outcomes as much. And so, uh, the outcome should be how many things were fixed, how many things were hardened, how many things were eliminated from ever being created. Uh, so I, um, I think the short version of that is we need to change how we incentivize ourselves in the security industry.

Robert Hurlbut:

Makes sense. What would it say if you could display a single message on a billboard at the RSA or Black Hat conference?

Arshan Dabirsiaghi:

Um, the, the North Star of every security person should be to make the VP of engineering own security. So I think the, the engineering team is the least-cost bearer of security. And if we give them that problem, well, you own AppSec, well, so I'll talk specifically about AppSec. If they own the problem of shipping secure code, and they're just going to get, you know, oversight from AppSec, uh, I, I think that's a much better model. Whereas I see a lot of, like, a really clear, uh, cultural line, uh, between those two, where security is, is throwing things over the fence without really understanding all the context. And so I, I do think engineers understand the code better than security does. And they're the ones that will find the cheapest, most effective way to fix it. And so, if I could wave a magic wand, if I could make a billboard, I think it would drive towards that. Like, we need to make the right person in charge of fixing this problem, and that's not how the world is today.

Robert Hurlbut:

Okay, last question. What's your top book recommendation and why do you find it valuable?

Arshan Dabirsiaghi:

Book? Um, it's with me at all times. Uh, slight exaggeration. All right, this is, uh, for, for pure listeners, it's Hacking, um, The Art of Exploitation by Jon Erickson. I think I still even have the, I think mine is only a first edition and there's been other editions, which I haven't read yet. But, um, it is such a good intro to very, very complicated, um, security topics. Um, there's not a ton of AppSec in there, but, uh, I, I think a lot of foundational memory corruption and network stuff is in there. And I just, you know, it's a book that, uh, I think could replace, you know, it could replace whole, uh, classes in college. It's just very, very well written, and the concepts are, are so, um, I don't know, so cleverly discussed in there. I've never met him, but I'm a big fan, Jon Erickson.

Chris Romeo:

Cool. So Arshan, uh, as we kind of wrap up our conversation here today, um, is there, you know, is there a call to action, a key takeaway, something you want to leave our audience with?

Arshan Dabirsiaghi:

Yeah, look, I, I want to, I want to be your virtual staff security engineer. You know, that's the purpose of, of Pixee and Pixeebot on GitHub. So, you know, my, my call to action is, uh, you know, go add us on the marketplace, uh, give it a try. We support Java and Python right now, and we want your feedback. We want to know what's working, uh, what's not working. You know, we, we don't sell the product today. Uh, we're just happy to get your, you know, happy to, to, to, you know, to get it in the community. Um, and so I, I'd love for people to just try it and tell us what you're thinking. Um.

Chris Romeo:

Cool. Well,

Arshan Dabirsiaghi:

But yeah, we want to harden your code. We want to remediate your vulnerabilities. Like, we want to do it all for you. Uh, and by the way, I probably should mention, uh, in case it wasn't clear, we do use a combination of, sort of, AI, LLM-assisted techniques, and then, uh, things that don't use LLMs. So we use a combination of both. So, uh, if, if you're scared of AI, don't worry about it. We don't use a lot of it. And if you're excited by AI, I think you should, you know, uh, I look forward to, uh, uh, showing you what we've been cooking.

Chris Romeo:

Very cool. Thanks for, uh, sharing your insights, talking about the history with Contrast, and, you know, it was a great conversation, just to, for me to put together some of those pieces, um, that I maybe didn't understand from the past, but also hear what you're up to now. So, like I said, we'll be, we'll be cheering you on, um, as you guys move forward with, with Pixee, and, uh, thanks for being a part of the Application Security Podcast.

Arshan Dabirsiaghi:

It was so fun.
