The Application Security Podcast

Eitan Worcel -- Is AI a Security Champion?

December 19, 2023 Chris Romeo Season 10 Episode 37
The Application Security Podcast
Eitan Worcel -- Is AI a Security Champion?
Show Notes Transcript

Eitan Worcel joins the Application Security Podcast, to talk automated code fixes and the role of artificial intelligence in application security. We start with a thought-provoking discussion about the consistency and reliability of AI-generated responses in fixing vulnerabilities like Cross-Site Scripting (XSS). The conversation highlights a future where AI on one side writes code while AI on the other side fixes it, raising questions about the outcomes of such a scenario.

The discussion shifts to the human role in using AI for automated code fixes. Human oversight is important in setting policies or rules to guide AI, as opposed to letting it run wild on the entire code base. This controlled approach, akin to a 'controlled burn,' aims at deploying AI in a way that's beneficial and manageable, without overwhelming developers with excessive changes or suggestions.

We also explore the efficiency gains expected from AI in automating tedious tasks like fixing code vulnerabilities. We compare this to the convenience of household robots like Roomba, imagining a future where AI takes care of repetitive tasks, enhancing developer productivity. However, we also address potential pitfalls, such as AI's tendency to 'hallucinate' or generate inaccurate solutions, underscoring the need for caution and proper validation of AI-generated fixes.

This episode offers a balanced perspective on the integration of AI in application security, highlighting both its promising potential and the challenges that need to be addressed. Join us as we unravel the complexities and future of AI in AppSec, understanding how it can revolutionize the field while remaining vigilant about its limitations.

Recommended Reading from Eitan:
The Hard Thing About Hard Things by Ben Horowitz - https://www.harpercollins.com/products/the-hard-thing-about-hard-things-ben-horowitz?variant=32122118471714

FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @AppSecPodcast
➜LinkedIn: The Application Security Podcast
➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast

Thanks for Listening!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Chris Romeo:

We're thrilled to host Eitan Worcel, the co founder and CEO of Mobb, and a seasoned AppSec expert with 15 years of experience. From his early days as a developer to his current role as a business leader, Eitan has been at the forefront of detecting security issues. He's on a journey to make those findings actionable with a trusted automated vulnerability fixer. We'll delve deep into the world of AI driven automation in AppSec. We'll explore how AI is revolutionizing how we address code vulnerabilities and its role in prioritizing fixes and ensuring the integrity of automated solutions. As we navigate these intriguing topics, we'll also get a glimpse into the future of AI in the AppSec domain, and gather insights for organizations looking to adopt AI driven solutions. Gear up for an enlightening conversation with Eitan Worcel as we unravel the intricacies of AI in application security.

Robert Hurlbut:

Hey folks, welcome to another episode of the Application Security Podcast. I'm Robert Hurlbut, Principal Application Security Architect, as well as a Threat Modeling Lead at Aquia, and I'm joined with my good friend, Chris Romeo. Hey Chris,

Chris Romeo:

Hey Robert, Chris Romeo, CEO of Devici, and also general partner at Kerr Ventures, and also someone who's known for stirring things up in the application security industry on the internet. But that's a working title. I'm working on, I'm working on getting that a little snappier, a little snappier delivery.

Robert Hurlbut:

can see that. Excellent. That 30 second pitch will be your title, right?

Chris Romeo:

That's true. It should be, it should be. I mean, I've, I've kicked the hornet's nest twice in the last week. Once by re by bringing back my DAST is dead comments, which some people weren't very fond of as it

Robert Hurlbut:

I saw that.

Chris Romeo:

luckily it's, you know, it's not easy being me is what, uh, is what they say. And then, uh, my, the other one that's got a lot of attention is, uh, WAF is an antiquated technology. And you shouldn't use it, which has not been popular either, but it's been fun. I mean, and at the end of the day, I'm, I'm doing this to stir up the community because when the community is stirred up, people start having really good debates. They start really, they start really factoring it. Okay. How can I argue for this technology versus that one? And so I guess that's a little bit of a behind the scenes view. We probably have to, should have edited this out so that, uh, people know maybe I'm just kicking the hornet's nest so that people. their, push their thoughts forward a little bit, which is good for us. It's good. We should stretch and, and, uh, not just hold onto things. Well, we do this because we did it 20 years ago. Oh boy. That's a troublesome statement, right? So enough about my, uh, soapbox, internet fame and sensation.

Robert Hurlbut:

well, you know, uh, today we've got a special guest, Eitan Worcel, and, uh, really, really, uh, thank you for joining us. A listener had requested that we talk about automated code fixes. with AI. And so, uh, we reached out to you a ton. Welcome again. Uh, and thank you for joining us today.

Eitan Worcel:

Thank you and let it be heard. I wasn't that listener.

Robert Hurlbut:

Okay.

Eitan Worcel:

for someone else. Excuse me.

Robert Hurlbut:

Understood. Understood. So typically when, uh, somebody joins us in the podcast, the first thing we do is we ask their security origin story. So could you, uh, share with our audience? So what is your security origin story?

Eitan Worcel:

Um, I think unlike most of the folks that you were, uh, that were on your call, I wasn't a practitioner. I joined the area of cyber security in 2007. Um, I joined a small company named WatchFire at that time. For those of you who remember, I was a developer in a tool named AppScan Standard. So that was the, I think the first automated DAST scanner. Speaking about DAST, uh, it has a warm place in my heart, but I won't go into arguments about that because I. I know I'm subjective. So, um, and it might be, yeah, we are using it for so long and it's in this and every other place. So we should keep it. And that would be a bad explanation. Um, I, I stay there over the years, IBM acquired it. I moved from development to product management, worked with a lot of companies. Started our cloud solution to compete with Veracode and with cloud, um, moved to the U. S. in 2016. And I think that's where my eyes got open to the real situation, the application security. Um, when I was a developer, I wanted to make sure I find everything. Uh, every time we got a defect where, hey, you have a false negative, you missed this. We were spending days, myself and my team, to figure out that, to make sure that we Capture it, no matter how hard it is for the user, just to make sure that we catch it. And then you go and work with the largest companies in the world and you realize, uh, it just goes to a long list of findings that they can't handle. And I stayed, still stay there, uh, became head of product under HCL when HCL acquired a business from IBM. Got my green card and I left in end of 2021 to start Mobb, which is what I'm doing today with my business partner, co founder, CTO, Jonathan Afek, who, I mean, plug for him, he was in cyber before my time. He was in cyber in one of those ideas, special IDF units. Um, he has, uh, a lot of experience. He was, I think, what, he was 21 years old when he was first in Black Hat, um, giving a, a, a talk about some hack that he found. We are balancing each other in that sense.

Chris Romeo:

Very nice. All right. Well, I want to start us off here. Maybe a bit of a bit of a philosophical Conversation and I thought up this question and I don't even know where this is gonna go But I'm gonna ask it anyway, because I think it's a fun question. Is AI a security champion? So is artificial intelligence a security champion? And I certainly have opinions. I'm sure Robert does too, but Eitan, we wanted to hear from you, let you kind of break the ground for us here on this, this issue.

Eitan Worcel:

Kind of a curveball, but, um, AI is maybe a wiseass security Champion. Um, I think, well, let's, let's balance it for a second. AI is not just Gen AI, although everyone are talking about Gen AI today. And when I say, say WiseAss, it will be that side, because. We're talking about machine learning and all those. Yeah, if he does not know the answer, we'll not give an answer. And the Gen AI. Always has an answer, right? Uh, right or wrong, it will always have an answer. I think it's a great tool. I think that people can and should use it also for security. If they don't know how to fix, especially now when it includes references for where did the information come from, but oh my God, it makes silly mistakes, really bad mistakes sometimes that can't compile or, or just wrong, I would be, I'd love to tell some stories, but I do, I am interested in hearing your thoughts on that, because I know you were, like many others in the industry, promoting the security champions a lot.

Chris Romeo:

Yeah. I mean, I'm gonna let Robert go next though. So Robert, I came up with this wacky question, so. You know, is AI a security champion?

Robert Hurlbut:

Well, when I first saw the question, I, you know, I thought of Gen AI and, and as a partner, if you will, uh, for security to help, uh, champion security, uh, ways that it can, you can find things out in terms of threats and, uh, mitigations, uh, I've seen some good, good use of that. Uh, that's what I had in mind. Um, you know, that's just my interpretation or a thought about an answer for that.

Chris Romeo:

I mean, when I think is AI a security champion for me, AI is a helper function. It's not a, it's not, I think we're, I think we're giving a lot of room. for people to dream about what AI is and what AI can do. I certainly think it's, I think it's going to change the face of all of all technology. And then it is now, and it's going to, it's going to change it a lot more in the next 10 years. But I don't think AI is a security champion only because until somebody can Capture a specialized LLM that reflects the security core principles and things that we want to, uh, to, to be captured, right, into, into, uh, and be available for people to be able to interpret and use, you know, kind of a custom purpose LLM that's driven by, uh, as much security knowledge as we could, we could bring up. But Security Champion is also about, about action, right? And, um, I hope, I hope we don't reach a point in the near future where Gen AI is taking action on its own. Um, and listen, Skynet. Uh, plus The Matrix, Agent Smith. If you're out there, remember, we were very pro AI. In case you're, I guess, transcribing this and listening to it. But, Eitan, what do you think about Robert and my opinions?

Eitan Worcel:

Well, I'll tell you one more thing. I think that security champion is, has to be your trusted advisor. Your developers go to that person for help. Hey, this tool says, blah, what does that mean? And how do I fix it? And then the question, do you trust what chatGPT or like will give me? Blindly trust it. We all know what happened to that attorney that trusted it. Um, so, so that's my concern, right? I cannot blindly, even Copilot, right? If you are a strong developer, Copilot is great. It saves you 10%, 15 percent of the time, but you can also see where it hallucinates and you can rule that out. It saves you time. If you are not good at security and you reach to chatGPT and you'd say, Hey, how do I fix this code from command injection that was reported on it, birth something out and you put it in your code, you don't know if it's good or not. You can compile it. You don't know if it didn't introduce another vulnerability, which I have a really cool case that it does. Um, I'm thinking that, uh, that post that you had Chris about the MIT research on chatGPT, for example.

Chris Romeo:

no, this is, um, just let me make sure I get it right. It's Carnegie Mellon's SEI.

Eitan Worcel:

Oh, yeah, yeah, yeah. Sorry.

Chris Romeo:

Software Engineering Institute. Yeah, it was a, it was a talk that I saw at InfoSec World based on their research.

Eitan Worcel:

And it, it triggered me a bit because I think you said 50 percent of the things it could identify and 50 percent it could fix something like that.

Chris Romeo:

That was their, yeah, that was the, that was the data that they shared based on just to, just because there was people ask me follow up questions about, um, where the data come from. I'm like, first of all, this, I'm just reporting the news here. I'm just a journalist reporting what I saw happen in the room. Um, but they were using their samples, um, like the, they have a collection of samples, sample code that's associated with their secure coding guide. And this was specifically with the C programming language. So this wasn't, this wasn't a study across multiple programming languages. It was, it was very targeted to their samples that were in their secure coding guide that was, was, was driving their, their research here.

Eitan Worcel:

and it triggered me because, um, in our company, one of our, our lead researcher, Kirill, did, uh, research. Well, when, when ChatGPT came to life, we were scared at the beginning that all of our automatic remediation work, it's going to be done automatically and we don't need that. So we went and researched it a few times and we did a research with OpenAI 3.5. And our results were very, I mean, not ours, but GPT results were very underwhelming. And one of the cool things there, which I just tried two days ago to see if GPT 4 changed, I gave it code from, uh, Juice Shop, right? And they told it, hey, fix the command injection here, and it fixed it. And then I knew that there was a problem, so I asked, is there potential, there is no SQL injection in it here now, and then it apologized, said, yes, there is, I, I introduced a new vulnerability, so then it fixed it again. So, my point is, as a secure champion, it shouldn't, right? It's good to be identified in hindsight, but yeah, I don't, it helps. I will give you the helper.

Chris Romeo:

It's about trust. It's about, it's about like with this, with a secur So the reason AI is not a security champion is I don't trust it. I don't trust it fully. Of course, I don't really trust anybody fully, I guess. So maybe, maybe I'm talking myself out of my own argument, right? But trust but verify has been the mantra for security. I've been doing this for 26 years and that's always been the case. Like, you can trust but you have to verify that Other assurance inducing mechanisms and methods are helping you to continue to trust. So, I mean, what's your take on this right now? Do you Trust the LLMs maybe in a way because you've looked at them closer. And I can admit, I'm coming at this from the cursory. I haven't dove deep into this. I'm coming, I'm flying around at the 10, 000 foot view and making opinions. So, you've dove deeper into this. How much do you trust the results of an LLM right now?

Eitan Worcel:

So it's related to that trust that you mentioned. Um, I had a discussion with the VP marketing of Phylum, Michaela Vidal. We are very close and she raised the point that security was for years about the zero trust. And now you're asked to trust data that you don't know where it came from? I mean, OpenAI?

Chris Romeo:

good, that's good, I never thought about that.

Eitan Worcel:

And, and it made me think a lot and, and the idea is that LLM itself, it's important what you feed it into, what you feed into it and how you do the supervised learning. And I guess you guys heard the rumors on how the training and verification of open AI is being done in foreign countries that they use ChatGPT to save time to check what ChatGPT produces and ChatGPT is getting dumber because of that. Um. I am a believer in OpenAI, not in OpenAI, sorry. Uh, in, in LLMs in GenAI. I do think that it helps, I do think it'll get better and better and better, but people, but it's not the, you know, it's not the silver bullet today. Uh, I don't know if it will ever be, people need to know what to expect from it. Yes, it has. It's a throve of knowledge. And in that knowledge, there is also incorrect knowledge. It's not a throw of, it's not the assurance of accuracy. So, if you build an LLM, well, if you use an LLM and feed it with very accurate data, and you tune it to only give the right answers, and I don't know if it's possible, we are exploring it, we are experimenting with it. But even, even with all that, do you want a tool that, a Gen AI tool, as accurate as it is? to write the code for you. So you will tell, Hey, here's a code. There's an XSS here. Generate a fix for that. And it will. And tomorrow, here's a code for XSS, which will be a little bit different. Generate a fix for that. And you know, the Gen AI doesn't generate the same thing twice. So the responses that you will get might be different, a different way to fix the XSS. I don't want that. I want consistency.

Chris Romeo:

yeah, our friend Izar Tarandach brought up this in a different podcast we were doing about he can't wait for the day when the AI on one side writes the code and the AI on the other side has to fix it. Like, what's going to happen then? When that loop starts to run, right? Like where it's not, it's not Robert wrote the code and then he fed it through an AI and LLM to see if it was aware of any issues and it found an issue and gave you a fix when it started, when it's got both sides of the equation. What's gonna happen? Like, is, are the wheels gonna fall? It seems like if it has both sides of the equation, the wheels should eventually, you know, the law of entropy says that it should eventually degrade to the point where there's nothing left at the end, right? Um, if it's just looping back and forth, but I don't know if that's, I don't know if that's prophecy or, or just a, uh, a guess for what could happen in the future.

Eitan Worcel:

I had an interesting discussion with, um, Ian from, uh, Gombach AI. I don't know if you're familiar with him. He's the CEO of that company. So it's another AI company, um, on cloud security more. And his perspective, they're not Gen AI, they're old style machine learning, good old AI, predictive AI. And his point is, you can't do a Gen AI because it's not persistent, it's not, and I believe the same, right, it's not going to speed the same answer twice. And you have to have that consistency. So for those of you out there, not just Gen AI, other stuff, but yeah,

Chris Romeo:

is that a common thing that's happening behind the scenes in the industry right now? Because certainly everything is AI charged and, you know, if you're a venture capitalist right now, you probably have a stack of pitch decks and everything's got, if it doesn't have an AI, you know, you have one stack that's up to the ceiling that has an AI component and one stack on this side that has two decks in it because these are the people that didn't add supercharged AI into it. Um, But I'm wondering, like, how much of that giant stack of AI pitch decks are really just machine learning under the hood and they're just trying to tune it up to make it, to make it sound better? Like, have you seen anything from that regard?

Eitan Worcel:

I'll counter that, how much of those are chat GPT wrapped in a nice bow with a nice UI? I,

Chris Romeo:

True. All

Eitan Worcel:

I think more. And every, well, I would say that at least 50%, and I'm probably underestimating of the AI features that all those security companies have is a chat GPT wrapped in something. Um, By the time this video comes out, I'll be after my AI Dev World, um, conference, and then I will be smarter. Um, we, we got accepted to it, so we'll go and show our stuff and learn about others. I'm pretty sure that many of them are, like you said, they added AI for that. Investor, uh, and next to that, you know, long pile of, of pitches with AI, there is also the shed, uh, drawer with all those long pitches of crypto and web3 and, and all their stuff and blockchain security. I do believe that most of it actually today is gen AI and not machine learning because people are following that hype and believe that they can wrap chatgpd quickly and get something. And they're not giving a thought for everything that, that it means.

Chris Romeo:

right, let's, uh, let's wrap back around here and start talking about more specifically about code vulnerabilities and getting into the nitty gritty about, uh, the AppSec impacts of, of Gen AI. So, uh, how, how does what's happening with AI How does this differ from traditional methods of how we fix code vulnerabilities? So I'd love to get your perspective, like a compare and contrast between in the olden days before AI, people fixed vulnerabilities like this, but now with AI, this is how the process changes, the approach changes. I think it would be good to lay that foundation for folks. Hmm.

Eitan Worcel:

Yeah, so, I mean, the old days, you go and you read the documentation of the tool. Well, let's, let's take a step back. First, you say there is no vulnerability. The tool is wrong and you fight it. Um, but then it's when you are forced, you go read your documentation. You go to Stack Overflow, you go to Google, you write, you go to the security champion that is a real human intelligent, not artificial intelligence. And you write the code. And in many cases, and I saw that because, um. I did an experiment, I invited people, I posted on LinkedIn, anyone that goes on a call with me will get a hundred dollars gift card, and I want to see them fixing one vulnerability. And I didn't go fancy, SQL injection, um, with Java, with Python, whatever makes sense. And I had a guy, he's fresh out of college, so everything needs to be fresh in his head. So he wants Python, I gave him Python, um, SQL injection, and he was following the documentation, and also what he saw online, and he was writing a sanitizer instead of using prepared statement.

Chris Romeo:

Yes.

Eitan Worcel:

Basically broke his application because he was sanitizing a username input and didn't think about hyphen and period and dot and Latin characters and all that. Now, today, what he would do, chat GPT, read it, and copy paste it. And that's the problem. I think Vulkan Security published an amazing research. I don't know if you saw that many months ago on the hallucination of AI. And so just, just a plug for them. It was awesome research. I loved it. Um. OpenAI tend to hallucinate, ChatGPT gives you, invents packages that do not exist, so if you copy the code, the application will break, the compilation will break, and then hackers find it, they create those packages, so the next time that you use it, the package exists, and, and it will come from NPM, but that package is malicious.

Chris Romeo:

Mm hmm.

Eitan Worcel:

That's my, that's one of my biggest fear, because let's face it, the three of us, we are customers of companies, banks, insurance, um, they have a lot of developers that are not securely trained, and they will do that, and they put our data and our money at risk. Need to find a better way. to handle that.

Chris Romeo:

So then what's, how, how do things, how is it different from when an AI is involved though? So we talked about kind of the case of the standard developer, like your example of your guy on the, on the call who built a sanitizer. How does AI make that, how does it change that process that that guy would have gone through? Mm

Eitan Worcel:

He would skip a lot, he would take a lot of shortcuts, go directly to chatGPT, write the code, copy it, put it in his code, see that it compiles and call it a day. And now, given the fact that senior developers in many organizations, just developers with five years of experience, they didn't learn security either, so for them it works. And now the next time that their SAST scan will report the vulnerability, they will dismiss it as a false positive. And that's, that's my concern. There is no expertise in it

Robert Hurlbut:

Hmm.

Eitan Worcel:

and you don't learn anything..

Robert Hurlbut:

Certainly a challenge, right? Uh, to consider. I think we're starting to. So in terms of priorities, how does a AI, an AI system prioritize, uh, for vulnerabilities and what's, what's first, what's second and so forth? How does it make those decisions? rules

Eitan Worcel:

Sure. Uh, just by the way, before that, I, I don't want to shoot myself in the foot here, right? Trashing AI and security, there is a possible way of using an AI. And my point is, if you just go to ChatGPT and write it, fix this, it will give you that. And you don't know if that is the right way. Um. And there are tools to help you with that. Now, prioritization, had the, there is another post on prioritization now, I think it was Evan, right? Talking about how you prioritize on LinkedIn, where, how do you prioritize today? It's not just by the severity, right? It's by exploitability, reachability, by the importance of that application versus that application. I may have command injection in some marketing app that has no data, real data in it, so what worth will happen versus something else? Uh, well, not command injection, SQL injection. Let's change that. Um, I am curious, I will be honest. It's not prioritization using AI is not an area of my expertise. I didn't look at it. I'm trying to imagine what can be done, but in my head, it's more. Of the rule based approach where I give the definition, I tell the AI, hey, learn about this application, read, tell me how you classify it, and then use that to prioritize. I think it's more than that.

Chris Romeo:

So, I mean, it sounds like that's a human function still, like using AI to do automated code fixes. It's not going to do the prioritization for you. A human's going to have to set a policy or some level of here, here's the, here, yeah, like a rules engine or something that's going to drive where the AI engages, because you're not going to, I'm, I'm, I'm, I'm assuming you're not going to turn the AI loose and say. Here's our code base, fix everything you can find. And then there's thousands of, I'm imagining the lights on the Whopper going back and forth. And there's, you know, PRs being just stacks of PRs and they're being approved and, you know, right. Like this is a controlled burn. It's a, it's a. Right? It's not like, and Eitan, what's your perspective on this is, do we turn, how do, how do we scope this? Do we turn it loose on the whole code base? Do we constrain it in some way to get some, because like I think about how I always recommend people roll out new AppSec tools. Don't turn on every rule. Don't flood the universe with the results of it. Make a small selection that will have high fidelity and result in developers going, This tool's not so bad, which we all know, that means they love it. And they think it's the best thing on earth, right? Um, I'm only half kidding with that. But, The same thing like with with AI automated code fixes, how do we get some small wins so that people start to trust the technology and are willing to move forward with it in a bigger scale?

Eitan Worcel:

it's exactly that. You said earlier, trust and verify. You do your, you do your pilot. Um, so let's say that, let's say that my AI machine, um, spits out a fix for path reversal, right? You look at that fix. If you're a strong developer, maybe you understand it. Maybe you need someone to explain to you. I think that the security team needs to be involved and see the fix to approve it. And then you'll do the next one. Then you'll do the next one. And you look at the list and you see, oh, I have a hundred of those. I have thousands of those. I'm not going to do that manually one by one. So I'm going to now that I know what to expect from the AI, and that's why I said the productivity is important. Once you know what to expect the fix to be, you can decide if you trust it now. Even if you trust it, you do need to have good regression, good set of regression testing, because it's not fair even to expect from the AI to understand every piece of functionality in your application. I mean, when you guys, let's say that someone asked you to help with reviewing and fixing their code, you can do that, but you don't know anything, everything about the code. So you don't know if what you build will, might break in some situation, some scenario. something. So let's, let's give the AI automatic remediation its place. Trust, verify, but test. So after that,

Chris Romeo:

And I like what you just were saying about predictability because you're describing a world where the automated AI driven code fix is not going to freestyle every You're describing a world where it's going to predictably fix this type of path traversal issue in this type of way. And when I look at ten times that it does it, I'm not going to see ten different derivatives. One time it introduced, um, input validation, one time it, uh, You know, added synthesization the other eight times that actually fixed the path traversal or, you know, whatever, just took it out, right? And so talk to us a little bit more about predictability from, uh, from this perspective.

Eitan Worcel:

I think, and I spoke with that, I spoke with my team about it when we were working on automatic remediation. the moment, I can't imagine AI to do that last mile. I do believe that AI can help you because scaling that problem when you're in a large organization and you have a million vulnerabilities and not all of them are the same issue type in different languages, no tool can do it manually. I was working for SAST Vendor and DAST Vendor. I know what it means to grow your coverage and IAST. Growth coverage takes time. There should be a way with IAST to grow that, to scale faster. But I wouldn't want the AI to do the last mile of writing the actual code that the developer takes. There would need to be some guardrails there. And the AI doesn't like guardrails, um, even if you tune it that way. So that's my concern. Um, again, if you saw AI fixes something, and you like it, and you saw the second thing and the third thing, and you like those fixes, you should trust that the next 100 will be the same. Otherwise, you still need to verify each one of them. And if you have 1000 of those, and some poor developer needs to go one by one, they're not going to be happy. That's not a good way to eliminate your backlog. I

Chris Romeo:

What about how do we prevent the introduction of new vulnerabilities with the fixes that you're creating? Because once we get that predictability We're taking some big steps forward towards trust, but verify, but then there's also the concern and I think you even mentioned earlier that there was in that example where chat GPT added, it fixed the problem, but it introduced a SQL injection to do shops, command injection. How do we, like, can we, can we set this thing up in such a way that that doesn't happen? Or is that always going to be a risk we have to live with?

Eitan Worcel:

mean, first of all, again, to be fair, if you give a developer to fix a problem, you don't know that he won't introduce another problem.

Chris Romeo:

Sure.

Eitan Worcel:

We do all assume that the AIs I wouldn't say smarter, but more knowledgeable because it has access to all the knowledge that it was trained on. But then the question needs to be very accurate. It's more of, okay, can you fix command injection here and verify that you're not introducing any other vulnerability and and type while we do this right with your hands, which is hard. I think that at least from what we know about AI today, there is going to be some element of role-based technology. It's not gonna be just ai maybe three, five years from now, I hope, uh, it'll be the, the AI machine will learn to be better. And again, I'm not, I'm, I'm not on the side of the thing that AI will replace developers. By the way, I don't think that AI can come up with ideas on its own, like developers need to implement new things, innovate. But fixing security vulnerability is not about innovation. It's about following best practices, and best practices are guardrails. And, and you can, you should be able to train a machine to turn your code, which is not under any guardrails. Anyone can write code however they want, and that's the challenge. And, and apply best practices to correct that, to correct that error that was introduced. So it's not innovation, it's just Pure best practices, and it could be rule based, but it will be very hard to scale if it's rule based.

Robert Hurlbut:

How does the AI handle complex, multi layered security issues in the code?

Eitan Worcel:

Do you think it can?

Robert Hurlbut:

Good question.

Eitan Worcel:

I mean, let me, let me change it a little bit. How does, let's say that you were in, in AI camp and you trust AI, you see the results and you like how it does. Would you allow an AI to do an architectural change in your application?

Chris Romeo:

No, I wouldn't.

Eitan Worcel:

That's so automatic remediation is not about those, you know. One offs. It's not about architectural change. It's not about very complex. It's not about business logic. Um, I don't think it, because how do you train it? Where do you get a huge data set of business logic vulnerabilities and complex layers that, that, you know, the problem with the AI, you will see one example that seems similar and it will infer and give you, hey, here's the answer. I don't think it can. I don't. Ask me again in 2040. So,

Chris Romeo:

Yeah. I mean, that's, that's my next question. So where, you know, when you, when you see the role of AI, like where, how do you see it evolving in the AppSec domain? Let's just say in the next five years, let's not even go 2040. That's a little further away. Things have been changing so much, so fast. I'm afraid to think 20, I don't think we could grasp 2040. Right now, I think five years is enough to try to dream what's going to happen here. But what do you think is going to happen in the world of AppSec? How is AI going to continue to disrupt the world of AppSec?

Eitan Worcel:

I believe that a lot of the routine and mundane work will be automated with AI. I think that tools such as SAST, um, and I have no leg on that one. I'm not doing detection. I do feel that tools in the area of SAST will be. AI will come there and will detect things very fast because it's rule based, right? The, the, the knowledge is a rule based, so the AI can do that better. And the fixes, best, best practice fixes, if you feed a lot of those, it doesn't necessarily need to be the old machine learning style where there are a thousand cases of SQL injection in Java, uh, learn from that, how to detect them and how to fix it can be more on based on base practice and, and apply that. So I, I do believe that. Developers will be way so much productive because they won't need to spend time on manually fixing vulnerabilities. I believe that two, two, three years from now when we will look back and you will say, Oh, you remember when we were doing manual fixing of those issues? Ah, I'm so glad that we are not there anymore. We're so more, so much more productive these days. So AI will allow, it's like Roomba, right? Oh my god, I don't need to vacuum. I don't need to mop the floor. I don't Something is doing that work for me. I would like to live in a world where AI and automation takes care of the things that I don't like to do. And

Chris Romeo:

And don't, don't forget DJ Roomba from Parks and Rec. Tom Haverford had the Roomba that had the little iPod strapped on the top of the speaker. Like that's, that's what I

Eitan Worcel:

I had a really bad experience with Roomba, one of the first ones. So I have dogs. I always had dogs. Um, and Roomba was running every, every day. And one time the dog left something on the floor.

Chris Romeo:

Hmm.

Eitan Worcel:

Spread that.

Chris Romeo:

Oh, Roomba, come on.

Eitan Worcel:

And it was disgusting as hell. I came home and there was all over smatters on the floor and I need to also clean Roomba. So,

Chris Romeo:

Yeah, that's a

Eitan Worcel:

I mean, I don't think AI should, so to your early question, can AI introduce vulnerabilities, it's similar, right? Let's make sure that that doesn't happen and it craps on the application.

Chris Romeo:

That's a heck of an illustration. You need to work that back in and some other way as to How AI can introduce vulnerabilities. Well, let me tell you about this time when my dog wasn't feeling great

Eitan Worcel:

Yeah,

Chris Romeo:

There's some truth to that.

Robert Hurlbut:

So, uh, what advice would you give an organization that's trying to adopt AI driven solutions for their own application security needs? Yeah.

Eitan Worcel:

I saw a few demos of AI fixes, um, underwhelming, let's say, um, you need to involve security first before even showing it to developers, involve security, folks, and challenge and ask them to challenge a vendor, see what, what the tool will be good at. Let's say that we have an AI solution, AI automatic remediation solution that can fix just SQL injection and everything else he tried, it's not good enough. There's value in it. Right? If you have a lot of SQL injection, if you have 1 percent of your backlog, 5 percent of your backlog, SQL injection, great. Use it. Eliminate those. But stop there. Don't have it fixed path reversal if you don't trust that fix. So my point is, have security, verify the fixes. Um, make sure that you are happy with those best practices and you agree with them. Then take it to developers and tell them, hey, We have this amazing magic that will get rid of SQL injection. You will never need to fix them again, but still, you should not write them, introduce them next time. I think that this is the process. I just spoke with a guy and he said that they were looking about doing, using OpenAI in their organization, very large organization. One of the things that concerns them on the GenAI stuff is that maybe the answers, the recommendations that the AI will give are different from their Guidelines from the CEO, right? It's not the same messaging that needs to come from up above with fixes. If you're in charge of the security in head of AppSec, do you want the recommendation to be different from what you suggested? What, what the best practices that you have in your organization? I wouldn't, so, so it's out of it.

Chris Romeo:

thought of another you made me think of another Risk I guess in working with AI that I've heard people starting to talk about now and It's the idea that anything you put in Their license agreement says belongs to them as far as training perspective. So what's your take, what's your take on that issue?

Eitan Worcel:

when, when Sam took OpenAI and tried to sell it to enterprise organizations, the, the big banks, the big, you know, the very big companies, they said, no, because we don't want you to use it for training. Um, so it's not, if you're using OpenAI for, through the API, they don't collect the information. Uh, there are some private instances of that for organization. So. If you're using chat GPT, it does, but if you're using it over the API, it doesn't and also organizations. Um, if they are using your data and then they are selling it back to you through the solution, it's kind of. You know, not necessarily fair. Maybe it's okay. I mean, I would like to, I would be open to give some information that I have if it makes my life easier later, but there is a balance,

Chris Romeo:

Yeah. Yeah. There's a privacy angle in there, uh, for the individual that's using chat GPT. Different issue than what you're, what you're describing, how the enterprise has been protected through their ability, because they have. DeepPockets, right? DeepPockets translates to personal privacy protection and protection of your, uh, confidential information and proprietary information that, that exists in this, this, this environment. But the individual who's using ChatGPT today is, you know, it's, it's, they're, they're uploading their data to, they're losing control of their data to some degree. I think this is going to catch up with us later. Um, I don't think anybody understands the scope of it right now. Yeah,

Eitan Worcel:

even for private or, or corporate, um, there was a hack that a lot of the login credentials for ChatGPT, uh, got leaked and someone got a hold of that. And people ask me, okay, so what? What's the problem in that? And they told me, well, if I have your credentials, I can log in to ChatGPT with your credentials and I see all your past queries. And all the places where you wrote, here's my tax, um, paper and help me fill that or I was diagnosed with this. What does it mean? Right now? It's like the Google search history. That's way more detailed and you don't want that outside. So we're talking about private people. Yeah, it's, it's very sensitive information people put there.

Chris Romeo:

Do you see, um, a world in AI where we'll have the Startpages and the DuckDuckGos? that are more privacy focused solutions. Like we saw it happen with search, right? Like I don't use Google for search because I know the database they keep or try to keep on me. And so I, I decide I don't use it because I don't want to play in their world with that, that my information is mine and I don't want to share. I'll share it with who I want to share it with, not those that want to extract it from me. And so I've been using Startpage for years and years since Startpage came out. But does this open up a world for, Is there going to be a Startpage or a DuckDuckGo to Google version equivalent in the world of AI? Like, is somebody gonna, gonna solve that problem for us?

Eitan Worcel:

an interesting question. Startup idea for your guys out there listening.

Chris Romeo:

I come up with a lot of startup ideas on this show.

Eitan Worcel:

Just give him some,

Chris Romeo:

Yeah, 10%, 10 percent is what I take for, for you using my ideas, you know. Um, that's a free idea. It's out there on the, it's, it's, when it hits the broadcast airwaves, it's, uh, it's, it's available for anybody to

Eitan Worcel:

there is a business model behind it, it will happen. I mean, because if you think about Signal and then using VPNs and DuckDuckGo, who uses it? Those that have some understanding and care, and then they wouldn't use ChatGPT with. Um, private PII kind of thing, right? Um, so it's not, it's not the same, but at the same time, maybe they would want to use AI and then it is a good idea to give them a solution to help. I don't know. It's interesting.

Chris Romeo:

I mean, that's, you're talking about a major business idea. I mean, Startpage, when they started, and I don't know if they still do this, but they were sending the request to Google They were just anonymizing the data and then bringing it back that they didn't have their own database of, of links and s and and, and search terms and everything. They were just a filter. And it's a brilliant filter. If you can make a business that, that protects privacy and secure and provides an enhanced security and uses of the existing data source, then you're onto something because you're, you know, you're, you're not building something yourself, you're just putting a secure wrapper around it. So, all right, Robert, we got to do some lightning round questions here. We got a, we got some crucial questions that need answers.

Robert Hurlbut:

right. So we have three questions that we typically ask. Start with, we have first a controversial take. What's your most controversial opinion on application security and why do you hold that view?

Eitan Worcel:

think it becomes less controversial, but DevSecOps slows businesses. That's my take. Um, DevOps was there for a reason, was to allow businesses to go faster, compete. And DevSecOps is the opposite of that. Uh, it comes from vendors, like where, what I was used to do, uh, where we thought that the biggest risk for organization is a data breach and the biggest risk for organization is getting out of business because they're not fast enough. DevSecOps means you put guide, um, you put gates here and there and there, you know, that, uh, meme, guide, uh, gates, gates, gates everywhere kind of thing, it slows down. So we need, we need to revamp DevSecOps. We need to come up with a different approach. It doesn't work.

Robert Hurlbut:

Okay. Number two, billboard message. What would it say if you could display a single message on a billboard at the RSA or Black Hat conference?

Eitan Worcel:

Stop focusing on finding. Focus on the fixing. I, I, I, I vividly remember a talk that I had with a company when I was trying to sell AppScan many years ago. It was, um. Initiative to automate security scans and I told them, okay, we are so much faster. What do you do with the results of the scan? And they said, we don't know. We just need to automate the scans. So the world needs to move from automatic scan, from covering everything to fixing. You're not getting any more secure by running a single scan or a hundred scans.

Robert Hurlbut:

And the number three is related to what you're reading. So what's your top book recommendation and why do you find it valuable? I know I was noticing your bookcase back there, so,

Eitan Worcel:

it's kids books behind me.

Robert Hurlbut:

Oh, okay.

Eitan Worcel:

since I started the startup two years ago, I wasn't able to read a single book. I tried. My mind keeps wandering. Um, the last book I read before starting the startup was. Not application security. It was the hard thing about the hard things. And if you want to start a startup, I definitely recommend reading it. And if after you read it, you still want to have the startup, then okay. Um, it's a tough book to read for people that imagine themselves in that role of a CEO.

Chris Romeo:

I agree, I agree. That's a good one. I've read that one as well. All right. Well, how about a key takeaway or a call to action? What do you want to leave our audience with here?

Eitan Worcel:

Start fixing. I mean, you have already. So any developer out there, if you're working in a large organization, you may have my data managed by your tool. Please, please be responsible, fix the vulnerabilities that are reported and don't blindly trust ChatGPT. Go talk to the expert, go listen, ask for help and ask your security teams, ask your development management. Let's do something good about it and stop scanning and asking for exceptions. That doesn't work. Aren't you tired of getting your, that email or mail, snail mail saying your, here's a data breach and your information was involved in that, was included in that? No need.

Chris Romeo:

As an industry, we, we, uh, we gotta, we gotta find a better way. We gotta solve this problem. So I'm with you there. So Eitan, thanks for, uh, sharing your perspectives on this. I know we, we went super philosophical right off the bat, which wasn't the plan, but. I mean, that's part of AI, right? Is we have to, we have to talk about it from different perspectives to help us understand it. So thanks for sharing all your perspectives here. I know I learned a lot. You brought me forward and automated fixes and understanding how these things are coming together. So super excited to see how, uh, you and Mobb move forward. And, uh, you know, we're, we're just like all of our startup friends. We're cheering for you on the sidelines. So thanks for being here.

Eitan Worcel:

Thank you very much. Happy to. Thanks for inviting me.

Podcasts we love