The Application Security Podcast

Maril Vernon -- You Get What You Inspect, Not What You Expect

August 29, 2023 | Season 10, Episode 21
Hosts: Chris Romeo and Robert Hurlbut

Maril Vernon is passionate about purple teaming and joins Robert and Chris to discuss the intricacies of purple teaming in cybersecurity. She underscores the significance of fostering a collaborative environment between developers and the security team. Drawing from her experiences, Maril shares the challenge of developers overlooking her remediation recommendations. She chose to engage directly with the developers, understanding their perspective and subsequently learning to frame her remediations in developer-centric language. This approach made her recommendations actionable and bridged the communication gap between the two teams.

Maril also looks into the future of purple teaming, envisioning a landscape dominated by automation and AI tools. While these tools will enhance the efficiency of certain tasks, she firmly believes that the human element, especially the creativity and intuition of red teamers, will remain irreplaceable. She envisions a future where dedicated purple teams might be replaced by a more holistic approach, or white teams, emphasizing collaboration across all departments.

Maril closes with a powerful message on the essence of security: "You get what you inspect, not what you expect." She emphasizes the importance of proactive inspection and testing rather than relying on assumptions, and she restates the centrality of cooperation between teams. Maril's insights serve as a reminder of the dynamic nature of cybersecurity and the need for continuous adaptation and collaboration.

Helpful Links:

  • Follow Maril: @shewhohacks
  • Purple Team Exercise Framework: https://github.com/scythe-io/purple-team-exercise-framework
  • Scythe: https://scythe.io/
  • MITRE ATT&CK Framework: https://attack.mitre.org/
  • MITRE ATT&CK Navigator: https://github.com/mitre-attack/attack-navigator
  • AttackIQ: https://www.attackiq.com/
  • SafeBreach: https://www.safebreach.com/ 
  • PlexTrac: https://plextrac.com/
  • Atomic Red Team: https://atomicredteam.io/

Book Recommendations: 

  • Security+ All-in-One Exam Prep: https://www.mheducation.com/highered/product/comptia-security-all-one-exam-guide-sixth-edition-exam-sy0-601-conklin-white/9781260464009.html
  • The Pentester BluePrint: https://www.wiley.com/en-us/The+Pentester+BluePrint:+Starting+a+Career+as+an+Ethical+Hacker-p-9781119684305
  • The First 90 Days: https://hbr.org/books/watkins

FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @AppSecPodcast
➜LinkedIn: The Application Security Podcast
➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast

Thanks for Listening!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Transcript


Robert Hurlbut: Hey folks. Welcome to another episode of the Application Security Podcast. I'm Robert Hurlbut. I'm a principal application security architect at Aquia, and I'm joined by my good friend Chris Romeo. Hey Chris. 

Chris Romeo: Hey, Robert, Chris Romeo, CEO of Kerr Ventures, and I like threat modeling. I'm trying to say something different. I feel like I always say the same thing. What else can I say? The sky is blue, the sun's shining. I don't know, a weather update from Raleigh, North Carolina? It's probably not that interesting to the people that are listening to this podcast right now. 

But who are we talking to today? 

Robert Hurlbut: We are talking to my good friend as well, Maril Vernon, whom I've also been working with recently at the same company. So Maril, welcome. 

Maril Vernon: Thank you. Thank you so much for having me here, both of you. My name is Maril Vernon. I am a red team operator by trade who has now moved into the application security and threat modeling space. 

Robert Hurlbut: Excellent. Yeah, I'm very glad to have you here. And as we typically do when we're talking with folks on the podcast, we like to start with a security origin story. So if you would, Maril: how did you get into this wonderful, crazy world of security? 

Maril Vernon: Yeah. So when I first discovered cyber as an industry, 'cause I always knew computers and IT were an industry, but I didn't know cyber was a completely different function, I was working in marketing at the time, actually, for Caesars Entertainment. I was a copy editor and social media manager, and I was just growing bored with that, plateauing. 

And I still loved the work, but there was nowhere to go skills-wise. And I said, I'm gonna pick the most difficult, mysterious, mercurial-sounding thing I possibly can, so hopefully the challenge never ends and I never get to stop learning. And I just decided to try cyber until I hated it or I sucked at it. 

And luckily neither of those things happened. And I knew it was difficult to get into as a field. So I decided to do something I did know, which was to enlist in the National Guard, get into a cyber unit within the Guard, and then I leveraged my Guard title and experience to get into my first private cyber position. 

That is how I broke in. 

Chris Romeo: Didn't you just win an award? I believe it was in Las Vegas. 

Maril Vernon: I did. I won the United Cybersecurity Alliance's Cybersecurity Woman of the Year, Hacker of the Year 2023, in a long line of legendary women hackers. So I'm very grateful to be counted among them now. 

Chris Romeo: Very cool. Very cool. Congratulations on that. That's so cool. 

Maril Vernon: Thank you. 

Robert Hurlbut: Yeah, definitely. Congratulations. What sparked your interest and what led you to specialize in purple teaming in particular? 

Maril Vernon: Yeah. So back when I first started pen testing, I worked for a small-to-medium-sized business, and we didn't have a large cybersecurity department; our IT operations department actually took care of pretty much everything from asset provisioning to remediation cadence to SOC investigations. 

A lot of duties were lumped into multiple departments, and as I was pen testing, I realized one day, when I did a firewall assessment six months apart and didn't have to change anything on my report but the dates, that everything was persisting and they didn't have time to get to any of my findings. And I felt real bad, like I wasn't effecting any real change or helping to secure anything. 

So I said, you know what, if you don't have time, teach me your tools, teach me your ways, teach me your processes, and I will get in there and see what I can find and what I can recommend that you fix very quickly, so it minimizes the impact on your time. And I just started purple teaming by myself, before there was really a word for it, to save the blue team's time. 

And then I realized that there was a need for more cross-collaboration. As I moved to bigger orgs, more dedicated red teaming roles, I realized the same thing. The blue team is suffering from morale defeat, right? They just feel like the red team's coming in and winning every time, achieving our objectives every time. 

And it makes 'em feel horrible, which isn't our goal. Additionally, they were also inundated. Even at big orgs, the blue teams get inundated very quickly, and at one point they asked us to take a break from ops for two quarters so they could catch up on our past few operations, and we're like, oh, this is also horrible. 

We're also not helping. And we felt bad. So I started to pull them in and take our red team operations, like repeatable TTPs, and just see if we could work with each other on educating them. Not only the what, not only the "what do you see," but where did that come from? Why did we get here? How did we get here? What was I after? 

So that they can hopefully become better investigators. If they do have to do a manual investigation, it's more value-added, and we also help them automate more. So my attraction to purple teaming simply came from feeling like my role as a strictly offensive security person was very limited and not really helping my org, and that I wanted to help my coworkers, because we're all on the same team. 

Chris Romeo: So I think my understanding of purple teaming, I may not have the whole picture. So let me ask a question here and you can help me to perhaps understand it better, 'cause I've never thought about purple teaming from the AppSec perspective, but as a purple team person who's focused on kind of the AppSec side as well. 

Are you breaking and building both? Are you finding flaws, finding issues, security vulnerabilities in web applications, and then meeting with the developers and teaching them how to fix it, helping them fix it, advising them? Is that what you're doing? 

Maril Vernon: Yeah, so really, I've taken the stance lately that purple teaming shouldn't be a proper team, right? It doesn't have to just be SOC analysts getting together with pen testers. Really, what it is, is just collaborative security. It's a holistic, collaborative, proactive approach where we just sit multiple teams down together and say, how can we help each other learn our jobs? 

How does that make me better at proactively testing the things that you build? How does that make you better at proactively building the things that I will test? And so what we do in the application security world is probably not really a purple team; it's actually probably more of an orange team, 

'cause we do take those builders, those developers, those UX/UI designers, software engineers, and the ISSOs, and we put them together with someone like me who's red. We say, what can I learn from you? And I would say, oh, have you thought of the fact that I, as a red teamer, would try to exploit this? 

This is a relevant threat against the system. And they'll say, oh, we didn't think about that before. That's a great point. Can we remediate around that? It's cross-education. It's building a better DevSecOps mindset, helping them to think about these things as they're building the products in the future. 

And then, if we have to, helping them to analyze their system as a whole, to see: have I considered not just what I think are the threats as the builder, but what an offensive person sees as an opening from their perspective? So it's really just about that cross-pollination. 

Chris Romeo: That's very helpful. That's, I don't know, somehow I never connected those dots. I don't know, like that's one of the things that, that I've learned in doing multiple podcasts and interviewing people. It's I don't really know that much. Like I know some things, but there's a lot of things that, that I still need to learn and like you just connected two dots for me that somehow I never connected. 

But that's really, it's a powerful idea though, that you're breaking something and then you are advising on how to fix it with the developers themselves. I always thought purple team was red and blue, those folks in InfoSec that are off to the side, that we in AppSec talk to. We know them, but we don't really spend a lot of time with them. 

Like you're talking about this thing where everybody's all together. That's such a cool concept. 

Maril Vernon: Yeah, I love it personally because I think more cohesion between the teams is what's gonna make us better. Back when I first started pen testing, for anyone who knows my story I talked myself into a pen testing position with no pen testing experience. I also talked my way into cyber with no cyber experience. 

So when I first started pen testing things and learning these tools and learning these capabilities, I would just write as my remediation recommendation, "go fix all the places where this happens," and they wouldn't do it. So I went to the developers. I walked downstairs and I was like, hi. 

You may have seen this report. I just realized none of these things are getting done, and I wanna understand why. Where am I going wrong? They're like, oh, because that tiny little sentence you put there is 30 hours of work for us, to go through all the lines of code and find all the places that happens. 

And there's no intelligent way to do it, and we're just not gonna do it, from a realistic perspective. And I'm like, but our attack surface is huge. So I was like, how can I do that more intelligently? How can I write it in dev so that it makes sense for you, so it's time-efficient, it's cost-efficient, it's effective? 

If we can't find it everywhere, can we do something? And they were like, yeah, let us teach you how we would actually approach something like that, what makes more sense. And I learned how to write my remediations in dev so that they would actually get responded to. And I think that's a beautiful place where practical application security purple teaming has taken an evolution today. 

Chris Romeo: All right. We gotta unpack that: writing your remediations in dev. I must understand this. I know we're completely off the script, but I don't care. That's why we have this podcast; we don't have to follow the script. I wanna understand. So if you were to tell me three or four best practices for writing remediations in dev, which I'm assuming means dev-speak that they understand and can act on, what would be the three or four things you would advise me to do? 

Maril Vernon: So, like, instead of just saying, "Key management: this isn't effective. You should really be managing all the keys in a central place. You should be blah, blah, blah. Don't hard-code keys in code," and so on. Developers need specifics, especially in development, to help them. 

So I would just say: stick it in a specific module, give it a special tag, give it a place where you could go through after and grep for all the instances of that thing, or hopefully most of the instances. Find a place to replace it, eradicate it, rotate it. Also, that's gonna make it easier in the future for when you do need to rotate that service account key, that admin account key. Let's build these things in from the get-go. 

It's like labeling all of the stuff you put in your pantry, right? This is rice, this is spaghetti, this is this. Not just, "go find all the grains and tell me what my options are for dinner." You're tagging that on the front end, so you can say, these are the four pastas we have; which one do you wanna work with? 

And I just learned to start saying things like "module," saying things that made sense to them. So I would obviously say tag as many things individually as possible. It doesn't make it as easy to code uniformly, 'cause you have to keep in mind all the different variables that you've defined for yourself, but the more variables you have and the more you can keep track of those things, the more difficult it will be for me as an adversary to find all of them and abuse them. 
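To make that concrete, here is a minimal sketch of the "central module with a grep-able tag" idea, assuming a Python codebase; the SecretStore class, the tag string, and the PAYMENTS_API_KEY name are all hypothetical, invented for illustration: 

```python
# Hypothetical sketch: one central, tagged accessor for secrets instead of
# keys hard-coded throughout the codebase.
import os

SECRET_TAG = "managed-secret"  # grep-able marker: every secret lookup goes through here

class SecretStore:
    """Central accessor so secrets have one place to live, to audit, and to rotate."""

    def get(self, name: str) -> str:
        # Pull from the environment (or a vault) rather than a literal in code.
        value = os.environ.get(name)
        if value is None:
            raise KeyError(f"{SECRET_TAG}: secret {name!r} is not provisioned")
        return value

secrets = SecretStore()

# Before: API_KEY = "sk-live-abc123"  (scattered, invisible, painful to rotate)
# After: one tagged call site, findable with `grep -r managed-secret` across the repo.
API_KEY = secrets.get("PAYMENTS_API_KEY")
```

The remediation then reads in dev terms, "route key lookups through the tagged module, grep for the tag to find stragglers, rotate centrally," rather than "go fix all the places where this happens." 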

Other things I would say: really strong input sanitization. Make sure that the user doesn't have as much creative license as they'd want, to just put random things into input parameters and boxes and stuff like that. Try to control error messaging as much as possible. You'd be surprised just how far we can get learning that. 

Oh, you put your username in incorrectly; it should be first name dot last name? I'm like, beautiful. Love that for me. I'm just gonna go look up all the employees who work at this organization. So I was just giving them really specific things they could go for: targeted, easy to find, easy to fix, and then hopefully not do that practice in the future. 

Also, rather than just saying "code more securely" or "stop doing this" or "go find all the instances of that." That doesn't work very well, I learned. 
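A minimal sketch of those two recommendations, assuming a Python login handler; the regex, the messages, and the check_credentials helper are hypothetical stand-ins: 

```python
# Hypothetical sketch: allowlist input validation plus uniform error messaging.
import re

USERNAME_RE = re.compile(r"^[a-z]+\.[a-z]+$")  # expected shape: first.last

def check_credentials(username: str, password: str) -> bool:
    # Stand-in for a real lookup against a hashed credential store.
    return False

def login(username: str, password: str) -> str:
    # Input sanitization: reject anything outside the expected shape before it
    # reaches authentication logic, queries, or logs.
    if not USERNAME_RE.fullmatch(username.lower()):
        return "Invalid credentials."
    # Controlled error messaging: the same generic response whether the format
    # was wrong, the user doesn't exist, or the password failed. A hint like
    # "username should be first.last" hands an attacker an enumeration recipe.
    if not check_credentials(username, password):
        return "Invalid credentials."
    return "Welcome."
```

The point is the pattern, not the specifics: constrain what comes in, and don't let error text teach an outsider your naming conventions. 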

Chris Romeo: So are you in the code as you're preparing your remediation? You're writing this guidance to the developer in a Jira ticket or a bug or whatever, or in the report. Do you have the code open? Are you giving them line numbers, pinpointing where they need to go to fix it? 

Maril Vernon: If I can, I'd love to do that. If we are doing API testing or fuzz testing or source code analysis, great. I would say, here's an instance of this on these lines; that's an instance of what I'm talking about. But we don't oftentimes have access to all of that. So instead, I would say, this was made possible; 

can we look together at where this is and why that was possible? And then I can help you find an intelligent way to fix it. But I'm not always able to provide them the Splunk rule or provide them the exact syntax that they can grab or something like that. Unfortunately, it's just a limitation of the testing, but that's why I love purple team exercises, 

'cause maybe we found that thing in the red team operation, and then I would love to sit down after and go in depth on it with them and find it and say, okay, if we can't remove that completely, what can we put around it? Or what can we alert on? Or something like that. Like, an unintelligent remediation recommendation would be, don't let anyone use the clipboard, because obviously we can copy and paste a lot of info and exfil, and that's bad. 

They're like, that's impractical. Everyone's gonna use the clipboard. I'm like, but could you alert on people in soft-skill roles, like sales and marketing, using the clipboard from the command line, which is not typical for their behavior, right? That would be something that's suspicious. So, you know, you just try to make it as tailored as you can. 
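As a hedged sketch of that tailored detection idea, in Python rather than any particular SIEM's query language; the event fields, the department list, and the clipboard markers are assumptions for illustration: 

```python
# Hypothetical sketch: flag command-line clipboard use by users in roles where
# that's unusual, instead of banning the clipboard outright.
from dataclasses import dataclass

SOFT_SKILL_DEPTS = {"sales", "marketing"}  # roles where CLI clipboard use is atypical
CLIPBOARD_MARKERS = ("clip.exe", "get-clipboard", "xclip", "pbpaste")

@dataclass
class ProcessEvent:
    user: str
    department: str
    command_line: str

def is_suspicious(event: ProcessEvent) -> bool:
    # Alert on the combination (who + how), not the tool itself: clipboard
    # access from a shell is routine for engineers, anomalous for sales.
    cmd = event.command_line.lower()
    touches_clipboard = any(marker in cmd for marker in CLIPBOARD_MARKERS)
    return touches_clipboard and event.department in SOFT_SKILL_DEPTS

# This event fires; the same command from an engineering account would not.
print(is_suspicious(ProcessEvent("jdoe", "sales", "powershell Get-Clipboard")))
```

In practice, logic like this would live as a rule in whatever alerting pipeline the org already runs. 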

Robert Hurlbut: Let's talk about collaborative methods of AppSec. So that's a sort of a term that I know you've used in referring to threat modeling and purple teaming. So can you talk about that concept and also how does that make AppSec collaborative? 

Maril Vernon: So talk about what I think application security is. 

Robert Hurlbut: Or the roles of threat modeling and purple teaming in helping with collaboration in AppSec. 

Maril Vernon: Yeah, I think application security is like a mindset. It, in itself, is not a methodology, right? It's a practice. And I think that there are multiple tools that enable us to implement AppSec effectively. Kinda like zero trust: zero trust is a framework, but it's not something you can just click a button and implement and there it goes. 

And it's not like, "I zero-trust-tested this." That's not a thing. But it helps guide some of your actions to be more secure. And that's what application security is to me. So when you do things like threat modeling, everyone says, is there a tool for this? Can I automate this in some way? 

Can't we just run a scan? And we're like, no, because when you use a human to systematically and strategically look at something as a whole, we're gonna catch things that scanners would miss. Scanners are gonna look for configuration items and known CVEs and stuff like that, but they're not gonna catch a lot of CWEs, common weakness enumerations, things like a weak password policy. They wouldn't say you have a weak password policy and that's affecting most of your users. They would just say, I was able to brute-force that password. So brute force is a problem, but why is it a problem? Let's look at the source of why it's a problem, and let's try to, from a policy and people and technology perspective, change as many things as possible. 

So it trickles down to the CVE level of stuff, the actual vulnerability level of stuff. We're working on the threats when we threat model; we're not working on the vulnerabilities, right? So you can't just do it with a scan. You need humans to do it. And the people who know the system the best are the ones who built it, or advised on building it, or gave the objectives for it being built. 

So we have to get all those people together. Like, the ISSOs are like, do I really need to be here? Yes, we would love it if you were here to give input, 'cause it's crucial to have all the points of view. And when it comes to things like purple teaming, again, it's a collaborative approach. 

I've seen so many benefits, not even from purple teaming, but just from getting lots and lots of engineers and offensive people and the zero trust people and the policy people on one call and just hearing them talk about their work, or how the work they're doing affects the product. And they make so many light bulb moments and connections and realizations and realize, wow, we could work together on that. 

And then it would be easier for both of us. I'm like, yes, this is beautiful. This is how we build more secure applications. It's not just about the code, it's not just about the tech stack that we select. It's about how we as humans work together. Because we all have a different area of responsibility to securing that thing. 

And that's how things like threat modeling and purple teaming collaboration help to influence better secure design, in my opinion. Sorry, that was a long answer. 

Chris Romeo: Yeah, I think that's definitely it. The collaborative method is a very powerful thing, to bring different groups of people together that don't normally speak to each other and open that door to have them work with each other and understand. And we've been talking a lot about empathy over the last couple of years; empathy plays into 

that collaborative process as well, because now I have a better appreciation as a security person for what the product manager does, for what the developers are doing, for what you're doing as a red teamer, for my network security and InfoSec friends that are part of this equation too. It's just powerful to have a bit more understanding for each of these groups. 

Maril Vernon: It really is, and it helps because the builders and the defenders don't just see the end objective, right? They don't just see the tip of the iceberg where the red team won. Sometimes it's good for them to see us flounder, and see our stuff not work, and our payloads not go off, and stuff not execute. 

And we're like, oh crap, guys, how do we get creative? And we're pivoting, and we're like, if we can't figure this out, we're dead in the water. They're like, yes, dead in the water! But we somehow pull it together, dang it. It tells them that we're humans too, and our job isn't as easy and flawless as we make it look. 

Nobody's is. So it makes you a human to them, and we're all humans on the same team. 

Chris Romeo: Can you take us through a story of purple teaming? Is there an example that you can think of? I'd love to have you take us through the story, and you can anonymize it or whatever you gotta do to protect the guilty or innocent, depending on how you look at the equation. But I'd love to hear an end-to-end story about a test that you did, and then the collaborative process that you went through working with the different parties, and then how things got fixed. 

I'd just love to have that end-to-end perspective on this. 

Maril Vernon: Yeah, a lot of my purple teams follow the same formula; they're the same exercise. What I basically like to do is open up the call and set the stage. For one of my bigger orgs, it's usually following a red team operation. For smaller orgs, I'll be given individual objectives: 

we wanna evaluate a new solution, or we're piloting a product; can you tell us if this really is addressing gaps that we have, or if it's just layering on top of something else? And what I like to do is say, these are the things we're gonna be executing today. These are the things we're gonna be repeating, and why. 

And then I start by saying, now listen, all you know is that we adversary-in-the-middled and captured credentials, right? You don't know how, and you don't know why we chose that method over another method. Here's why we chose this: we had this whole menu of things we could have done; we picked this one for this reason. 

We had the payload execute this way and tie back to this thing for this reason, 'cause it made it easier for us to blah, blah, blah. And I'm walking them through it, showing them my screen and executing the exploit, hopping over to a user computer, going to the website, clicking the link, entering my credentials, whatever, and I'm showing them 

what's happening and what I'm getting from it, and why it's valuable to me, and how that's furthering me to the next objective. And I say, now, I could have gone here or here, choose your own adventure, but I went here because I knew if I got this, that would give me that, and then I would achieve my end objective. 

And I literally just start walking them through that. And I'm like, now I'm gonna do it. I'm gonna execute that thing. Go look at your logs. Go look at your alerting. You don't see it? Okay, dial it down: you know it's coming from my endpoint, my IP address, my hostname, my username. Look at those things. Okay, now you see it, but two levels up 

you don't see it. Why don't we see it? And they're like, oh, crap. Why don't we see this activity? We should totally be alerting on this. And oh, the rule is broken. It's not specific enough. We didn't think of this syntax. We didn't think of this thing. And the guys are talking to each other; I don't have to tell them, 

why don't you try to alert on this? The detection engineers and the SOC analysts are starting to play off each other: oh, bro, do you see that? Yeah, that looks great. Okay, we're gonna try this, we're gonna try this. Maril, do it again. And I do it again. Maril, do it again, and I do it again. And they're seeing it work in real time, and then they start seeing the logs come in, and everybody kind of celebrates, and it's fun. 

And then we test it again, to make sure the rule is working as intended. Try it from a different endpoint. Try it from a different user. See if it's really working, not just from one known signature, one known hostname, but from different points of telemetry. See, if you hadn't seen this alerting actively, could you have gone to the logs and seen when I did this, where I did it from, and who did it? 

And I'm trying to fill in as many gaps for them as possible, but I'm showing them all the places I would hide and all the places they should look. And they're figuring out amongst themselves from just that little seed of information, how they can defend against it. 'cause they know their job the best, right? 

They know how their Splunk rules work the best, how their SIEM tool and SDLs work the best, and how all their logging works. And if logging is broken somewhere, I'm like, we should turn that on, see what happens. And then they just Slack somebody, like, hey, can we turn on logging? It's gonna cost this much money. 

Yeah, fine. We'll be catching all this stuff. Yeah, fine. Great. And at the end, we're able to say, these are all the things that were created: this many new detections, this many new rules, these buckets were created, this logging rule was made, this new service was turned on. We adjusted the configurations of this thing to catch 20% more of these likely scenarios. 

And we present all that at the end and say, that's what came from simply drilling down on 12 TTPs that made the previous red team operation successful. Now, ideally, if the red team did that again, they would not be successful, for exactly these eight reasons. I think that's pretty powerful. 

That's a very general aspect. I can't recall a specific story right now, but most of mine would typically run that way. 
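Abstracting that story, the loop is: execute a TTP, check whether detection fired, tune the rule, repeat. A toy sketch of that flow, with hypothetical stand-ins for the operator actions and the SIEM checks: 

```python
# Illustrative only: the execute/look/tune/re-execute loop from the story.
def execute_ttp(ttp: str) -> None:
    print(f"red team executes: {ttp}")  # e.g., credential capture, payload run

def detection_fired(ttp: str) -> bool:
    return False  # stand-in for checking logs and alerting in the SIEM

def tune_detection(ttp: str) -> None:
    print(f"blue team tunes the rule for: {ttp}")  # fix syntax, add telemetry

def purple_exercise(ttps: list[str], max_retests: int = 3) -> dict[str, bool]:
    results = {}
    for ttp in ttps:
        for _attempt in range(max_retests):
            execute_ttp(ttp)                # "Maril, do it again"
            if detection_fired(ttp):
                break                       # seen in real time: celebrate
            tune_detection(ttp)             # rule broken or not specific enough
        results[ttp] = detection_fired(ttp) # final state goes in the report
    return results

purple_exercise(["adversary-in-the-middle credential capture"])
```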

Chris Romeo: And how long is this process going? Is this a couple of days, couple of weeks, couple of months? What's the duration of one of these exercises? 

Maril Vernon: It depends on the number of TTPs that you're testing. One of the first purple team... 

Chris Romeo: Wait, remind me what TTP is again? 

Maril Vernon: It's a tactic, technique, and procedure. It comes from the MITRE ATT&CK framework. A tactic is one of the phases of a kill chain, like initial entry, discovery, data collection. A technique is one of the ways you can execute that goal, a sample of a way you can do that; like brute forcing would be a method of credential capture, et cetera. 

And then the procedures are the exact, multiple ways you can effect that thing. But we also refer to them as TTPs. So a brute force is a TTP, a phish is a TTP. So what we do, it depends on the number. Really, I try to make them very concentrated doses, 'cause I wanna take, again, as little of the blue team's time as possible. 

So what I'm saying is give me all these people from all these teams for three straight days. Like we log on to the zoom call at eight, we log out at three, I do some reporting and we're done. And we do that for three days, sometimes five days depending on the number. But I always say dial down more than go for big. 

One of the first purple team engagements I ever executed, I did all of the known Mac TTPs in Atomic Red Team. All of them, like all 194. And that was a little beefy. We got through it: okay, another one; okay, another one; okay, another one. But it was too much. It was overload. And that's what we're trying to get away from with purple teaming, right? Those big, long, 80-page reports with numerous findings that go to a backlog and die. So I started doing no more than like 20 at a time. Let's really focus on these 20, make sure we have these 20 covered from multiple points, call those good, and then move on to the next batch. 

So, ideally, no more than three business days, and I can do all the reporting on the back end, 'cause I wanna get in, not take up your time, and get out. That's my goal. 
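For a concrete feel of that batching, here is a hedged sketch that walks a local clone of the Atomic Red Team repo and pulls the first 20 techniques with macOS tests; the directory layout and YAML fields reflect the public redcanaryco/atomic-red-team repo, but verify them against your checkout: 

```python
# Sketch: select a 20-TTP batch of macOS atomics from a local clone of
# https://github.com/redcanaryco/atomic-red-team.
from pathlib import Path
import yaml  # pip install pyyaml

def macos_ttps(repo: Path, batch_size: int = 20) -> list[str]:
    batch = []
    for spec in sorted(repo.glob("atomics/T*/T*.yaml")):
        doc = yaml.safe_load(spec.read_text())
        tests = doc.get("atomic_tests", []) or []
        if any("macos" in t.get("supported_platforms", []) for t in tests):
            technique = doc.get("attack_technique", spec.stem)
            batch.append(f'{technique}: {doc.get("display_name", "")}')
        if len(batch) == batch_size:
            break  # focus on these 20, call them good, then take the next batch
    return batch

for ttp in macos_ttps(Path("atomic-red-team")):
    print(ttp)
```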

Robert Hurlbut: So, continuing on about purple teaming: you've talked a little bit about some of the techniques that you've used, but what are some resources for anyone to learn more about purple teaming and getting started and so forth? What are some things that you recommend? 

Maril Vernon: I've got numerous talks out there about how to build a purple team, who should be on it, methodologies. A great resource to start with is the Purple Team Exercise Framework, which is put out by SCYTHE. And if you've never conducted an exercise, it's got a really great back-and-forth flow between the teams, like how a sample dialogue could go. 

And as you do them more, you'll get more comfortable with it. I've got talks on, if you have two people, how it could look; if you have one person, how it could look; if you have eight people, how it could look; who could be involved; how you can play it. There's different ways to do it. You could have a red teamer ride along with the blue team and try and point them in the right direction. 

You could do catch-me-if-you-can and not let the two teams talk to each other at all. I think the best method is to let all the teams talk to each other and do it open-book style. And I was exposed to that at Cyber Shield. Our blue team couldn't keep up with us at all, so we ended up stopping the exercise on day two, opening our books up, and repeating everything from day one with them watching us. 

And that's where my favorite methodology came from. There's numerous great reporting tools out there, everything from SnapAttack to AttackIQ to SafeBreach to PlexTrac to SCYTHE; they have a tool. Atomic Red Team is a great place to start if you're not sure what to test for but you wanna cast a really wide net at the MITRE ATT&CK framework and give yourself some baselines, some qualitative grades on where your defenses stand currently, so you can focus on your gaps. 

I always throw my layers in ATT&CK Navigator; I'm a big fan. Yeah, there's definitely a number of great tools out there. If those aren't a good starting place, someone DM me and I'll give you more. 

Chris Romeo: So, from a vendor perspective: you mentioned a couple of different vendor names in that process. Do these vendors provide purple-team-specific tooling, or is that red team tooling? Do purple team tools exist that are called purple team tools, like things I could buy? 

Maril Vernon: There are a number of things that call themselves purple team tools. Like, PlexTrac and SCYTHE do call themselves purple team tools. But you're gonna have to keep in mind, because purple is a combination of different disciplines, what that's really gonna do is bring a little bit from each discipline. 

So some of the tools have runbooks you can execute. You can take manual campaigns that you've built, turn them into a runbook, and throw them in the tool, so the blue teams, as they build detections, can retest for themselves, click a button, and have it automatically script itself out. Then they can tie those mitigations to that campaign and say, when first executed: 80% successful, 20% blocked; and now we're at 40% successful, 60% blocked. 

They can tie it to specific operations or campaigns. You can also see the improvement of detections over time. So you've got a little bit of the red team exploitation in there, and you've got a little bit of the blue team reporting and retesting hamster wheel built in there. I always say it's best to get a tool. 

I can do it off open source, but not as effectively as if I have tools, because I become a single point of failure, right? I'm moving on to the next engagement, and they're like, we need you to repeat that. We left the exercise with eight items pretty well covered and four "work on this" items, and we think we're done. 

Can you do it again? I'm like, I can't; I'm on the next thing. Also, we need a version of this report and this report. I become the single point of failure, and that's not effective. So eventually your purple team will need to get a tool to start becoming effective. But your suite of tools might look different than someone else's ideal 

suite of purple team tools. A lot of the folks I know doing enterprise purple teaming at big, enterprise-sized orgs, like Meta, they dev their own in-house tools. They have their own C2. They combine it with their own reporting capability. They have the ability to build and define their own TTPs and stuff like that, 

'cause we have to keep in mind, MITRE is limited. It's limited to what we've actually seen in the wild; it's not everything that's possible. So you might be testing and defending against things that are relevant to your org but that MITRE hasn't published yet, things like that. So it's just important to try a product and see if it works for your tech stack, your people, and your environment, and make sure you pick the solution or combination of solutions that's best for you. 
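As a back-of-the-napkin illustration of that "improvement of detections over time" reporting, a tiny sketch; the campaign name and the counts are invented, not pulled from any product: 

```python
# Invented numbers: track how much of a repeated campaign still succeeds
# after each round of detection tuning.
runs = {
    "AiTM credential capture": [
        {"executed": 10, "stopped": 2},  # first run: 80% successful, 20% blocked
        {"executed": 10, "stopped": 6},  # after tuning: 40% successful, 60% blocked
    ],
}

for campaign, history in runs.items():
    print(campaign)
    for i, run in enumerate(history, start=1):
        success = 100 * (run["executed"] - run["stopped"]) / run["executed"]
        print(f"  run {i}: {success:.0f}% succeeded, {100 - success:.0f}% blocked or detected")
```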

Chris Romeo: How do the tools, the purple team commercial tools, stack up against somebody such as yourself who's got a lot of experience doing this? I don't know if freestyling's the right word, but I can imagine you're doing some freestyling when you're doing a test and somebody points you at something. 

You have playbooks, you have ideas, but you may create something new in the process of that test. How does the tooling stack up against what Maril has in her brain, and how she can identify different scenarios, change on the fly, make adjustments, and get to a successful test? 

Maril Vernon: I love that question, 'cause I get it all the time. The answer is, it's the same as what automated tools do for pen testing. It really helps to have the things that you can have automated, automated, right? You don't need me to manually try and download known malicious tools that are signature-based. 

Let's have a tool do that and see if it's successful, and if so, where and why. You want me to be spending my time on the manual validation of the really critical issues, on getting creative, on building my own thing, doing my own secret sauce. So I like to use the tools to get some foundational items covered, 

maybe give myself a good starting point for my manual investigation. Automate as much as possible so you can spend that 20% of your precious manual time on the things that matter: the big-impact items, the blast-radius items. That's how the purple team tools really help you as a purple teamer as well. 

It's the same. 

Chris Romeo: Where do you see purple teaming going in the future? Is this something that will eventually solve all the problems, and then we'll move on and we won't need it anymore? Or where do you see this going? 

Maril Vernon: I love this question. I'm so happy you asked. I actually see purple teaming, again, as not being a dedicated team. Let's not just put the pen testers and the SOC analysts and the detection engineer on a team and call it the purple team. Let's instead move towards a collaborative security mindset. Let's let a lot of teams talk to each other all the time. 

Let's have them playing off each other's exercises and outputs all the time. If the blue team creates a new process, I as a red teamer wanna know that. I wanna know what your investigative process is. 'Cause if I'm like, hey, if they see this over here, they're gonna drop everything they're doing, investigate that 10 layers deep, and waste all their time, then we can inject over here; we can mess around over here and they won't even know for hours. 

That makes me a better tester. And then they'll develop a process to address that, and it makes us each better. And I think that's the future of purple teaming. It's not gonna be a proper purple team, an orange team, a green team, a blue team; it's gonna be just all of us working together. 

And from a light spectrum perspective, that's white teaming, the combination of all the colors. MITRE has implemented white teams for years now; I think for two years they've had white team leads for the evaluations team. And white teaming is truly the combination and the embodiment of collaborative security from all the disciplines across the org. But I think, eventually, with AI tools, we already see a good number of pen testing functions on the way out, because AI tools will again really beef up that automated process. You can never get rid of humans completely; there are things red teamers can do that automated bots and AI and tools cannot. But a lot of purple teaming will be automated, and we will find better ways to feed each other information and get more visibility, and then purple teaming as a proper practice, 

dedicated purple teams, won't be a necessity, 'cause you'll just have total security solutions, cyber resilience departments, and stuff like that. That's where I see the future of it going in the next three-ish years. 

Chris Romeo: Lemme read that back to you and just make sure I understood. So the purple team tooling is good to check things that you've already put your seal of approval on. So if you leave and go somewhere else, the tool can replicate whatever the test was that you did, 

if they want to test it every day for the next two weeks, 'cause they're trying to solve the problem. So the real benefit of the tooling is that it makes some of the parts repeatable. It doesn't have to be creative at that point, because they just wanna see, did that log fire? 

It won't fire; we'll try it again tomorrow; we can't figure out why this log won't fire. They don't need you sitting there typing manually.  

Maril Vernon: Correct. Yeah, you got it. Absolutely. 

Robert Hurlbut: All right, Maril, we really appreciate you joining us today. We want to finish out with a few questions that we'd like to ask you as our lightning round. So the first one is controversial: what's your most controversial opinion on application security, and why do you hold this view? 

Maril Vernon: Developers and red teamers should be friends. We should be best friends. We're all on the same team. And I hold this view because I know that it's often us versus them: "we shouldn't give them a local user account, an assumed breach; they didn't really do their jobs if they didn't get through the firewall." 

And let me tell you how ineffective that makes our security program as a whole. You don't want me toiling with your firewall for two straight weeks, 'cause eventually I'll get past it, and that was a waste of time. Let's test the defense in depth. Let's see that all of the microsegmentation and all of the user groups that we define and everything else are working as beautifully as we hope. 

Because you get what you inspect, not what you expect. And I know we all think we do a beautiful job when we code something to perfection, but there's gonna be something we miss, 'cause we're humans. We're humans, and it's really important to let us get in there and find the bad stuff and look under the rug before China tries it. 

So red teamers and developers should be best friends. If you're not best friends, become best friends. 

Chris Romeo: I'm just writing down what you said. You get what you inspect, not what you... 

Maril Vernon: Not what you expect. 

Chris Romeo: Now, is that an original line? Is that an original Maril line? 

Maril Vernon: Unfortunately not. It came from my very first CISO ever, the guy who gave me my first pen testing and InfoSec gig, Bertram Carroll, genius of a man. 

Chris Romeo: And that'd make a great... 

Maril Vernon: He would say that to me. Yeah, he would say that to me all the time. All the time. Engraved it in my brain. 

Robert Hurlbut: Definitely. That's great. All right, so here's another one: if you could display a single message on a billboard at the RSA or Black Hat conference, what would it say? 

Maril Vernon: Give more non-traditional talent entry-level positions. Give more people entry-level jobs. So many orgs are afraid to trust their security program to newbies like I once was, 'cause they're like, ah, it's gonna crash and burn. You're gonna mess up. You're gonna take a table in prod down. You're gonna think a false positive is real, or worse, that a false negative isn't. 

And I'm like, let me tell you, you're already not doing a good job anyway. So why don't we just let the newbies come in here and try? Why don't we just get more people in there and give your oversubscribed, under-resourced departments a break, so that they can come in with better invigoration, better creativity, more clarity of thinking? Get more diversity of thought in there. Because if no one had given me a chance four years ago, I wouldn't be here today. 

Security wouldn't be where it is today. Purple teaming might not be where it is today. Give more newbies a damn chance, please. 

Robert Hurlbut: Excellent. And what's your top recommendation for those interested in security and why do you find it valuable? 

Maril Vernon: Ooh, to get into security: for me, my most valuable book was the all-in-one book that I read for my first certification, Security+. I think it gave me a really good 360-degree view of that top, like, 10%: why we do things, where they come from. I think those are valuable. I think books that are specific to your vertical are really valuable, and there's a number of them. 

So I liked The Pentester BluePrint. And if you're into secure coding, there's secure coding books, there's macOS books out there. But honestly, for career development, my two favorite books are The First 90 Days and Make Your Next Move, because The First 90 Days talks about, if you're in a new position, if you're in a new industry, or if you're in a promotion, even at the same org, 

how do you prove value in your first 90 days? Because there's a value curve where you're sucking resources from the company as you learn and orient, not giving much value back, and then you need way less input from the company as you give massive value back. And there's a way to reduce your 

break-even point on that curve and start providing value back: getting immediate wins in your position, demonstrating some change, to prove that you're good at your job and that you can be trusted with the next job and more budget and more people. So I think all InfoSec professionals should read that book. 

Chris Romeo: Yeah, it sounds like a great book to check out. What would be your key takeaway or your call to action for our audience here? You can give them homework; I don't know if they're gonna do it. I'm not gonna ask 'em; I'm not gonna check the homework. But what would be a key takeaway or a call to action for the audience? 

Maril Vernon: Start getting on collab calls with each other. That's where it can start. Red teamers, please reach out to other people in the org, not just the technical departments, not just the devs, not just your SOC, not just your detection, not just CTI. Reach out to the business departments, the finance people, the audit people, the operations people, and start making yourself a human and an ally to them, because they will become your insiders, right? 

They will become that insider information, that CTI, you can use to influence your operations and punctuate big problems for your org that aren't getting enough time and attention. And to everyone else: please reach out to us. We're humans too. We want our paychecks to keep being signed too. We are either all gonna be breached together or not breached together. 

So I just think that a lot more people should get on collab calls and just see what happens. See what products emerge as a result. See who starts collabing on stuff, like the most insane combinations of people. When you see compliance and devs collaborating, you're like, this is weird, but I love it. So I would love to see more of that. 

So I would just urge you all to reach out to each other. Slack each other. Say, hello, I work on this team; I'm interested in what you do; this is what we do; can we collab? And just see what happens for your org. You'll be surprised. 

Robert Hurlbut: Definitely all good advice. So thank you Maril. Really appreciate it today. Really enjoyed talking with you and just learning some more about AppSec and purple teaming and intersection with threat modeling and many other things. So thank you again. Really appreciate you joining us. 

Maril Vernon: Thank you so much for having me. And by the way, threat modeling and purple teaming are great tools to use to influence operations like pen testing and zero trust implementation. If that's what you're interested in, you should check 'em out. This is my last parting gift to the audience.
