The Application Security Podcast

Kim Wuyts -- The Future of Privacy Threat Modeling

Chris Romeo and Robert Hurlbut Season 10 Episode 14


Kim Wuyts discusses her work in privacy threat modeling with LINDDUN, a framework inspired by Microsoft's STRIDE for security threat modeling. LINDDUN provides a structure to analyze privacy threats across multiple categories such as linking, identifying, detecting, data disclosure, and unawareness. The framework has been updated over the years to incorporate new knowledge and developments in privacy, and it has become recognized as a go-to approach for privacy threat modeling.

Kim believes that privacy and security can be combined and highlights the importance of protecting individuals' rights and data while securing systems and assets.

Privacy by design, which focuses on reducing unnecessary data collection and considering individual needs, is discussed in relation to secure architecture and threat modeling. The Threat Modeling Manifesto is emphasized as a significant resource for promoting privacy threat modeling. 

Kim addresses emerging trends in privacy, including the concerns surrounding AI and responsible AI, and stresses the need for increased awareness among individuals and companies about privacy issues and the importance of privacy protection.

Listen in as Kim explains the importance of collaboration between security and privacy teams, integrating privacy into security practices, and recognizing the value of privacy for both privacy protection and overall security.


FOLLOW OUR SOCIAL MEDIA:

➜Twitter: @AppSecPodcast
➜LinkedIn: The Application Security Podcast
➜YouTube: https://www.youtube.com/@ApplicationSecurityPodcast

Thanks for Listening!

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Kim Wuyts -- The Future of Privacy Threat Modeling

[00:00:00] Chris Romeo: Kim Wuyts is a Senior Privacy Researcher at the imec-DistriNet Research Group at KU Leuven, Belgium. She has more than 15 years of experience in security and privacy engineering, and she is one of the driving forces behind the development and extension of LINDDUN, a privacy threat modeling framework. She's also a co-author of the Threat Modeling Manifesto, program co-chair of the International Workshop on Privacy Engineering, and a member of ENISA's Working Group on Data Protection Engineering.

[00:00:32] Kim joins us to catch us up on the world of privacy threat modeling with LINDDUN, and also extend our knowledge into privacy by design and privacy engineering. We hope you enjoy this conversation with Kim Wuyts.

[00:00:50] Hey folks. Welcome to another episode of the Application Security podcast. This is Chris Romeo. I'm the CEO of Kerr Ventures. Also joined today, as always by my good friend Robert Hurlbut. Hey, Robert.

[00:01:41] Robert Hurlbut: Hey, Chris. Yeah, Robert Hurlbut, uh, principal application security architect and threat modeling lead at Aquia. And excited about talking about privacy today. Hi.

[00:01:52] Chris Romeo: Yeah, something that, uh, I never know enough about.

[00:01:55] I always wish I knew more about privacy. Whenever I start talking about it, I'm like, I really don't know that much about this. But, uh, it'll be excellent, uh, to be educated by our good friend Kim Wuyts, who is back, joining the podcast for her second visit. Uh, the first visit Kim made to the podcast was in March 2020, where we talked about privacy threat modeling specifically. And, uh, it was awesome to see Kim as the keynote speaker at OWASP Global AppSec Dublin back in February of 2023. And then also on the RSA Conference stage. I don't remember what month that was. It's a blur, but I know it happened in the last, uh, I think it was April maybe, or somewhere

[00:02:35] Kim Wuyts: so. Yeah.

[00:02:35] Chris Romeo: In that timeframe. So, um, it was great to see you, Kim, on the big stage talking about privacy, talking about threat modeling, bringing that message to the industry that needs to hear it. Um, so we have so much more to learn. Now, you know, we're not gonna hear your security origin story, 'cause people have to go listen to the first episode to hear that. But I understand you're in the midst of a transition from the world of academia to the private sector. So give us a little bit of context on that, if you would.

[00:03:04] Kim Wuyts: Yeah, so I'm, um, almost at the end of my academic career. I've been a privacy, or an academic, researcher for, I think it's 17 years now, where I've been doing all kinds of things, from foundational research to more applied research, mainly on privacy and on threat modeling. And now the time has come to move on. Um, I'm looking for something more practical, hands-on, um, making a real impact and really applying it to the real world instead of, um, just focusing on the academic foundation. So I'm really excited about that. Um, still looking into my options. So if anybody is looking for a privacy engineer or privacy threat modeling expert, definitely reach out to me.

[00:03:53] Chris Romeo: Yeah, I would. Uh, and I will second that. If somebody is looking for someone to work in the field of privacy at your company, I can't think of anybody that I would rather have representing privacy in my company than Kim Wuyts. So, let's catch up a little bit on privacy threat modeling, on LINDDUN. Let's pretend that people don't know what it is.

[00:04:25] Like I know what it is like, but let's pretend that, that people don't, let's start, kind of start from the beginning and lay the foundation before we get into what's, what's been new and happening there. So just, let's just, let's just introduce LINDDUN if you would.

[00:04:36] Kim Wuyts: Okay, sure. So LINDDUN is a privacy threat modeling approach, um, very similar to STRIDE, um, STRIDE being the security threat modeling approach created at Microsoft. We've used it in our, um, research to work on, well, security analysis, and we realized that there wasn't really such a thing for privacy.

[00:04:56] So we got that inspiration from STRIDE and started building a privacy-specific threat modeling approach, which is LINDDUN. It evolved over time. I think the first publication is from 2010. Um, it got a big update in 2015. Um, we added a more, um, kind of lean, applied, um, deck of cards, which is LINDDUN GO,

[00:05:20] um, in 2020. And since then we kind of, um, try to bring together all that knowledge we captured throughout, uh, 10-plus years, um, on privacy, on threat modeling, on different domain-specific things, and find ways to capture all that knowledge. Because the main contribution of LINDDUN is that, similar to STRIDE, where STRIDE is basically an acronym for the specific security threat categories you need to analyze, LINDDUN is also an acronym, for the different privacy threat categories you have to analyze, which are linking, identifying, non-repudiation, detecting, data disclosure, unawareness, and non-compliance.

[00:06:04] I hope I didn't miss any. Um, but so in addition to being an acronym, LINDDUN has this whole set of knowledge that will help you understand those specific privacy threats and will guide you, will facilitate discussion. So that's where the deck of cards can help. Um, but so we decided that the LINDDUN knowledge base needed an update, so we added more structure to that.
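The seven categories behind the acronym can be pictured as a per-element checklist, STRIDE-style. Below is a hypothetical Python sketch: the category names follow the discussion, but the prompt questions are my own paraphrase, not the official LINDDUN catalog wording.

```python
# Hypothetical sketch: the LINDDUN privacy threat categories as a checklist,
# one entry per letter of the acronym (D and N repeat). The questions are
# illustrative paraphrases, not the official LINDDUN catalog text.
LINDDUN_CATEGORIES = {
    "L":  ("Linking", "Can data items or actions be associated with each other or a person?"),
    "I":  ("Identifying", "Can an individual be singled out from the data?"),
    "N":  ("Non-repudiation", "Can a person be unable to deny an action or claim?"),
    "D":  ("Detecting", "Can an observer deduce involvement just by detecting activity?"),
    "D2": ("Data disclosure", "Is more personal data collected, shared, or retained than needed?"),
    "U":  ("Unawareness", "Do individuals lack transparency and control over their data?"),
    "N2": ("Non-compliance", "Does processing deviate from regulation, policy, or consent?"),
}

def checklist(element: str) -> list[str]:
    """Produce per-element review prompts for a DFD element, STRIDE-style."""
    return [f"{name}: {question} (element: {element})"
            for name, question in LINDDUN_CATEGORIES.values()]
```

Calling `checklist("user data store")` yields one review prompt per category, which is roughly how an acronym-driven threat elicitation pass works.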

[00:06:30] Um, um, well, my colleagues were the main drivers there. They built this, um, knowledge base that captures all that information and makes it easy to extract both a set of cards, um, a paper catalog, uh, or something that can be used by a threat modeling tool. We have our in-house tool for risk assessment, but you can extract, like, an XML file for, let's say, the Microsoft, um, threat modeling tool or other tools.

[00:07:01] So the idea is that we capture all that knowledge in a more structured form, in a way that can be extracted, um, so that it's usable for all different types of, um, requirements, people, tools. Um, and we rebranded some of the categories as well, and added some new information, because, well, the world of privacy keeps evolving.

[00:07:25] Technology keeps evolving, and that way we also hope to be able to keep our knowledge base more up to date when new things arise. So that final part is basically the last three years of research and work by the, um, LINDDUN team.

[00:07:46] Chris Romeo: So when I think about how prolific STRIDE is in the world of security threat modeling, uh, almost anybody who's done threat modeling from a security point of view started with STRIDE, at least those of us that started, you know, 10-plus years ago. It's a very well-known term. When you interact with people that are focusing on privacy, does LINDDUN have the same impact in privacy engineering teams that STRIDE does in security engineering teams?

[00:08:20] Kim Wuyts: That's, that's a good question because I'm probably biased because the people that come talk to me know I created LINDDUN. So that's probably why I get a lot of people saying, well, yes, of course we know LINDDUN. LINDDUN is like the go-to to place when you want to get started with privacy and, and privacy threat modeling.

[00:08:38] Um, so it's kind of hard to say. I mean, there are different ones. You have TRIM, um, as an extension to, um, STRIDE, or at least to Elevation of Privilege, which is also a deck of, uh, knowledge base resources. And you have STRIPED, which is STRIDE plus P. Um, MITRE is also working on something for, um, privacy threat modeling.

[00:09:06] So that's also really exciting to have a look at. But I think, and that's hard for me to say, because, you know, I don't like to brag, but I think LINDDUN is kind of a well-known approach, at least within that space of threat modeling for privacy. Yeah.

[00:09:26] Chris Romeo: Okay. Um, so, when will LINDDUN need to be refreshed? So I started a discussion, I was talking to Loren Kohnfelder, and you both probably saw this. I think, uh, Kim, you even commented on this on LinkedIn.

[00:09:45] Um, I was exchanging emails with Loren for a different podcast interview process. And Loren's the person who created STRIDE back at Microsoft 24 years ago, not 25. Let the record show, he corrected me after, um, or corrected his initial guess of 25 years ago. And so the question we posed in the STRIDE context was: STRIDE's almost 25 years old, does it still stand up? And so I'm curious for your thoughts on LINDDUN. You said 2010 is when it kind of came into being, so, I mean, LINDDUN's 13, it's a teenager.

[00:10:18] LINDDUN's reached the teenage years, um, with all the angst and other things that come... no, not at all. But does it still meet what you need as a privacy engineering person, or is there something that could be added to it? Does it need to be adapted and updated right now?

[00:10:40] Kim Wuyts: Yeah. Yeah. So that was kind of also part of the work of the past three years, to have a look at that. So, we added more structure, and we made it in a way more generic, so that we think we now cover kind of a more complete set of linking-specific threats and identifying-specific threats. What we did was also, we rebranded the second D of LINDDUN, which used to be disclosure of information, and that was kind of a placeholder to plug in STRIDE's, or at least, uh, information disclosure, um, threats there.

[00:11:13] Um, but we decided to, um, rebrand that to a very privacy-specific category, which is now data disclosure, which has nothing to do with confidentiality, but which is really about minimization, minimality: collecting too much information, uh, sharing too much information, collecting too-specific data types, um, storing it for too long a time. So that was, I think, our main update, because we felt we had captured that somewhere, but it's such an important part of privacy that we decided it needed to be captured in a specific category there. Um, what else did we do? Um, well, unawareness now covers both the need for transparency and control, which are, especially from a GDPR, European perspective, data subject rights; the system needs to be able to support those rights for the individual.
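A minimal sketch of what a data disclosure (minimization) review could look like in practice: flag fields collected without a documented purpose, or retained past a policy cap. The field names, the schema shape, and the 90-day cap are all made-up examples for illustration, not part of LINDDUN itself.

```python
# Illustrative "data disclosure" review: flag fields collected without a
# stated purpose or retained longer than a policy cap. The schema, field
# names, and 90-day cap are hypothetical examples, not LINDDUN prescriptions.
MAX_RETENTION_DAYS = 90

schema = [
    {"field": "email",        "purpose": "account login",  "retention_days": 30},
    {"field": "birth_date",   "purpose": None,             "retention_days": 30},
    {"field": "gps_location", "purpose": "delivery route", "retention_days": 365},
]

def data_disclosure_findings(fields, cap=MAX_RETENTION_DAYS):
    """Return a finding per field that over-collects or over-retains data."""
    findings = []
    for f in fields:
        if f["purpose"] is None:
            findings.append(f"{f['field']}: collected without a documented purpose")
        if f["retention_days"] > cap:
            findings.append(f"{f['field']}: retained {f['retention_days']}d, cap is {cap}d")
    return findings
```

Running this over the example schema flags `birth_date` (no purpose) and `gps_location` (retention past the cap), which mirrors the "too much, too specific, too long" framing above.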

[00:12:15] Um, and we kind of have a placeholder in the non-compliance category, because, well, privacy kind of, well, cannot be done on its own. We rely on security. I mean, you can do all the fancy crypto stuff and anonymization or de-identification that you want; if it's not confidential, you're still losing data, so it's not private.

[00:12:40] Um, you also need to, um, work closely together with legal, because, well, you need to have a legal basis to process personal data. Um, if you are, for instance, collecting large amounts of personal data and you're not based in the EU, you need an EU representative. There's also a lot that you need to do from a legal perspective to be okay with data protection and compliance.

[00:13:05] Um, you need to work on data lifecycle management, because, well, privacy is all about personal data, so that also needs to be managed. So I think we also focused on that interaction: that privacy should not be done in isolation, but it's kind of a team effort. You need to team up with other, um, qualities, with other experts, with other teams, to make it a good whole scenario, preferably as soon as possible.

[00:13:33] So that's where threat modeling is, uh, is great.

[00:13:39] Chris Romeo: I think you just taught me a new word in the midst of explaining that, too: "minimality." I'm gonna work that in.

[00:13:45] I'm gonna, I'm gonna use that 

[00:13:47] Kim Wuyts: I hope it's an actual English word, but, uh, yeah,

[00:13:50] Chris Romeo: I thought, I'm like, I'm gonna work this in. I'm gonna try to. Now my challenge is, can I use this word sometime in the

[00:13:55] Robert Hurlbut: In a sentence somewhere.

[00:13:56] Kim Wuyts: Okay. I hope. 

[00:13:57] Chris Romeo: it somewhere. Let me, uh, let me ask one more question, cuz you talked about security and privacy. Then, Robert, I'll let you cover the kind of privacy engineering things we were thinking about. But since you just made a mention of privacy and security, and this is now my favorite question to ask privacy people: why does privacy have to be separate from security? Why is there this whole separate discipline? We have security engineering, we have privacy engineering, we have privacy threat modeling, we have security threat modeling. There's this separation between the two. And is that the right way to do it? Like, 10 years from now, would we expect security and privacy to be the same, or would we expect them to be different? What are your thoughts on this whole issue?

[00:14:45] Kim Wuyts: It's a great question. I think it kind of grew that way, because privacy was not a technical thing. It just came from compliance and was more an organizational or legal thing. So that's why it's often not in the security or the technical teams. I think, um, whether it should be combined, well, I think there's a lot of value there.

[00:15:09] Um, I mean, um, one of my privacy friends, Nandita Rao Narla, who's a privacy engineer at DoorDash, she actually combines security and privacy threat modeling together. And she made this statement saying, well, by combining it, um, we get 60% more coverage and more efficiency. So it does make sense. But I think one of the big challenges there is, and there's some misconception, especially with security people: when you say, also do privacy, they say, well, we cannot do it because, um, well, it will screw up our security, or, um, well, we don't need it because we do confidentiality.

[00:15:47] Um, I think the main issue is that privacy and security are different things. You have the CIA triad for security. You have, uh, a triad for privacy, which is three completely different words, which are, um, unlinkability, intervenability, and transparency. And I know some people hate the word intervenability, so I prefer to call it control. So it's about transparency, control, and making sure you cannot link information, you cannot deduce more information than you should. Um, and because these are two different triads, that means you need to have two different mindsets, because the CIA is really about, let's protect the system as a whole, let's protect, uh, this process or this data store, or, like, really big chunks. And for privacy, you really need to look into the data items: is this personal data? How can we minimize it? Do we need it at all? Um, so for security, you look at how can we protect our system, our assets, and for privacy, it's more about how can we protect the individuals and the individuals' rights, and how can we act in the best interest of these individuals. So it can definitely be done together, but you need to, like, change heads, um, because you have different viewpoints to tackle. But it can definitely be done, and it should be done, though I think we still have a way ahead of us before everybody will start combining it.
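One way to picture a combined review like the one described here is to run both triads as prompt lists in a single pass over each system element. A toy sketch: the two triads are the ones named in the conversation, but the question wording is my own assumption.

```python
# Toy sketch of a combined security + privacy review pass over one element.
# The triads come from the discussion; the question phrasing is illustrative.
CIA = ["confidentiality", "integrity", "availability"]
PRIVACY_TRIAD = ["unlinkability", "control", "transparency"]  # "control" ~ intervenability

def combined_review(element: str) -> list[str]:
    """One pass per element: security prompts protect the system and assets,
    privacy prompts protect the individual and their data."""
    security = [f"[security] Is the {p} of {element} protected?" for p in CIA]
    privacy = [f"[privacy] Is {p} ensured for individuals whose data flows through {element}?"
               for p in PRIVACY_TRIAD]
    return security + privacy
```

The point of keeping both lists in one pass is the "change heads" idea: the same diagram element gets asset-protection questions and individual-protection questions back to back.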

[00:17:19] Robert Hurlbut: Yeah, those were some of the challenges I remember seeing, um, in a large international, uh, company I used to work for, uh, when they would try to think about, um, the legal aspects of privacy versus the implementation. And that sort of leads us into this next question about key challenges that you've encountered in the field of privacy engineering.

[00:17:44] Kim Wuyts: Yeah, well, I tackled a couple of them before. The big one is just convincing people that privacy is important. Um, because people will say, well, I have nothing to hide, I don't care. While, when we start thinking about it, everybody has some stuff that you want to keep, well, not as a secret, but only share with, like, your close friends or your family.

[00:18:08] And it doesn't need to be public, so everybody needs privacy. And because there's so much personal data out there, because we have so many cool technologies and devices and whatever, we still kind of give it away. And even when we think we have some cool privacy settings and protect everything, we see so many examples of situations where it goes wrong.

[00:18:34] So I hope people are becoming more aware that we need to have privacy. And sometimes, I know some people switch from WhatsApp to Signal because there was an update in privacy policies, for instance. So people are getting more aware. Now it's up to the companies to follow that trend and to really embrace that privacy idea.

[00:18:55] And I think that's where the big challenge is now: convincing companies that privacy is worth the investment. Um, which is a challenge. I mean, if I talk to, like, you guys, the security people, I hear that even getting, uh, funding for security is already a challenge. And that's about protecting our assets.

[00:19:15] Now, investing in privacy, which means, well, kind of reducing the information we collect... and the more data, the more money, or that's at least the misconception that a lot of companies have. So I think there still lies a big challenge in convincing companies, because there are studies that show that you get

[00:19:38] uh, a two to five times return on investment if you invest in privacy. And I mean, if you don't, well, especially in Europe, you get lots of fines if there are checks and you fail to have privacy by design. So there are sufficient reasons, but now companies need to follow that idea.

[00:19:58] Chris Romeo: And you mentioned privacy by design. I've always legitimately wondered this, and I never knew the answer, so I'm curious to get it directly from you: what is the connection between LINDDUN and privacy by design? Is there a connection, or is there supposed to be one?

[00:20:16] What's your take on that?

[00:20:17] Kim Wuyts: Yeah, yeah. The connection between LINDDUN and privacy by design is the same as security threat modeling to security by design, or shift left, or whatever buzzword you wanna use there. Um, I think, um, threat modeling can be a great driver for that by-design process, because you already start early with thinking about all the stuff that can go wrong.

[00:20:37] And, well, you know that if you think about it early, you can start fixing it. And the outcome of the threat modeling exercise can then be the guide to make architectural decisions, to, um, define pen tests, to have all these things. So I think having a threat modeling approach for privacy by design, such as LINDDUN, or for security by design, can really help you

[00:21:00] kind of structure, maybe even, that process. I know the focus is a lot on threat modeling now, but I think that's an approach, that's a technique that can really help. I don't know what your feeling is about threat modeling for security by design or shift left. Um,

[00:21:17] Chris Romeo: Yeah, I think, uh, I mean, I think secure by design is really our ultimate goal. It's one of the things I've realized in the last, eh, maybe number of weeks, number of months, as CISA has become more prolific in their writing about secure by design, secure by default. And somebody else explained it to me this way.

[00:21:36] Um, I think it was Matt Coles, actually. He said, um, for secure by design, threat modeling is a vehicle. He might not have used the word vehicle, but I'm gonna use it now. Threat modeling is a vehicle to secure by design, but threat modeling is not secure by design. They're not the same thing.

[00:21:52] Kim Wuyts: Yeah.

[00:21:53] Chris Romeo: Just like secure architecture is also a piece of secure by design.

[00:21:56] When you pull it up and you start looking at the 50,000-foot view of the system, that's not where I normally recommend people live in the threat modeling world. Because, you know, folks like us, we can survive looking at the 50,000-foot view, because we can extrapolate all of the threats that exist by looking at one box, just because we have security and privacy experience, we've looked at so many systems. But the average developer can't look at a secure architecture in one box and say, oh, I know all 50 threats that could potentially have to be dealt with in that one box. But Matt's point was: secure by design, secure architecture, threat modeling, these are all things that are assisting us to get to secure by design. But yeah, I've become more and more convinced that that's really what our goal is. That's where we should be aiming. And we use threat modeling as a vehicle and secure architecture as a vehicle to get there.

[00:22:51] Kim Wuyts: Yeah. Yeah, I completely agree there. Yeah. Um, so for privacy, well, I'm gonna stretch the definition a bit, because GDPR explicitly mentions data protection by design and data protection by default, and I'm kind of massaging that into privacy by design. There are some differences I probably shouldn't go into, because

[00:23:13] everyone's definition of data protection and privacy and the overlap is different, but I definitely see it fitting in there. And even in the ISO standards, was it privacy engineering or privacy by design? I think both actually mention the need for privacy threat modeling explicitly. So there is that recognition from the community in privacy as well, that threat modeling is definitely beneficial.

[00:23:38] It's not a synonym for by design, but definitely, as you say, like a vehicle that will help you and facilitate that process.

[00:23:46] Chris Romeo: So when I do privacy by design... like, I feel like secure by design is me building a solid architecture. It's ensuring I have all these assurance-creating activities. It's also knowing that I am threat modeling and looking at a list of things, that I'm not exposing data, I'm not exposing confidential information that belongs to the company. Um, is privacy by design a similar journey or thought process that you're going through? Like, where do you even start for privacy by design, if I wanted to apply this?

[00:24:28] Kim Wuyts: Yeah. Yeah, I think it's similar, but then again, that mindset is different. So basically, you know what functionality you want to reach, and the question is then: can we reach the same thing but violate the privacy of the individuals less? Well, ideally we don't want to violate the privacy at all, but, um, like, how can we do this by collecting less information, or by collecting less specific information?

[00:24:52] Do we really need all of that information for that functionality? So the sooner you start with that, ideally at ideation already, the sooner you can start thinking, well, okay, this is the end goal, this is the thing we want to achieve. Now, do we really need to do that in the way that we think we, uh, I don't know, need to have our users share their location data with us 24/7?

[00:25:18] Or maybe we can get the same functionality, but just with an aggregated, um, location once a day or once a month or whatever. Um, will that still be enough? So it's kind of the same, but that different mindset again: how can we do this in the best interest of the individual?

[00:25:44] How can we make sure that the individual is fine with it? So it's, it's also bringing the individual and their needs, their expectations into the equation basically.
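The location example can be made concrete: keep one coarse, aggregated location per day instead of a raw 24/7 GPS stream. A hedged sketch: the two-decimal rounding (roughly kilometer-scale) and the function shape are arbitrary choices of mine for illustration, not a recommendation from the episode.

```python
# Minimization sketch: collapse raw GPS fixes into one coarse centroid per
# day. Rounding to 2 decimal places (~1 km) is an arbitrary example choice.
from collections import defaultdict
from statistics import mean

def daily_coarse_locations(fixes):
    """fixes: iterable of (date_str, lat, lon) raw GPS samples.
    Returns {date_str: (rounded_mean_lat, rounded_mean_lon)} - one coarse
    point per day, instead of retaining every individual fix."""
    by_day = defaultdict(list)
    for day, lat, lon in fixes:
        by_day[day].append((lat, lon))
    return {day: (round(mean(p[0] for p in pts), 2),
                  round(mean(p[1] for p in pts), 2))
            for day, pts in by_day.items()}
```

The same downstream functionality (say, a daily delivery-area estimate) can often be served from the aggregate, while the fine-grained movement trace is never stored.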

[00:25:54] Chris Romeo: So with privacy by design, do I start with a set of data and then reduce, to reach privacy by design? Or is there a state that I can get to where I know so much about privacy by design that I just don't even make the bad decisions up front? How does that come together?

[00:26:15] Kim Wuyts: I think that's really tricky, because, well, the more data there is, the more you can deduce, and it's really hard, before you actually know all the data you will be using, to see how it all can come together and what you can deduce from it. So sometimes it's really about seeing the whole picture of all the data items that you think

[00:26:42] will be in the system. Also, often it's not a greenfield kind of analysis, but you have this system and you want to add some functionality, and you have all these data, so let's just use it. But then the question becomes, well, do I really need all that data that we already have in the system?

[00:26:58] Well, we collected it for a specific purpose. Can we reuse it for this purpose? Does it match? Um,

[00:27:11] Um, yeah, I think it's really tricky. I mean, for security, you can never say, well, this is a hundred percent secure. I think for privacy, especially technical privacy, data privacy, that's also tricky. But I think you can at least show that you did your best efforts, and that should, let's say, hopefully be enough.

[00:27:31] It's, it's a risk based approach as well.

[00:27:35] Chris Romeo: And it sounds like when you're describing an existing system, something that we've already built, we're already using a number of pieces of data, we're storing a number of pieces of data. In that case, applying privacy by design is likely gonna be a reduction, because we're gonna look at it and say, why did we even... we don't even need to store this.

[00:27:54] Why are we keeping this?

[00:27:56] We never use it and we don't care about it, but yet we're taking on the liability of carrying that data forward, and it provides no business value to us. And so for an existing system, just like a lot of security threat modeling, we always wish we were threat modeling something before it was ever built, but

[00:28:12] 99% of the time that's not the case. We're joining the project after something's built, you know? We're like, hey, let's see if we can change out this aircraft engine mid-flight and still keep the plane up in the air. So, yeah, I could see a reduction being something that we're focused on in privacy by design for existing things, because people have notoriously not practiced good privacy principles and reduction in data and all those types of things. But in creating something new, hopefully you've got enough perspective to maybe make decisions in the process, because you can see what'll happen if we keep that data.

[00:28:59] Kim Wuyts: Yeah, yeah, indeed. If you can, at ideation, start reflecting on: well, wait a minute, it's not because we used to do it like this that we have to do it like this. Is there a way that we don't need to have, I don't know, full name, address, and date of birth to let people read a white paper or whatever?

[00:29:19] I'm just coming up with some example, but I mean, that's kind of a common situation there. Yeah. Um, yeah.

[00:29:28] Chris Romeo: Are there patterns in... I'm really going down a rabbit hole here, but I'm, I'm learning a lot in the process.

[00:29:36] So, are there patterns?

[00:29:37] Kim Wuyts: There is an entire catalog of privacy patterns. It's an academic catalog, but, well, you have privacypatterns.org and you have privacypatterns.eu, which have, like, 90% overlap, but they're still kind of managed by different people, although I think at some point there was an overlap there.

[00:29:57] Um, but yeah, you have privacy patterns there too. And you also have, um, a set of strategies and tactics, um, created by, uh, Jaap-Henk Hoepman, who is a Dutch professor, and his former PhD student, Michael Colesky. Um, they have this, they call it the Little Blue Book, I think, with a lot of privacy strategies that say, like, you need to minimize, you need to hide, you need to enforce. Um, so that gives you also some ideas of what are the things I can do if I want to get started, or, if I found this bunch of privacy threats, how can I start tackling them? So you don't immediately start looking for, I don't know, some crypto solution for something, but you can really start by looking at more high-level strategies and tactics and patterns.

[00:30:47] So, 
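For reference, Hoepman's Little Blue Book describes eight privacy design strategies: four data-oriented (minimise, hide, separate, aggregate) and four process-oriented (inform, control, enforce, demonstrate). The sketch below maps a few LINDDUN-style threat categories to candidate strategies; the mapping itself is my own illustrative assumption, not a table from the book.

```python
# Hoepman's eight privacy design strategies, split as in the Little Blue Book.
DATA_ORIENTED = ["minimise", "hide", "separate", "aggregate"]
PROCESS_ORIENTED = ["inform", "control", "enforce", "demonstrate"]

# Hypothetical threat-category -> strategy mapping, for illustration only.
SUGGESTIONS = {
    "data disclosure": ["minimise", "aggregate"],
    "unawareness": ["inform", "control"],
    "non-compliance": ["enforce", "demonstrate"],
    "linking": ["separate", "hide"],
}

def suggest_strategies(threat_category: str) -> list[str]:
    """Return candidate high-level mitigation strategies for a threat
    category; default to 'minimise' (collect less) when no mapping exists."""
    return SUGGESTIONS.get(threat_category.lower(), ["minimise"])
```

The point, as in the conversation, is to reach first for a high-level strategy (collect less, tell people more) before jumping to a specific cryptographic mechanism.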

[00:30:47] Robert Hurlbut: That sounds to me like, um, you know, just like security threat modeling can lead to your requirements, those strategies, what to collect and what not to collect, can certainly lead to, uh, similar requirements related to privacy engineering and development and so forth. Uh, shifting gears a little bit, um, you know, we were all, uh, co-authors on the Threat Modeling Manifesto, uh, but in particular for privacy engineering.

[00:31:18] Could you explain, uh, the significance of the manifesto in that field?

[00:31:25] Kim Wuyts: Yeah. Um, well, I think it has the same significance as it does for security engineering. Um, I think for privacy engineering, it gave an additional boost, because now we have this group of people, including, well, mainly security people, who also say, like, look, privacy threat modeling, it's a thing. It's equally useful as security threat modeling.

[00:31:47] We should really embrace this. So I've seen lots of people using it or referencing it, not just in the security community but also in the privacy community, saying, well, this threat modeling thing, this is something we should have a look at. And look, it's security and privacy.

[00:32:06] See, it's not just us saying privacy needs to go with security. It matters.

[00:32:13] Chris Romeo: Very cool. So when you think about emerging trends or new things that are happening in privacy, are there things that... I mean, ChatGPT and generative AI, I'm just thinking about that off the top of my head. That's got to be something that's causing some privacy wrinkles for the future.

[00:32:34] But what else is on your mind as a privacy...

[00:32:36] Kim Wuyts: You're stealing my answer already. I was gonna say AI.

[00:32:39] Chris Romeo: Sorry. That's everybody's answer to every question I ask.

[00:32:42] Kim Wuyts: Yeah. Yeah. 

[00:32:43] Chris Romeo: Oh yeah, AI.

[00:32:45] Kim Wuyts: Yeah. Well, responsible AI is, I think, one of the big things there. So it's not just about security and privacy; it's about ethics and doing it in a responsible way, actually. A privacy friend of mine has also, well, not extended, but based on the LINDDUN cards, created a whole deck of threat cards specifically for responsible AI.

[00:33:07] So there's even responsible AI threat modeling support there. Yeah, I think that will be a very interesting one. A while ago I read a paper about brain-to-text translation, so basically our brainwaves are being put into computers now, and we need to think about the security and privacy impact of that stuff too, and threat modeling can help there.

[00:33:38] So I have no idea what crazy technology stuff will go on, but I'm sure there will be a lot of fancy things there. Let me think. Other than that, yeah, I think for now AI will be the big one to tackle. Yeah.

[00:33:57] Chris Romeo: What are the privacy threats, then? Since we're all threat modeling fans, we're going to turn this back around to threats. What are the privacy-specific threats when we think about, let's not even use a specific generative AI, let's just say privacy threats in regard to generative AI systems as we're seeing them deployed right now?

[00:34:22] Kim Wuyts: Yeah, it's all the information you put in there, right? I mean, some of that contains personal information, and you kind of lose, well, ownership is not a term that the legal people prefer, but you kind of lose it. I mean, you don't know what happens with it. How is that used?

[00:34:43] How is that potentially abused? If it's used in just some AI process that feeds back only to you, then that might be okay. But if that personal information gets used in that bigger process, then other people get to use it too. You cannot trace it, because typically you don't see or get any feedback on what information is specifically used for a certain

[00:35:10] response or a certain answer. So yeah, it's a bit scary if you start thinking about that. I think that's one of the big ones there.

[00:35:24] Robert Hurlbut: What?

[00:35:25] Chris Romeo: And I've seen other folks calling for, we should have a source attached

[00:35:31] Kim Wuyts: Yeah.

[00:35:31] Chris Romeo: to generative-AI-related answers. Because right now you enter a prompt, you get a response, but you have no idea where that came from. If we could see a source attached to it, it would allow us as humans to say, okay, I generally trust that source of data, or, that's the wackiest place they could have ever found a piece of information, and there's no way that's true. It lets us kind of measure it.

[00:35:59] Kim Wuyts: Yeah.

[00:36:01] Robert Hurlbut: And generally you have to do that anyway, right? Like you said, let's say three out of four answers are correct, but the fourth one, how do I know whether it's correct or not? So you still have to do a little bit of due diligence, so there's that as well. I was going to ask about intellectual property, for example. Is that another issue for privacy with generative AI? You know, maybe somebody's core secrets, company secrets and so forth, get pushed out there, and now they're no longer private.

[00:36:33] Kim Wuyts: Yeah. So intellectual property, that's probably more corporate law than privacy, but definitely. Also, if you type up your notes and you have, I don't know, ChatGPT generate a summary or a deck of slides or whatever, that can contain corporate secrets, but maybe you wrote down, well, this person wasn't there because they were sick, or this person asked an annoying question, or whatever.

[00:37:05] That's not company secrets; that's potentially already personal data. Do you want to have all of that, not just in the PowerPoint, but stored forever somewhere in that ChatGPT or whatever interface or knowledge base?

[00:37:21] Robert Hurlbut: Right.

[00:37:23] Chris Romeo: Hmm. Okay, so here's my final question, and then we'll go to key takeaway and call to action. If you had a magic wand (you've got to love when a question starts with "if you had a magic wand") and you could change one thing about privacy across our industry, what would it be? I'll just stop there. I won't even add any more conditions on it. I'll just ask: what would be the one thing you would change about privacy?

[00:37:57] Kim Wuyts: Oh, such an easy question. Um...

[00:38:04] Yeah, I think... well, this is probably going to sound stupid, but awareness. To me, it all starts with awareness: having individuals realize, well, this is a thing worth fighting for, and then having the companies see, wow, this is a valuable thing, not just to do good, but we really need this to grow our business and earn trust from the individuals.

[00:38:28] So I think the overall awareness, getting people to embrace the idea, that would be so helpful. I guess that's why I like talking about it, to help get that awareness across in bits and pieces.

[00:38:46] Chris Romeo: Yeah, I think that's an excellent answer. I think that's still an area where we lack, and you mentioned earlier how people will say, oh, I just don't care about my data, they already have all my data. That always pains me so much. It's such a cop-out for somebody to say that.

[00:39:04] Like, do you really want them to? To your earlier point about brainwaves to text: so you're saying it's okay that if there were a digital system that could extract all of your thoughts, feelings, and deepest, darkest secrets and then post them into a generative AI, you'd be fine with that? I think most people are going to say, well, hold on a second.

[00:39:26] That wasn't what I was talking about. I was talking about my Social Security number here in the United States, or my driver's license number. So yeah, I think there's still a long way to go on awareness. When I think about the difference between what people know and have embraced about security versus what they have embraced about privacy, there's still a big gap, a big delta, between those two, and privacy just has more work to do to get the word out. But I think it's definitely a worthy cause, because after I looked at GDPR the first time, I realized I'm an individual privacy advocate as well.

[00:40:07] Kim Wuyts: Yeah,

[00:40:08] Chris Romeo: I'm like, I don't want my information out there. I want my information protected as much as possible.

[00:40:17] Kim Wuyts: absolutely. Yeah.

[00:40:17] Chris Romeo: So how about a key takeaway or a call to action, then? And you can't use the same one, I'm sorry. I checked our rule book.

[00:40:25] The Application Security Podcast rule book says you cannot use the magic wand answer and the key takeaway answer as the

[00:40:32] Kim Wuyts: Okay. Well

[00:40:33] Chris Romeo: same one.

[00:40:34] Kim Wuyts: it's the security application podcast. So let me say that I think it's important that security and privacy team up, that we bring privacy into security, because it shares so many approaches and techniques; it just requires a bit of a different mindset. And yeah, if you do privacy, if you implement privacy, if you minimize data, then it doesn't just help privacy.

[00:41:00] But the less data you have, the less you can leak, and the fewer privacy breaches, the fewer data breaches, there are. So I think it actually has value for security people too.

[00:41:11] Chris Romeo: Very cool. Thanks, Kim, for sharing your wisdom, expertise, knowledge, and experience in regard to privacy. I always learn something when I get a chance to chat with you, so I really enjoyed the conversation. And just to reiterate, Kim is leaving academia in the next couple of weeks, and she's looking for her next opportunity in the field of privacy. So if you want to hire the best possible person on earth, she's available. Thanks, Kim.

[00:41:42] Kim Wuyts: Thank you. Always a pleasure to talk with you guys. 

Podcasts we love

Check out these other fine podcasts recommended by us, not an algorithm.

The Security Table

Izar Tarandach, Matt Coles, and Chris Romeo