Modern Cyber with Jeremy Snyder - Episode
61

Celina Stewart of Neuvik

In this episode of Modern Cyber, Jeremy is joined by Celina Stewart, Director of Cyber and AI Risk Management at Neuvik, for a wide-ranging discussion on how cybersecurity findings are best translated into business risk—and why most organizations still struggle with that step.

Celina Stewart of Neuvik

Podcast Transcript

Alright. Welcome back to another episode of Modern Cyber. I know it's been a couple of weeks since we spoke to you last. Hopefully, you have not been starved for interesting conversations around cybersecurity. I do say I always recommend there's a number of other good podcasts to go out there and listen to.

Cloud Security Podcast, AI Security Podcast, two of my favorites that are in regular rotation on my side. So hopefully, you maybe found one of those to dip your toes into while we were away. But welcome back. We're excited to have today's conversation and share with you some thoughts around risk management kind of at a strategic level. And what does it mean to partner with organizations as they think about risk management as part of their strategy?

And to help me in today's conversation, I'm delighted to be joined today by Celina Stewart, the director of cyber risk management at Neuvik. Neuvik is a cybersecurity services company. They specialize in designing and optimizing cybersecurity programs, taking a risk based approach to cyber strategy, cybersecurity program management, and aligning technical controls to reducing business risk, including risks from artificial intelligence. We will definitely be diving into that topic prior to joining, Nuvek Celina worked for McKinsey and company where she was a founding member of the cybersecurity practice. In this role, she served fortune 500 clients to develop cybersecurity strategy, to optimize program performance, and to integrate cybersecurity with enterprise risk management.

Her research has been published in McKinsey on risk and other publications. She's been a featured speaker at numerous cybersecurity and industry conferences, including SecurityWeek's AI Risk Summit, HackRedcon, CloudX, and ACSAC, ACSAQ. That's not what I'm familiar with, so I'll be curious to hear more about that. In 2024, she had the highest selling training at Blue Team Convention. She holds an MBA and a MSDI from the Kellogg School of Management at Northwestern University.

Celina, thank you so much for taking the time to join us today. Thank you so much, and certainly happy to dive in on any of the above in any of the conferences. It's been, it's always nice just to do conferences and obviously get to meet other folks from across the industry. So happy to share any of that. Awesome.

Awesome. I always tell people, you know, I travel around a lot and I do go to conferences and give talks quite a lot. And one that I've been doing recently is around kind of, like, the two sides of AI in terms of, the blessing and the curse. And I always think about it as a double edged sword. And I'll I'll I what I find though to your point is that, like, you get a ton of energy and engagement from going to these events and, you know, standing on a stage and talking to people.

And I love the conversations that always happen right after that when you get off stage and the people who come up to you because they've got really interesting questions for you. I'm sure that's been your experience as well. Right? 100%. And frankly, that's where some of the best conversations happen because the conference talk, obviously, you're preparing it with a certain audience in mind or or certain talk track based on other conferences you've maybe presented at.

But the real fun is actually just hearing from people what their reflections are, what's what's sparking for them. So very much the exact same reaction. Yeah. Awesome. Awesome.

Well, let's dive into one of the questions I wanna kind of kick today's conversation off with, which is something that was mentioned a couple times in the intro and in the bio there, and that's really kind of translating findings from, let's say, like, cybersecurity risk analysis into business risk analysis. Can you walk us through how this really plays out in your world? So when you're sitting down with the customer and you say, hey, mister or missus customer, here's what we found on the cyber side or on the technology side. How do you now want to interpret that? Or how do you wanna think about the overall risk to the organization as a result of what we found?

How does that process go? Absolutely. It's a great question. And frankly, it's why I love having these types of conversations because to your point, I think a lot of folks don't necessarily take this step when it comes to whether it's a cyber risk assessment or one of the things that's not in my bio is obviously, Nuvic does much more than just the risk management side. So, we have to do a lot of offensive services.

So pen testing, red teaming, any kind of technical assessment. And so for both the nontechnical cyber risk assessment, as well as any of those offensive pieces, what makes us truly distinctive is we do tie it to risk. And so to answer your question, what we typically do is we like to think about a couple things. The first is really what matters from the finding. Right?

Because, obviously, any any type of test, whether it's a risk assessment or some type of pen test, you're gonna have, you know, up to hundreds of findings in some cases. Right? And what you don't wanna do is deliver, you know, here's an 80 page report. Thanks. Bye.

That kind of thing. Yeah. Yeah. So the first thing is really just thinking about from those findings, what actually matters. And then the piece that actually translates to risk is, okay, given it matters, what's actually the impact to the business?

And so that that's where my team typically likes to think about maybe a dollar value, if it's something that we can actually quantify. But more often than not, it's actually just putting it in terms that will resonate with a business stakeholder who maybe has no idea what cybersecurity is. Maybe doesn't even understand what a pen test or a risk assessment is, but they need to know for their job, hey. If the system goes down, you're looking at x downtime. You're looking at, you know, certain things maybe that your teams won't be able to do.

There's gonna be lost productivity. All of those different pieces, right, where you can make it very concrete for someone who has no technical understanding, that's what my team typically likes to do. And frankly, we found that risk is really the language of the business. Right? That's what executive leaders are thinking about, whether it's financial, operational, obviously, different strategic risks even when it comes to reputation or or kind of the regulatory pieces.

And so I think doing that translation, it sounds like it's such a simple thing. It's actually it it's quite hard to make sure that you're hitting that right nuance and and you're actually kind of tying it out in a way that the business will understand. But frankly, risk is the best way that we found to do that, and it it resonates really, really well. So I'm curious if you could shed, like, a little bit more light on what that means. Right?

Because, like, you mentioned okay. If I can quantify something in terms of, like, okay. You know, this vulnerability to this system is, like, okay. You know, maybe that system's down and that's a revenue generating system, so you can kind of estimate, let's say, the dollars lost on an hourly basis or something like that. But what are some of the other areas where you can really translate well?

Because I I think this is an area of frustration for a lot of organizations. It's like, I tell you you've got this risk, but it's kind of fuzzy and it's kind of vague. And it's like, it could happen, but we don't know the likelihood or we don't know whatever. Like, how do you think about that? And what are like, maybe what are some of the other examples of things that you've observed where you've been able to, like, explain to a business leader, like, hey.

Here's what this thing could do. Absolutely. It's a great question. Probably the best way to answer it is actually just illustrating with an example of of a recent case that we had. So we actually we do a fair amount of work in the mergers and acquisition space.

Right? Whether it's, for, you know, private equity and VC firms who are obviously assisting with parts of the deal cycle. But more commonly, we're also just helping organizations as they do their own diligence for corporate acquisitions. And so that's the most recent example I can think of where this actually materially changed, not just in negotiations, but actually the deal price for this company. And basically yeah.

It's it's really interesting because you don't think about cybersecurity in in that way necessarily, or at least most folks in the industry, right, aren't aren't necessarily using that as their case examples. But for us, it's it's kind of our bread and butter. And so in this example, we serve a lot of health care companies, and this was a large health care provider, thinking about basically bringing in a new practice under their umbrella of different, you know, healthcare organizations. They they kinda span the whole gamut. And so as we were going through, they had us do, both technical and nontechnical assessments.

So full cyber risk assessment as well as pen testing, number of different other, technical assessments as well. That, we really started to identify that it wasn't actually a set of technical findings or risk findings, but it was really almost organizational things, right, that were creating risk for them. So one one great example might just be obviously all of this tech debt. They had tons of legacy system. Some of which obviously in the healthcare space you actually can't move beyond right, but they need to be segmented, things like that.

So we were able to tell a really compelling story not just about, you know, hey, here's the cost if you were to rip out and replace those systems, but actually what's the cost to your team of just keeping these online during that merger period if you are to acquire this company? What may be be some of the risks actually to you of inheriting some of this tech debt? So we were able to tell a compelling story around that. There's also oftentimes interesting, almost process management and procedural things that organizations aren't thinking of. Right.

And so we're able to go through and by doing both the technical and nontechnical, actually identify, you know, hey, is the perception of cybersecurity or the cyber program actually better than it should be? Or is it less than it should be? Maybe the team is performing super well, but they're just not able to tell that story. And so we kind of come in, we we obviously have those findings, and then we can tell a very concrete fact based story. Again, using the language of risk, right, where it's, you know, hey.

Maybe your teams actually don't have the visibility they need, and that's hindering their ability to gain buy in. Maybe folks don't have the awareness they need. So we can again, that's not something you can necessarily quantify, but that's something that you can frame as a risk where, you know, if your executive leadership isn't seeing what they should be seeing, you're definitely missing parts of the picture. And, obviously, then you can't prioritize, you can't invest, all of those different pieces. So I hope that that kind of explains a little bit more.

And then, obviously, through throughout all of that, right, that that kind of helps with negotiations for that type of scenario. Yeah. For sure. And I actually think m and a is a really overlooked scenario for a lot of org larger organizations. Right?

If you think about risk in general, typically, the way one kind of, let's say, like, core underlying thing is that the larger the organization, usually the larger the risk. And when you compound that with things like what are the most, let's say, highly valued data elements. Right? Like, PHI, private health care information is, I think, the some of the most expensive data on the dark web. Right?

So it's, like, very highly sought after by threat actors. But you can take, you know, simple calculations. Size of org, time sensitivity of data, you know, leads to a larger risk. And similarly, like, size of org, time sensitivity of data means that, you know, the value is higher, and it's more valuable for a threat actor to go after to extract a ransom or to, you know, all those different types of things around it. And a lot of organizations to your point, they might have already developed their own internal programs.

But typically, when they're acquiring a company, it's very often a smaller organization that doesn't have the same level of maturity or proficiency with that. And so, couple of the things that you said, I I know firsthand examples of areas where I've seen a larger organization swallow up a lot a smaller organization, And they're just like, yeah, well, their processes were terrible. You know, they just had, like, no structure and definition around how they responded to whatever, you know, vulnerability management or password policies or whatever the case may be. And And as somebody who's recently gone through a SOC two recertification, I can tell you, like, process is actually probably more valuable than the technical controls in one of those types of assessments. Right?

So I I think what you're saying is, like, certainly spot on and and something that a lot of organizations need to be thinking about a little bit more. When you think about as you go through some of those processes with those organizations or whether it's the larger side, let's say, the acquiring or the acquiree, how do you approach kind of, like, identifying and assessing the risks? And then how do you approach kind of giving them, let's say, like, mitigation guidelines around this? Because I think, like, a lot of the times, it's like, hey. As you said earlier, you know, I don't wanna leave you with an 80 page report.

I'm gonna leave you with an interpretation. But a lot of organizations need also then, like, a plan of action. How do you approach that? Yeah. It's a it's a great question.

And in classic services slash consulting, right? It depends on the organization. It depends on the goals of their assessment. But typically what I like to say is, first and foremost, just from a tactical perspective, obviously you have to start with the way that cybersecurity is actually being reported at that organization. And so often we find our clients are using some type of framework, whether that's a NIST framework, ISO.

Obviously, there's various industry specific frameworks as well. Sure. And, typically, what we do then, obviously, we we do the whole assessment. Right? Whether it's a pen test, risk assessment, whatever.

Our findings then map back to that framework. And then we actually sit and we work with our clients directly to actually understand how they're reporting. Right? Because we we know this doesn't happen in a vacuum. And oftentimes, also the person who's actually purchasing, quote, unquote, the assessment isn't necessarily the person that we're directly working with, just given how budgets work and and things like that.

And so it's critical for our core stakeholders to be able to tell whether it's the buyer, whether it's their executive leadership, maybe it's their boss, maybe it's someone in a totally different business unit who's just, you know, kind of been like, hey. This is now falling in your lap that you have to to do this assessment. Right? And so we'll work with them, and we'll kind of step through. Obviously, none of the findings are reported in a vacuum.

You know, we keep that close communication throughout a different assessment. And so we'll pull out basically what seemed to be those prioritized items. We'll hold a working session with them, and we'll just go through, kind of explain to them what we're thinking about putting in the report. And then basically from there, obviously, we we kind of synthesize everything and then make sure that we're telling a comparing, compelling narrative around it. The two things that I would say as well, I think one is context is just hugely important, and and that's probably, you know, kind of clear through the way that I'm even stating this, but we understand that every organization is different.

Right? And so oftentimes, even the internal politics can completely change the way that a report is interpreted. And so what we don't wanna do is just to your point, number one, we don't wanna leave them with that 80 pages of a report. But we also don't wanna just give them, you know, here's a prioritized, you know, road map in a vacuum. And so we'll look at things like, you know, are you migrating, for example, from on prem to the cloud?

We still have many clients who are are in the process of that type of transition. We might be looking at, hey. Are there different models that you're bringing on? Like, are you transitioning to a lot of SaaS tools? Things like that.

Right? So we we can kind of contextualize. The other thing that we run into, and this may be where your question is leading, is you you can't just say, here's the priorities. You know, good luck implementing all of these. And so we really try to think about that road map.

Right? And and kind of tie in, you know, maybe there is something that's a critical or high, but if you're already going to go through some type of transformation, does it make sense to address that right now? Obviously, yes. You should mitigate whatever you can. But in cases where you have to rip and replace systems, maybe you should wait.

Maybe that vulnerability will actually disappear based on the context of your organization. So we try to be very mindful of those things as we're building out the roadmap. So it's not just, you know, here's best practice with absolutely no view on prioritization or effort or sequencing, things like that. Yeah. Yeah.

And to that end, I mean, as you think about kind of the contextualization around those things, you mentioned something there that is certainly, you know, something I've spent a lot of time on in the past, which is like cloud environments. Right? And a lot of organizations, there are organizations like ours that have, I would say, like, the advantage of being younger organizations where, like, you know, from the time that we incorporated, we've been a 100% in the cloud. We've never had any on prem or kind of, quote, unquote, legacy infrastructure to deal with. But I know a lot of larger organizations, which would probably be much more your customer profile, they've probably got little bit of column a, little bit of column b, or they're in the process of migration, etcetera.

As you've worked with them, are there any kind of common threads or, let's say, like, common misconceptions, misconfigurations, vulnerabilities, any kind of commonalities that you've seen for larger organizations making the move to the cloud, whether it's, like, infrastructure or kind of SaaS based stuff? Absolutely. It's a great question. And this could take the entire remainder of the podcast, so I'll try to keep it brief on this one. But definitely, we see a lot of different problems, or I guess just considerations.

Right? They're not necessarily problems unless you you kind of let them become problems. I would say the first is a number of organizations, especially those that are just adopting different SaaS tools. I think there's two things we commonly see. One is that they're actually not doing their diligence on the vendor side.

Right? And and vendors can increase a lot of risk, and bring a lot of risk into the organization. And so, certainly, we recommend even if, you know, the marketing looks great, maybe you have a great recommendation for a vendor, still do your diligence and make sure that, obviously, for your own risk appetite, your risk tolerance, that that vendor is within, obviously, that that threshold. And that's a key one. I think the second thing too is just make sure that you understand how to actually leverage those vendors.

Right? Because they may be doing incident response with you. Obviously, you wanna know how your data is being handled, whether it's at risk or inference it. All of those different pieces. Right?

So really having that diligence and making sure you have them integrated, and that the communication is there is huge. And the second thing when it comes to SaaS in particular, and I'll talk about, you know, of full cloud, in just a moment. But with SaaS especially, it's also one thing I find a trap a a little bit, if I can use the word trap for this, is that people will just buy point solutions and point solutions and point solutions and not actually think about if there's duplicative functionality, if they're almost paying for multiple things that they shouldn't be. And so, again, that's where I would recommend go beyond the marketing, go beyond the recommendation, really understand what you're actually bringing into the environment and whether or not you actually need all of these different tools, to just do different parts of the tech stack. So that's definitely how I would think about SaaS as, you know, mitigating vendor risk and then make sure you actually need all of those products before you pay for them.

And then on the the kind of pure cloud side. Right? So, when we think about infrastructure, you know, platform as a service, whatever it may be, that's where we see a lot of problems, especially as it becomes, clear that just default configurations are not going away. And we see holes all that we call them holes in the bucket almost, right, where it's just anything that you can see, default passwords. Sometimes there's almost just anonymous accounts, service accounts, anything that you can see, where somebody hasn't gone in and actually changed a setting, those are absolutely ripe for attackers to take advantage of.

Yeah. And it's the same too when it comes to access controls. I find that access in the cloud, people just assume they can set it, forget it, don't need to go back and change it. And that's where, again, number one, the default configuration is gonna be a problem. But number two, it's so easy, especially, for some of the large cloud platforms for an attacker just to get in, you know, hide within, those different services, especially as it's related to account access.

So I'll kinda pause there. I realize it's a lot of different pieces. Right? But I know. It's it's basically those defaults are absolutely killer, and we see it over and over.

Yeah. I mean, this question of default, I think, is, again, something that, like, to your point, a lot of customers don't really understand. And and, you know, if they're making that move, especially if they're making the move for the first time, they don't really have an understanding of what the shared responsibility model actually is and where those lines are drawn for different types of services, SaaS, PaaS, infrastructure as a service, and, you know, who who's responsible for what. And candidly, I think the cloud providers haven't done I think they've optimized for ease of adoption and ease of use, and also even for ease of data access, as opposed to optimizing for security. And so to your point, some of those defaults are like really quick and easy for a single developer to spin a service up and upload some data and get access to that data and start playing with it, experimenting with it, etcetera.

But that also might mean that it's pretty easy for that individual developer to just accept the defaults that might leave that data exposed. Right? And the vast majority of breaches that we've seen on cloud platforms are not the result of, let's say, like, malicious action. They're the results of inadvertent exposures. Right?

And that's been consistently true since 2015 when I started working on cloud security. Right? So, like, I I totally get what you're saying there. There's one other point that I would certainly, raise in this regard, which is, like, you kind of have to understand that, you know, as we record here in 2025, cloud's been around for a good fifteen plus years. Right?

Threat actors understand how to use this. I always tell people, like, whatever tools you have access to, they have access to as well. They might use stolen credit cards to get them, but they have access to the cloud. They have automation. They have, you know actually, they they probably run copies of all of the leading products across most product categories and most security product categories to try to figure out how to evade them or go around them, etcetera.

So they know how to live off the land in a cloud environment. They know how to use IAM roles to transit from an e c two instance to an s three bucket to an RDS instance, etcetera. So, I totally appreciate what you're saying there. Well, we've gotten almost twenty minutes into the conversation without talking about AI, so I think it it is time. Otherwise, we're running close to the, statutory limits of bringing it up in a conversation.

So I know part of your title is director of cyber and AI risk management. So, you know, without trying to set the broadest question that could take hours and hours to get through, I guess it as concisely as you can, what are the risks that you see organizations kind of being worried about with regards to AI adoption right now? It's a great question. And it's actually interesting because I might almost reframe the question, which is that Okay. We're actually not seeing organizations worry about AI, and that's the most concerning thing that we are seeing in the marketplace.

Right? Is is basically, I think the biggest risk that I see is just complacency, right, where folks are like, oh, yeah. Let's push for AI. Let's bring it on. This is great.

Obviously, everyone's talking about the productivity gains and and obviously just a number of things. And we can't deny that. I mean, obviously, AI is fantastic for a number of purposes, and certainly don't wanna downplay that. But what we're not seeing is that concerted effort to actually be thoughtful about how you govern AI, to be thoughtful even about how you're bringing it on. Oftentimes, we're seeing, hey.

Business leader in in this business unit is bringing on AI for this, and business unit, you know, leader in unit number two is bringing on for this. And so it's this really interesting it's an interesting dynamic, right, where nobody's really thinking about risk. I think people are starting to be like, okay. Yeah. Vulnerable.

You know, chat GPT went down. Okay. That wasn't great. There's there's some, you know, kind of scary headlines, but no one is really looking at it from just from what we would even say is kind of the fundamentals, right, of of risk management where it's, you know, hey. You're very thoughtful.

You have an AI specific asset inventory. You do, you know, for example, like the third party review. You you treat them like a vendor before you even onboard them. So so it's really those basic pieces that are missing. And so to your point, it's it's actually really interesting because I think right now, a lot of my role is just educating folks.

Right? Where they're, you know, hey. Or we should be worried about, you know, you've seen it like all of these different headlines, right, where it's, somebody joins a voice call and and they don't realize it's not actually their boss, and so they authorize some kind of transfer. Those situations, obviously, still relatively few and far between, still very scary and legitimate. Yeah.

But I think the bigger risks, right, are almost if you don't know what's in your environment, you can't protect it. And you also can't train people appropriately to use it in responsible ways. And so that that's where we're really seeing a lot of the initial risk is just what's actually being hallucinated and incorporated into work product. Or, a bigger thing that I'm seeing too is just even negligent insider threat, right, where maybe people are using these tools but not sure, what data they should upload or how they should incorporate whatever they're getting out of it. So that's kind of what I'm seeing with risk right now.

So it's it's actually not companies who are driving the conversation. I think it's folks like myself who are like, this is great, but we need to pause or else things will be a lot riskier going forward. I mean, I, you know, a little bit of a shameless plug for us over here at Firetail, but, you know, this is an area that we have some some really interesting capabilities around in terms of AI discovery and building up an asset inventory to to what you just said. And to your point, I think, like, the thing that we're hearing is very much in line with what you're saying, which is that the the the customers that we're talking to about this stuff, it's usually because they feel like the horse is out of the barn. AI adoption has already happened.

And they've, at least to their credit, started to understand that, like, there could be some risk around this that is really impactful to the organization, whether that is in the form of, let's say, like, a regulated dataset being uploaded somewhere where it you know, where the customer hasn't been informed that you might be sharing it with this provider or that provider and you're violating something like GDPR or you're uploading it to a vendor who doesn't actually sign up to, let's say, HIPAA regulations and, you know, you shouldn't be uploading health care data to that LLM provider or what whatever the case may be. But it is very much along the lines of, like, we just need an inventory. We just need to know what's going on inside the organization because exactly as you said, this, you know, app owner or this business leader, they've all decided, like, hey. There's great productivity and, efficiency gains to be had. Let's go.

And, without, you know, pausing to think about, like, what might be the impact here. So it's great to hear that, you know, you're kind of seeing it the same way. When you give that message to people, what's their reaction right now? Yeah. I think it's one of two things.

One is the folks who really understand the risk once it's explained to them, they just feel overwhelmed, frankly. Right? Because to your point, I mean, building an asset inventory when you haven't been structured and you don't know how to do the discovery, that's either gonna be a really miserable process or obviously, you know, you you kind of have to think about, okay. Do we bring in a vendor to do this? How can we do this using maybe some tools we already have?

So I think that process is is very overwhelming. And then the second challenge for those folks is typically then they realize, oh, no. You know, this is two or three levels typically above me. How do I actually, you know, kind of bring the organization along, make sure that we're actually getting that type of governance in place? So that's one.

I think the other reaction that we're getting is is what folks who maybe understand, but for whatever reason, I mean, maybe business priorities, maybe just the way that their organization is run. They don't necessarily actually still perceive it as a problem. And, frankly, that's something I'm sure you see this in cyber all the time. Right? Peep people know.

I mean, cybersecurity, it's ubiquitous. It's been in headlines for, you know, fifteen plus years at this point for for some of the major breaches. And even so, a lot of organizations just don't invest in cybersecurity. So why would they invest even more to secure AI? You know, it's kind of that chicken and egg problem for them.

So, fortunately or unfortunately, it kind of goes back to I'm seeing a lot of parallels just to the same conversations I have about cybersecurity in general, where if you care about it and you get it, you wanna make change, but you're probably constrained in some way. And then if you don't Yeah. Necessarily you know, maybe you understand, but you're not, you know, gonna prioritize it for any number of reasons, then it is what it is. Yeah. Just as just as you said, like, to be it feels eerily similar to cloud ten years ago It does.

When orgs were like, oh, we've gotta move to the cloud. We can move so much faster. We can get things done cheaper, better, blah blah blah blah blah, all the things. Right? And I'm what I'm seeing, what I'm hearing, the conversations that we're having with organizations, it is so so reminiscent of what we saw in that kind of mid twenty tens time frame with regards to cloud.

Definitely. Yeah. Well, I've only got a couple of final questions to kind of wrap up today's conversation. First one, what is the Sac or a c s a c conference? That like I said, that's not what I'm familiar with.

No. No worries. It's actually more of an academic conference. So, yeah. It's it's really interesting.

Basically, they bring together cybersecurity researchers. There's actually an only a couple of us from the vendor side at this previous one, but really, really interesting from an academic perspective, things like, you know, new cryptographic algorithms, things like that. So, definitely for any folks who are looking for kind of cutting edge academic cybersecurity, it's it's a really fascinating conference with some really interesting research. Awesome. Awesome.

And if people are looking for more information around that, it looks like it's just a csac.org. Mhmm. And when it looks like it's in Hawaii. So I mean, come on. There's a black place.

Secret. Yeah. Exactly. Awesome. Awesome.

Well, I guess, like, last thing that I wanna kinda wrap up today's conversation with sorry. Two questions I wanna wrap up today's conversation with. Number one is you mentioned at the beginning that you guys, that at NuVic, you guys do pen testing and and offensive stuff. When you do that kind of stuff, do you find that there's any difference in how leaders on the customer side interpret those results as opposed to other risk assessment exercises that you do? Or is it really just, like, you know, the offensive forms part of the overall risk assessment?

Yeah. It's a great question. So, to to answer the second part of your question, we do bifurcate between pure risk assessment, right, which is typically slightly less technical, and then obviously the pen test to the red team where it's deeply, deeply technical, you know, hands on keyboard, obviously, emulating different attackers using tactics, techniques, procedures, all of those good things. But I would say we do the translation whether it's a nontechnical risk assessment or a pen test or red team. It's really fun for my team because obviously then we get to see behind the scenes of, you know, what's it look like to actually craft the phishing context before you send a phishing email or, you know, all of those different pieces.

And so I think there's a couple of things that are interesting to me from the offensive side. One is we often see, because of the way that we do the translation that a lot of ex executive leaders actually realize it's not a bad thing to get the list of findings. You know, sometimes people, especially when it comes to offensive testing and if you're trying to, you know, check the box, whatever it may be, you're looking for your team to get a gold star, and so you want as few vulnerabilities, as few findings as possible. It's interesting when when you incorporate risk, right, that flips the script because they're like, oh, no. Actually, we really wanted to know this because now we know what our risk is and we can go prioritize it.

Or, you know, maybe we can divert some part of our cyber budget to actually go do whatever the high priority finding might be. So I think that's interesting. And then it's also I think a lot of executive leaders just don't realize how much is actually out there for the grabbing. And so sometimes just seeing that list of here's a very basic, almost informational findings is is sometimes interesting for them where it's like, oh, okay. Like, good to know.

Good to know. Like, you know, even though we have maybe, you know, they've invested in a lot of tools, but that's not actually moving the needle. So it helps with those types of conversations I find, more often than not. So Yeah. Yeah.

Cool. Cool. Awesome. Last question. On the topic of AI, let's say the next one to three year time frame, you've already talked about a little bit of the risks that you're seeing right now, which is, I would say, generally around, like, lack of awareness and lack of visibility as far as what's already happening.

What are some of the things that are on your radar that you think could come up in the next one to three year time frame as far as, like, big new meaningful risks around this? Yeah. It's a great question. Honestly, I would say we're already starting to see some of this, although mostly in almost research context as opposed to a big headline, but I suspect probably not even within the next three years, I think probably within the next six, seven months. We're we're just gonna see probably some major attack, I would imagine, either that leverages some type of prompt injection or memory injection.

And frankly, I find the memory injection attacks injection or memory injection. And, frankly, I find the memory injection attacks to be the scariest. And and for folks listening who maybe don't know what that is, that's actually where you're changing the way that the model interprets its own data. Right? So not just changing the inputs or or kind of telling it, hey.

You know, treat me like a pen tester if I'm actually a malicious actor. But you're actually changing the way the underlying model works. And so the reason that's so scary to me is is actually to use your cloud example and and some of the parallels we're seeing. Folks just aren't even folks who are doing pen testing or leveraging AI in in their security systems, they're not thinking about the fact that someone can actually change that fundamental training, that fundamental way that the algorithms are actually working. And so no matter how many times you test it, no matter how many, you know, security controls you put on top, if someone's fundamentally changing the way the thing works, you're not actually able to protect yourself against it.

And and given AI is also pretty nondeterministic. Right? You you you can kind of prompt it and get different results every time. Yeah. Those things I think are what we're gonna start seeing, and I think that's when people will immediately say, oh oh, no.

Like, we we need to to, you know, claw back, I think, quite tightly after that. So Yeah. It's funny. You know, there's been a couple things. I mean, we saw our first couple of examples of real world indirect prompt injection in the last couple of weeks, and, we put out a blog post around that recently.

I think as I think it might have gone up yesterday. And it's to your point. It's it's, you know, it's subtle things that seem subtle on the surface. They're not as immediately, screamingly obvious as a piece of, let's say, like, malicious code. It's literally just some human language embedded in a larger context of human language that kind of leads the models to behave differently.

And we've been noticing this in some of our own testing and some of our own LLM usage that the way you ask a question or, let's say, the constraints and the qualifiers that you put on it in terms of what is an acceptable response can very, very, very, very much affect the the the outcome. And it's it's really interesting. So I think there will be a lot of evolution on this side in the next, yeah, I mean, as as early as not like three years, but like three weeks, three months. Right? Yes.

Very much so. Awesome. Awesome. Well, Celina, thank you so much for taking the time to join us today on Modern Cyber. For people who are looking to learn more about yourself or about the organization, it's just Neuvik.com.

Right? Correct. Yep. Neuvik.com and then Neuvik on every social media channel as well. Awesome.

And that's Neuvik spelled n e u v I k. And we'll have that linked from the show notes as well. Celina Stewart, director of cyber and AI risk management at Neuvik, thank you so much for taking the time to join us today on Modern Cyber. Thank you so much. Really appreciate you having me.

Awesome. Awesome. Tune in next time. We'll talk to you soon. Bye bye.

Protect your AI Innovation

See how FireTail can help you to discover AI & shadow AI use, analyze what data is being sent out and check for data leaks & compliance. Request a demo today.