Modern Cyber with Jeremy Snyder - Episode 100

Joseph Carson of Segura at RSAC 2026

In this annual recap from the sidelines of RSAC 2026, Jeremy is joined by Joseph Carson, Chief Security Evangelist at Segura. They discuss a conference floor that felt more like an AI event than a cybersecurity one, exploring the convergence of agentic AI and identity security.


Podcast Transcript

All right. Welcome back to another episode of Modern Cyber, our annual RSAC recap episode. And if the background looks familiar, that's because it should be. We've recorded in almost exactly the same spot for the last three years running. But if the guest looks different, that is real. Mikko, who has done our recaps for the last couple of years, didn't make it to RSAC this year, so we had to replace him with somebody who lives in a very nearby country, Estonia. Joseph Carson, thank you so much for joining us on Modern Cyber.

Thank you. It's a pleasure to be here. And Mikko, you are awesome. I mean, it's such an honor to be filling your shoes, because those are big shoes to fill.

Big shoes to fill indeed, indeed. So let's get into it, a couple of things. I want to go over a recap of the conference. And you know, I call this one of the two or three biggest cyber events of the year. It is global in scope. People come in from everywhere; I think I met people from at least four or five continents.

Yeah, yeah. So, you know, people come in from everywhere to learn, to network, all that good stuff. So any high-level first observations, big thematic things that stuck out to you this year?

Absolutely. I mean, one of the things is I love the theme this year: the RSAC community. The RSAC conference is all about community. Yeah. And I think that's definitely one of the things: we all need to come together. We need to share our knowledge, we need to learn from each other, so that we can accelerate and make the world a safer place. So the theme is amazing. Yeah. Um, big observation: it felt more like an AI conference than a cybersecurity conference.

Yeah. So walking the trade floor, looking at the messages, AI was everywhere. Yeah. Agentic AI was one of the biggest main themes here, in a lot of the sessions, a lot of the talks. Yeah, it was covered a lot in the keynotes. Yep. Another big area was identities and non-human identities. Yep. Another massive topic. Um, and sometimes it felt like the problem people were solving got a bit lost in all of that. Yeah. But I felt like it was such a great conference. Uh, I was lucky to attend some great sessions. I got to attend Rob Joyce's session, which was fantastic. I went to Paul Simmons's identity 101, and I really enjoyed sitting back; it was kind of an update on the old sessions he did years ago, adding a lot about agentic AI as well. So I really enjoyed that. Yeah. And then the Microsoft keynote I also really enjoyed. So it was awesome.

So let's come back to those three in a second, because I want to double-click on a couple of the things you said at the beginning. I agree with you. It felt very much like an AI conference with a side order of cybersecurity. Absolutely. And to your point about the message getting lost in there: there's cyber for AI, AI in cyber. I literally don't think I saw a single booth that didn't have AI in it in some way, whether printed on it or whatever. Similarly, every session that I looked at in the catalog, and I didn't make it to many myself, had an AI component to it as well.

I do wonder whether there is a risk in the industry as a whole that we overindex on this so heavily that we forget some of our cyber first-principles basics. Like, just remember: micro-segmentation is still a good idea. Least privilege is still a good idea, whether it is for AI and non-human identity or for humans and regular networks. So I worry a little bit that the noise and the interest levels and the hype around AI bury some of the other core messages that are great reminders.

Absolutely. I mean, we have to do the fundamentals, and we have to do them really well. And that was actually something that Rob Joyce mentioned during his session. He said we have to do the basics really, really well. Yeah, AI is great because it can help you move faster. But we have to get the basic things right, the basic hygiene: make sure we have the right policies in place, understand the risk. Because putting a lot of tools in place will not by itself keep you protected; it will help you move faster. So it's really important to understand where to go. And in the end, I do believe that AI is a good thing, but we have to remember: we need it to be successful. Yeah.

You know, we can't deny that for cybersecurity to move at pace and stay ahead of the cyber criminals, stay ahead of the attackers, we need AI. We're so under-resourced, we need to make sure that we're actually moving at a pace that stays ahead. Yeah. And AI helps us get there. I always say it's like the mushroom in Super Mario Kart. It helps us move fast. It's like fuel to the fire. It helps us accelerate. Yeah. So for me, we need it, but it also needs us. Yeah. Because we need to put the right guardrails in place. We need to secure it. We need to protect it. Yes. We need to make sure it has the principle of least privilege, zero trust, security by design and by default. So maybe this conference is a really big indicator that these two areas, AI and cybersecurity, are converging, because they both need each other to be successful.

Yeah, I agree with that one hundred percent. And Mikko and I have talked about this on the show before. One of the big questions that I've gotten asked, and he as well, and I'm sure you have too, is: okay, great, now there are all these AI capabilities. Who has the upper hand as a result, attackers or defenders? And Mikko has made the argument that whoever masters it first and deploys it more effectively first, almost by definition, has an advantage in that realm. And I kind of get it, and I kind of agree with it.

You know, we do a sub-series here on Modern Cyber, This Week in AI Security, kind of ten-to-fifteen-minute weekly episodes where we cover stories from the last week. And one of the stories that we covered just this week was around the FBI producing evidence about three known criminal actors, Qilin, Scattered Spider, and I can't remember who the third one was, where they have actual forensic data to show that AI has absolutely been used in these attacks, whether it was in generating the phishing emails or in probing an external attack surface or whatever the case may be. So gaining that upper hand is critical.

Coming back to your point about that identity 101 talk: whether human or non-human, there are a couple of areas of identity that have always been a challenge. In my experience, in my twenty-five-year career in cybersecurity, we've never gotten that much better at vulnerability management, reducing mean time to patch, and things like that. And least privilege is one of those things where I feel like we've been talking about it since I was a Windows NT 4 admin in the late nineties. Back then it was as simple as: make sure that Jeremy's not in any NT domain groups that he shouldn't be in, because that's how we assign permissions to folders on our file server. And it feels like it's still the same thing. So was there any insight from that session on how to think about managing identity better, or how to make better or faster progress towards least privilege?

Absolutely. One of the big things in this area is, well, first of all, agents have identities as well, right? So there's a big question: should it be a delegation of the human identity to the agent? And I'm firmly of the view that it should not, because my agent should not be using my credentials. Right? It should be using ephemeral keys. It should be just in time. It should be temporary. Yeah. Because ultimately, what makes me different from the agent is that there are usually contracts, there are legal agreements. And if I use my authentication and authorization for a signature, then there's an implication of accountability to me, right? Do I want to give my agent accountability to work on my behalf? Yeah. For certain tasks, yes. But I don't want it to represent me fully. Yeah. And that's a big separation.

So ultimately it has its own identity. The other side of things is that we have to make sure it gets, just in time, the access it needs just to do the task. So when we look at agentic AI, we don't want these massive agents that have access to everything, right? It needs to be micro agents that are task-based, doing very specific, focused things. Right? Right. And I think a lot of the revelations from the FBI about Scattered Spider and so forth show that when you combine AI with humans, that's very powerful. When you use AI by itself, it's not quite there yet. It needs human oversight. It needs human interaction. Yeah.
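The just-in-time, task-scoped credential pattern Joseph describes can be sketched in a few lines. This is an illustrative sketch only, not any specific product's API; the broker class, the scope names, and the time-to-live value are all hypothetical.

```python
import secrets
import time

class CredentialBroker:
    """Hypothetical in-memory broker that mints ephemeral, task-scoped
    tokens for a micro agent instead of delegating human credentials."""

    def __init__(self):
        self._grants = {}  # token -> (allowed scopes, expiry timestamp)

    def issue(self, agent_id, scopes, ttl_seconds=300):
        """Mint a short-lived token limited to the scopes this one task needs."""
        token = secrets.token_urlsafe(32)
        self._grants[token] = (frozenset(scopes), time.time() + ttl_seconds)
        return token

    def authorize(self, token, scope):
        """Allow the action only if the token grants this scope and is unexpired."""
        grant = self._grants.get(token)
        if grant is None:
            return False
        scopes, expiry = grant
        if time.time() > expiry:
            del self._grants[token]  # expired: revoke eagerly
            return False
        return scope in scopes

broker = CredentialBroker()
# The agent gets only what this one task needs, for five minutes.
token = broker.issue("invoice-agent", {"crm:read"}, ttl_seconds=300)
print(broker.authorize(token, "crm:read"))    # scoped access allowed
print(broker.authorize(token, "crm:delete"))  # anything outside the grant denied
```

The key design point, matching the conversation: the token carries its own narrow identity and expiry, so accountability and blast radius stay separate from the human who launched the agent.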

And that's one of the things I'm finding. We do this competition in Tallinn every year called Cyber Spike. Right? It's kids doing capture the flag. Yeah. And one of the things was that those who had really good skills and techniques did really well, but the team who won actually used AI with their skills. They may not have had the best skills, but when you combine them with AI, they moved much faster. Yeah, yeah. And that's the thing; it's like a Formula One race. When you combine the right technology with the right people and the right skills, you move a little bit faster, and in CTFs, speed wins. Speed is what's essential there. And I think that's what's important: when we combine these really amazing capabilities and technologies with humans, we can do great things. So we shouldn't look at it as AI replacing humans. We will see AI versus AI, algorithms competing. But when we combine it with human knowledge and human skills, yeah, I think that's when great things happen.

I think that's what I'm starting to see at this conference. We were scared: is AI going to replace us? Yeah. But I think the reality we're getting to is that no, it's complementing us. It's allowing us to do the things that we enjoy doing much faster.

It's funny that you say that, because I think there is a lot of temptation to replace humans. Yes. And not necessarily for job cuts, although we are seeing some; I think there's a little bit of AI-washing in some of the layoff rounds that are couched in the language of "we're doing this for AI efficiency," what have you. But there is an interesting thing. Another story from this week is that Air Canada was just found liable for a mistake made by a customer support AI chatbot. I can't remember the exact circumstances, whether it gave a discount that it shouldn't have or didn't give one that it should have.

I remember this, yes.

And the finding was basically: it doesn't matter whether you attribute it to Jeremy or Joseph or whoever wrote the agent, the responsibility and the accountability roll up to the organization. So I think that's something to keep in mind. Now, I one hundred percent agree that for the agent itself, when you think about the abilities it has or the data it has access to, yes: time-bound, scope-bound, just in time. All of that is good. But remember that that identity is separate from corporate accountability, which I think is an important point to keep in mind. So, other sessions. Let's talk about the Microsoft keynote for a second. What was your biggest takeaway from that?

Absolutely. One of the biggest takeaways is that AI is here. It's here to stay. There's no going back. Um, one of the key things was around zero trust, which was a big topic in that keynote also. Identity was a massive item. Um, and it was exciting. For me, it really shows the power of what's possible, and the direction we're going. And that was really exciting. But again, we must approach it with responsibility and accountability. And we must approach it by de-risking it where we possibly can.

You know, there's also a bit of a split between North America and the EU, between regulation and non-regulation. Yeah. And that's my scary part: without regulation, it's a bit of a free-for-all. Yeah. And if you don't put the protection in place, it's hard to go back from that. It becomes almost like a GDPR nightmare for many organizations. If we don't do this safely, then eventually, at some point, I think agentic AI agents will be the next insider threat and the next big risk. Yeah. And that's scary. I do expect that this year we might see a data breach, just like the Air Canada case, that might actually be attributed back to an agentic AI agent.

Yeah. Unfortunately, for a lot of organizations, when we look at attribution and root cause, sometimes they want to find a human at fault, because sometimes that's what actually pays out the insurance coverage.

Well, I mean, from a legal perspective, there is no precedent or case law other than this Air Canada case. And that didn't find a human or an agent at fault; it found the organization at fault, or at least liable. But please continue.

Yeah. So I do think sometimes, if you find a process or an agent at fault, that's a technology failure and a process failure, right? And the insurance will be like, they won't pay it out. Yeah. But if you find a human at fault, or the organization in general, it's much easier. Yeah. Then that's where you tend to get the coverage. Yeah.

It's interesting, though, this question of identity, of least privilege, zero trust, and the things that go around them. Part of me says we absolutely, one hundred percent need those things. We've needed them, again, for the twenty-plus years that I've been working in cybersecurity. But I also feel like that's a little bit of a future-state problem, except for the leading-edge adopters. If you think about that bell curve of early adopters, laggards, etc., where the big middle of the curve is right now is: I don't even know what AI usage is going on, and I lack basic visibility, inventory, and monitoring of it. Does that resonate with you and what you've seen with different organizations that you've interacted with?

Absolutely. I mean, we do have this big element of shadow AI. Organizations don't have full visibility of how much their employees are actually really using it. Yeah, yeah. And I think we need, first of all, a reset: let's get the visibility, let's do the discovery. Let's get an understanding of how AI is really being used, both by the organization in what they are aware of, and also the shadow AI that they may not have visibility into. Yeah. What their employees are doing without getting approval in advance or having a monitoring program in place.

Absolutely. Yeah, absolutely. I've seen some statistics this week indicating that about fifty percent of employees are using AI without the oversight or visibility of the organization, which means potential data loss, potential security issues. That's right. Potential vulnerabilities, uh, potential credential compromise as well. Yeah. And without that visibility, we need to get down to the basics again, which is discovery, which is asset management. Yeah. And it's also about making sure that we're able to put the right security controls in place. Yeah. So that we can move faster, because that's what security does: it helps us move faster in a safe way.
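The discovery step described here often starts with something as simple as mining egress or proxy logs for traffic to known AI services. A minimal sketch, assuming a flat log of (user, destination domain) pairs; the domain list, log format, and sanctioned set are all illustrative assumptions, not a real inventory tool.

```python
from collections import Counter

# Known AI service domains to flag -- an illustrative, incomplete list.
AI_DOMAINS = {"chat.openai.com", "api.openai.com", "claude.ai", "gemini.google.com"}

# Hypothetical proxy log entries: (user, destination domain).
proxy_log = [
    ("alice", "claude.ai"),
    ("bob", "intranet.example.com"),
    ("alice", "api.openai.com"),
    ("carol", "chat.openai.com"),
]

def discover_shadow_ai(log, sanctioned=frozenset()):
    """Count per-user requests to AI services outside the sanctioned set."""
    usage = Counter()
    for user, domain in log:
        if domain in AI_DOMAINS and domain not in sanctioned:
            usage[(user, domain)] += 1
    return usage

# Suppose only the official API endpoint has been approved:
report = discover_shadow_ai(proxy_log, sanctioned={"api.openai.com"})
for (user, domain), hits in sorted(report.items()):
    print(f"{user} -> {domain}: {hits} request(s)")
```

The output of a pass like this is exactly the "discovery and asset management" baseline mentioned above: who is using what, so that risk-informed policy can follow rather than precede the facts.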

This is one of those things where I feel like there's a chicken-and-egg argument that very often comes up. On the one hand, you and I could sit down, have a conversation, debate the relative merits of each potential service, and then decide, and kind of create a gated system where we define a governance policy in advance of the things that we are going to allow, and then our employees opt into it, or at least are constrained by it. I think that's a very idealistic approach, but it also has a risk in my mind, a little bit to the point of: AI is here to stay. And there's this FOMO element that if you're not adopting it, your competitors sure as heck are, and they're going to gain an advantage over you.

But my point on this is that we could take that approach, where we build all our governance policies in advance, or we could take the approach that a lot of city planners take when they make a park and want to decide where to put the paths through it. You either plan the paths in advance, or you put the park out, see where people walk, and then put the paths in wherever the grass gets worn down. And I think I'm seeing the latter as an approach from a lot of organizations as well. They finally realize, okay, there's a lot of shadow AI usage happening. Let's actually discover it, see what of it is good or bad and where risky behaviors have been observed, and then we'll build risk-informed policies that become our overall governance.

Absolutely. Two differing approaches. I don't know if you have any particular thoughts on that.

So, I mean, one of the things I was fortunate enough to be part of is the EU AI Act; I was actually working on it many years ago. I think we discussed it on a previous episode. And what I found is that, absolutely, I would like organizations to really take an understanding of risk across the organization. There are processes and things that impact human life and really critical decisions. For those types of things, let's take a slower approach to using AI, because the impact is irreversible. Yeah. And for other things that are less risky, yeah, absolutely, let's adopt, let's embrace, let's move forward with AI.

So I think it's really a risk-based approach, a bit of a balance. If that park was in a dangerous area, then of course you want to plan it out. You'd probably put fences around the park itself, maybe have the gates only open at certain hours of the day, maybe even have people who watch who comes in and out of the park. Absolutely. So for me, it's about getting the balance right. Yeah. And I think it's a top-down approach. The organization, the executives, really need to have a strong AI policy. They have to embed that and merge it into the existing security policy. Yeah. So for me, absolutely, it's about finding that balance.

Yeah. And you know, for me, being part of that EU AI Act and looking at that risk-based approach, I've had the question before: what am I comfortable with AI doing, and what am I not comfortable with it doing? Yeah. Anything that's math-based, it's awesome at; let it do those things. Yeah. Probability-based tasks, where the outcome is best effort rather than a critical decision, it's absolutely great at. But when it comes to life-and-death situations and scenarios, yeah, I'm a bit more cautious: let's use it to help us, but let's have human oversight.

It's really interesting, because this AI Act is something that we help some of our customers deal with and figure out: do you have use cases that are either in violation or potentially in violation? And they very often can't make that determination unless they a) see all the usage that's happening and b) see all the planned usage that's coming down the pipe, through things like code commits, where AI is being built into applications and future agents that they want to run. And it leads me to think that there's potentially an adoption-pattern difference that we're going to see between North America and Europe, where, to your point, some of these organizations say, okay, the high-risk use cases we need to be aware of, we need to be cautious about. And the regulators will open up access as we get better and better at, I don't know, let's say nondeterministically enforcing ethical guardrails, and maybe at not having potential for bias and discrimination and some of these things.

Because if I remember right, it's mostly health, finance, and public services, where decision-making potentially in the hands of an AI system makes for kind of a prohibited use case.

As I understand it, right now it's not that it's outright prohibited. What it is, is that you have to take a risk-based approach, and where it impacts human life, if there's a high risk to it, then you have to basically follow the regulation and put the right controls in place, the right mitigations. Okay. So it's about the impact. Ultimately, it's a risk association to the rights of citizens. And if that risk increases, then you have to make sure that you proceed very cautiously. Yeah. Um, and make sure that you minimize it where possible. And to that point, one of the analogies that always comes to my mind is learning to ride a bicycle. It's one of those skills you learn once and then you get on. If, when you learn to ride a bicycle, you understand that there are risks, it changes the way you learn. If you learn that riding in a careless, reckless manner means I'm going to fall, I'm going to get hurt, I could break an arm, scrape a knee, whatever it is, that informs how I learn to ride the bicycle.

And so these companies in the EU have to think about how to tread the waters of those high-risk use cases, as opposed to an organization in, as of yesterday, effectively forty-nine out of fifty states where there's no regulation around this. You can crash your bicycle and skin your knee, but it doesn't hurt you. You can break your arm, but it doesn't hurt you. You just get up and go again. And by the way, there's also no punishment and no organizational accountability. Colorado is the first state to now restrict high-risk use cases, which is not well defined, but high-risk use cases. Fair enough. I mean, we'll see how that plays out. Yeah. I wonder how the differences in organizational maturity and learnings around AI adoption are going to shape up as a result of these differing standards?

No, I completely agree. You know, I use an analogy: I'm a big snowboarder as well, I love snowboarding. And when I'm snowboarding, what I try to do is minimize the risk of accidents where possible. Yeah. I wear a helmet. Yeah, I'll wear knee pads. Yeah, I'll have wrist guards as well. Yeah. And for me, it's also being aware of your surroundings, the situation. Yeah. So yes, you can do things, and with all of those protections, if I fall, it's still going to hurt, and it could potentially be dangerous. But putting the right protections in place means that when I do fall, I'm minimizing the damage and I can get back up and continue. Yeah, yeah, yeah. Versus, without those protections, a fall might end up in an emergency room. Yeah.

And for me personally, I've learned that whenever I feel like I want that one more run, don't take it. Yes. Your body is so tired on that last run of the day that your risk of falling goes way up. I've made this mistake enough times on my snowboard that I just don't do it anymore. You know, if I've been out since ten in the morning and it's five, five-thirty, six PM, that's actually my limit for the day. And a lot of organizations don't know what their limits are, because I think that depends on the organizational maturity level and some of the other things that feed into it: do you have good data safeguards, do you have good permission safeguards, etc.?

All right, I want to move off this topic, because we could probably talk about just this all day, and we've got a couple of other things I want to get to, including your talk. Yes. Share with the audience what your talk was about, how the reception was, and whether there were any particularly insightful questions that came out of it.

Absolutely. It was actually one of my favorite talks to do. It's called From Cyber War to Digital Nation, and it's the playbook for resiliency. Yeah. It's an updated talk where basically I take everyone through, from 1991 to today, all of the incidents, all of the innovations, and all the digitalization that the country of Estonia has had. Yeah. And I really focus on the last couple of years, on how cyber attacks have changed and evolved once you've built really good resiliency into your information systems and protections, and into your digital identity, with really strong authentication and authorization. So we've really done a lot around resiliency. And also, by using embassy locations as data embassies, you actually decentralize the country. So there are lots of protections in place.

By the way, just on that: that is such a brilliant strategy, because, as a lot of people will probably know, embassies are sovereign territory, right?

Yes. Within each of those locations.

So you have the data sovereignty and the data residency requirements needed to protect that. And I mean, that is super resilient. Think about the diplomatic relations, and the distributed storage from a disaster and business-continuity perspective, right? You could reboot the whole country off of any surviving embassy. I don't know what the actual, let's say, data sharding is. But anyway, please continue. Sorry to interrupt.

No, no, absolutely. That's exactly it. And I think sometimes even today, there's a bit of confusion between data sovereignty and data resiliency. What most countries want is the resiliency, but sometimes the terminology they're using is sovereignty. Yeah. It sometimes feels like a bit of protectionism, but what they're really looking for is de-risking the impact to the data. Yeah.

The other part of the talk was really around how the attacks have evolved. Yeah. And one of the things I wanted to highlight is that when you have a certain high level of resiliency, the attackers, those who are trying to put disruption and chaos into society, revert to other means. Sometimes they look for the cheapest means as well, the cheapest delivery. And in recent years, GPS jamming has been something that we've had to deal with across the border of Estonia. You can't fly drones; your navigation systems will not function that well. Yeah. We've had situations where flights landing in one of the cities, Tartu, had to be diverted back because the jamming became so strong that it impacted the airport, and we had to change to radar-based landing. Yeah.

So you start looking at how things evolve. And in recent years, we've also had, in the Baltic Sea, ships dropping their anchors and dragging them along and cutting the cables. Another area where society is basically impacted. Um, first it was the energy cables that were disrupted, and we put resiliency into that; then it was the data cables getting disrupted. So those are the techniques, and we have to look at this reverting back to the physical aspect of things. Yeah. Which ultimately does have an impact on your digitalization as well. Um, and I think this is really where we need to start thinking about how we improve in that area, how we become more resilient and more protected around things that might have been challenging to protect in the past. So that's the situation we've had to deal with in Estonia in recent years.

Yeah, yeah. And I wanted to highlight to the rest of the audience at the conference here: that's what we're facing. Yeah. You know, we do still see ransomware, but those are the things that disrupt the society.

So, I mean, it's really interesting. So many thoughts go through my head as I think about this. One of them is: if Estonia were a business as opposed to a country, and I know it's not your native country, this would be such a case study in business continuity planning and disaster recovery. And cyber resilience has become a thing, I think, over the last five years; before that it was maybe in the back of people's minds. If I think of my history, the twenty-plus years, it was always defense: defend, keep people out of your network. Oh, now I need to keep viruses and malware out. Oh, I need to protect against phishing. Oh, I need to protect against misconfigurations. Oh, I need to protect against data leakage across a number of external providers that I'm using for SaaS, and what have you, and all these kinds of things along the way.

Resilience was never a key concept in a lot of that, even though in the CIA triad, availability is always there. I think everybody overindexed on confidentiality and integrity more than they did on availability. But in fact, this resilience and business continuity and availability has been, I think, one of the key lessons, and if this was a company as opposed to a country, everyone would say, those guys nailed it, right? The other thing I think about is one of the most famous cases of the last, let's say, ten years: Maersk and the ransomware attack that they had. You know, they just got lucky as heck that they had one disconnected AD instance in, if I remember right, either Lagos, Nigeria or Accra, Ghana, and I can't remember which it was; I believe it was Nigeria.

So yeah, so, you know, just one office happened to have an intact, um, you know, copy of the AD that happened to be offline and was, you know, unintentionally air gapped at that point in time. So it didn't get scrambled. And, you know, that turned out to be their disaster recovery, but it was completely coincidental, accidental luck.

That was super, super lucky. And they even had to fly somebody out to get that system, to get the disk and bring it back and actually restore everything. Yeah. I mean, I've seen similar cases as well. Yeah. Um, a ransomware case that I worked on a couple of years ago, very similar situation, where the organization became victim to a ransomware attack. The attackers, you know, they bought access, they went in, they had two weeks of basically reconnaissance, lateral movement, elevating privileges, gaining full access to the Active Directory. Then they basically deployed the ransomware, um, and completely took everything offline, including the backups. Yeah. And the backups were encrypted. So in that situation, that's why I wanted to highlight in my talk, you know, think about resiliency, think about all those scenarios, right?

When the internet's offline, what's your fallback? When your identity provider is, you know, not accessible, your IdP is offline. Yeah. You know, when the power goes out, what is your alternative? How can you keep your business going? Yeah. And I want organizations to think more about those scenarios. Yeah. And think about even, you know, the best and worst case scenarios. And this organization became a victim of ransomware. What ultimately happened? They were offline. Everything was gone. Yeah. Employee history, uh, their financial history, their stock inventory, um, the contracts, what employees were paid. Everything completely gone. Yeah.
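The scenario-by-scenario thinking Joseph describes here is essentially a tabletop-exercise checklist. As a purely illustrative sketch, and not any real tool, here is one way a team might encode those "what's your fallback?" questions so the gaps are explicit; all names and fallback descriptions are hypothetical examples:

```python
# Illustrative only: a tiny tabletop-exercise checklist of the failure
# scenarios discussed above (IdP offline, internet down, power out,
# backups encrypted). Scenario names and fallbacks are hypothetical.

SCENARIOS = {
    "identity_provider_offline": "e.g. tested break-glass local admin accounts",
    "internet_offline": "e.g. out-of-band comms plan, printed contact lists",
    "power_outage": "e.g. generator/UPS runbook for critical systems",
    "backups_encrypted": "e.g. offline, immutable copy verified by restore test",
}

def missing_fallbacks(plan: dict) -> list:
    """Return the scenarios for which the plan has no documented fallback."""
    return [name for name in SCENARIOS if not plan.get(name)]

# A partial plan: one scenario covered, one explicitly unanswered.
plan = {"identity_provider_offline": "break-glass accounts", "power_outage": None}
print(missing_fallbacks(plan))
# → ['internet_offline', 'power_outage', 'backups_encrypted']
```

The point of the exercise is the empty entries: anything the function returns is a scenario the organization is, in Joseph's words, planning to be lucky about.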

And again, similar to the Maersk case, what saved them was when we were actually doing the asset inventory, we found a system and were like, what is this system? They're like, oh, it's, you know, an ERP database that was actually migrated a year before. Oh, okay. I was just like, oh my goodness, so we've got a system. At least it's only one year old. Yeah. And they had done a hardware migration because the software didn't support the old legacy hardware. And we were just fortunate enough that it had actually been in the plans for a long time to be decommissioned. Yeah. And they just never got around to it.

Hadn't been decommissioned, hadn't been decommissioned. Yeah. So we actually used that as a baseline. We used that as the foundation of rebuilding their entire organization.

So that's your recovery point to start from at least.

Exactly. It was better than nothing, better than nothing, one hundred percent. And it meant that, you know, they had to hire something like thirty people, data analysts that did the scraping and recreating. It took two months to rebuild the business. Okay. Um, and luckily enough, you know, they were able to get all the paper receipts, and, you know, scrape some of the hard drives and get some of the data back. But, you know, in that case, they were lucky. They were lucky, as you said. Lucky. Yeah. And this is, you know, you don't want to plan for being lucky. You want to plan for, you know, having strong resiliency, simulating it, practicing it. Yeah. And thinking of all the scenarios.

Yeah. When I was getting my MBA, I had a strategy class, and one of my professors said, just remember that hope is not a strategy. And I would say, luck is not a strategy either. You don't want to plan around either one. One other thing in what you said there I just want to tack on to as we kind of wrap up, we've gone a little bit longer than I intended for today's episode. Hopefully our audience is still with us. But, uh, something that you said in the Estonia case study around, let's say, the anchor dragging and the GPS jamming and so on, you know, all of that is kind of physical infrastructure that keeps the country running. For a lot of companies, that's not how they operate. Right? You know, smaller companies, digital native companies, born in the cloud, whatever. You don't have to think about a lot of the physical infrastructure.

And that, to me, brings it to that shared responsibility model that all of us are familiar with. You know, your cloud vendors, etc., they provide the confidentiality, integrity, and availability of their platforms. They provide the security of the platform, and then you're responsible for your security in the platform. And one of the things that I think about is, most organizations wouldn't take those layers into account in their business continuity or resilience planning. They wouldn't think about, well, what if my actual critical vendor went down? What if I actually lost access to AWS? And just in the past couple of weeks, we've actually seen that be the case, with one AWS region going completely offline due to a drone attack. So I think there's a renewed focus on resiliency. And I think it's probably a little bit overdue. It sucks that it takes these kinds of events to bring it back into our minds. But remember, like, what is the resiliency that you're responsible for? What are the layers of that shared responsibility that you can think about?

All right. So in summary, for me, just to wrap up today's episode, and we'll get your final thoughts to close this out. AI, AI everywhere. Yes, multiple flavors of it: AI that's bringing efficiency to cybersecurity, but then also cybersecurity to allow secure AI adoption within organizations. Non-human identities, least privilege, which hopefully, ideally, we get better at. But also then the responsibility and accountability of the organization, a little bit back to that shared responsibility model. Maybe Joseph messed it up, maybe Jeremy messed it up, but whatever company employs them is on the hook for it. They're responsible. Accountable. Absolutely. Other themes: resiliency, renewed focus on that, which is, again, I think, a little bit overdue.

Anything we missed?

No, I think you've covered it, a great summary of everything. Um, but I think overall, I mean, the networking part, bringing people together, yeah, that has been key. I mean, the only way we can do this successfully is through sharing and collaboration. Yeah. Um, and again, you know, shared responsibility, but not just as an organization, as a community. And that's what allows us to move forward at a quicker pace.

Yeah, absolutely. I do think that information sharing is so critical. I myself haven't participated in a lot of the ISACs, and I've been kind of talking to my team about, like, we should absolutely get in there. One of the funniest things is, you know, I work at a venture-backed startup, and there's such a push for competitive advantage and whatnot. But if all of us who are working on helping organizations secure AI adoption got together, we could probably advance the whole space quicker.

Yes, we could. One of the things I try to do from my own perspective, just as somebody working in this space, is, you know, we do our This Week in AI Security fifteen-minute weekly podcast about all the things, and I try as much as possible not to make it all about us at FireTail. I give credit where credit's due to other researchers who are finding these things, surfacing all of that, sharing all of it. To your point about information sharing, we can all learn so much better. One of the things I always tell my team in our annual things is that all of us together are smarter than any of us individually.

You know, for me, that's what's always helped me in my career, is that I have a certain, you know, knowledge of my area of expertise. But in order for me to move forward and really get lots of insights, it's surrounding myself with amazing people like yourself, like Mikko and others, who really have supported me and helped fill in the knowledge that I don't have. And that's what's key, is that, you know, community. It's a team effort, and all of us working together, uh, we work better, we work more efficiently, and we move faster. So that's what's always fun, is that it's a social, um, kind of involvement, and that's what makes us different.

Awesome. Awesome. Joseph Carson, thank you so much for taking the time to join us to recap RSAC, the RSAC Conference, I'm not sure what the official name of it is anymore, but to recap RSAC twenty twenty-six. And to our Modern Cyber listeners, we will talk to you on next week's episode. Thank you so much. Bye bye.

Thanks.

Protect your AI Innovation

See how FireTail can help you to discover AI & shadow AI use, analyze what data is being sent out and check for data leaks & compliance. Request a demo today.