Modern Cyber with Jeremy Snyder - Episode 67

Joseph Carson of Segura

In this episode of Modern Cyber, Jeremy is joined by Joseph Carson, Chief Security Evangelist and Advisory CISO at Segura.


Podcast Transcript

All right. Welcome back to another episode of Modern Cyber. As always, I am your host, Jeremy, delighted to be with you today. We've got a great guest on the show today. I'm delighted to welcome to the show Joseph Carson, Chief Security Evangelist and Advisory CISO at Segura. With over thirty years of experience in enterprise security, Joseph Carson is an award-winning cybersecurity leader known for shaping resilient identity security strategies. He holds CISSP and OSCP certifications and has been a trusted advisor to governments, critical infrastructure sectors and global enterprises. Today, Joseph focuses on advancing identity security and helping organizations build strong, future-ready cyber defense strategies. Beyond his advisory role, he's also the author of Cybersecurity For Dummies, read by more than fifty thousand professionals worldwide, and is a regular contributor to leading publications like The Wall Street Journal and Dark Reading. Joe, thank you so much for taking the time to join us on Modern Cyber today.

Absolutely. It's a pleasure to be here. I'm really excited about the fun conversation. For me, this is, you know, always the funnest part of my week, talking to awesome people.

Awesome, awesome. Well, I'm flattered, thanks in advance for that. But you know, I want to dive into something that I know you've been spending a lot of time on recently, and that is really the threats posed by AI. There's a lot of hype, there is a lot of speculation, and a lot of kind of, you know, postulation of what could be possible. Oh, attackers could do this. You know, attackers could manipulate LLMs in various ways. But I know you've spent time doing actual research from a, I guess, kind of academic perspective. Talk to us about what's real. Talk to us about what you've uncovered in your research.

Absolutely. One of the things that got me into the research was actually working on a number of incident response cases, okay. And working those response cases really got me to know about the different techniques that attackers are using. And that's one of the things that I ended up looking into and started researching further. So the first thing, in some of the earlier instances, and this is going back maybe about two years ago, was this. You know, I'm based in Estonia. I've been here for twenty two plus years, and one of the things that's protected the society here for many, many years is the complex language. The language is so difficult. Fourteen cases. There's no future tense. There's no gender. Um, so for any attacker who wants to do, you know, opportunistic types of phishing campaigns or social engineering, they tend to really need to have the actual language skills. So for more targeted attacks, that has been something they paid for, you know, they would pay people to do the translations for them in order to make those phishing campaigns look much more authentic.

Um, but then with the introduction of GPT, um, where that language had been a protection for many, many years, the GPT engines really removed that protection. It meant that GPT engines are translating those phishing campaigns almost perfectly, even better than people actually write in everyday life. So definitely the social engineering and phishing campaigns recently have got so advanced that the language is no longer protecting the citizens anymore. And it means that from a country perspective, they really had to change their strategy, because depending on the language was something that they'd done for many years. And now they realize that that protection strategy is no longer valid and no longer working. And they really started having to educate the citizens more actively, more proactively in how to detect, how to avoid, and really get on the path of using multi-factor authentication, really making sure they're aware of where they're using credentials and identities across different platforms. So that's the first place that I started seeing AI being heavily used. Phishing, social engineering.

Yeah. Can we talk about that for a second? Absolutely. Before we move on. Because I've got a couple of questions there that I want to dive in on. And, you know, I'm a native Finnish speaker, and Estonian is a complex language, where to your point, people kind of relied on: if something looked really bad, you know, very often the grammar would be bad. The tenses, the cases, very obvious indicators that, you know, this is either garbage or, at best case, it came from a non-native speaker who is trying to learn the language, whatever. But okay, so you get past that. Now you're faced with basically the challenge of educating the population that you cannot rely on whether it looks bad. Now, you know, you have to actually exercise some level of caution and reasonable thinking in looking at a good-looking email that is written correctly. But that's at the citizen level. You know, we often think about security awareness training inside organizations. How do you do that for a whole country or a whole population? Which, granted, in the case of Estonia is a small population, like Finland, but still a whole country?

It means that you really have to start from an awareness perspective and a government initiative to educate the citizens everywhere. Um, and it really means a very wide program. It starts in schools, it starts in the businesses, it starts in the public and private enterprises. So it really means that you really have to double down on your awareness programs and campaigns across the society. Um, if you don't, what happens is that the impact of that can be huge financial impact for citizens and businesses. Um, so it's in the government's interest to make sure that they minimize the financial impact by investing in the protection of citizens. Um, so it really starts with having a very broad, wide-scale, um, education and awareness program. You know, it's no longer just about trying to determine whether the phishing emails are authentic or not authentic. It's now about the actions that happen once you click on something, about how to minimize it from that perspective. Um, how to make sure you've got the right, um, you know, additional security controls in place, um, that prevent the ability for attackers to either, you know, uh, do financial transactions or steal your personal information. It really means you have to go beyond just phishing protection.

Yeah. And so, you know, government engagement, kind of the equivalent of public service announcements plus training and awareness campaigns sustained over years. Fair to say?

Has to be multiple times a year, over years. Um, I remember when Estonia was going down the path of actually introducing the digital identity for society. Um, and that was an education that happened over a multi-year period. Um, and this is again going to be one of those exercises where it is a multi-year campaign to really, you know, build resiliency into the population.

And Estonia is often cited, to your point about this kind of digital ID, as one of, if not the most digital society in the world. You know, everything is digital, right? And I've got a friend, I shared with you kind of before the show, who moved from Estonia to Ireland and was really surprised by how little of the government services in Ireland were digitized. But it also leads then to, you know, kind of a risk of having a central citizen database that could theoretically be breached. And so, you know, has the Estonian government had to think about multi-factor and identity protection at that kind of national ID level?

Absolutely. I mean, that's one of the great things, is that, you know, the whole digital society and digital identity has been built from a foundation going back to the early nineties, when the country regained its independence, and it was basically developed through the late nineties, where they really thought long term about making sure that they wanted to go down this paperless society route, that there's no paper being used, that everything is digitalized. Um, and ultimately, the purpose of that means that you gain huge efficiencies from basically parallel processes. Um, and in order to de-risk it, there's no central single repository or data lake where everything's kept; they had to split it into multiple data lakes and data repositories. So they decentralized it in a way that means, if an attacker did ever get access to one, they don't have the full visibility of everything. Um, so they did think of de-risking it. They also went down the path of non-repudiation, so that there could be no, uh, you know, let's say change or modification of history or of the audit trail, because the foundation was to make sure that history could not be erased. Because when you're an occupied country, one of the things the occupying state does is change history, education, knowledge and facts, um, and manipulate them. And they wanted to make sure that that cannot happen in the digital era. So they incorporated, really early into the system, blockchain capabilities as well, to make sure that there is non-repudiation in the data. And therefore the government themselves can never change history either, because the blockchain root hash actually gets published once a month in the Financial Times newspaper. So therefore any citizen at any point in time can go back and audit the history of the data repositories.

So anyone can actually check it. Anybody who cares to can transcribe, like, an x-digit alphanumeric string, and that is that hash.

That hash, actually, if you look at the Financial Times paper, you'll see that hash actually printed. It's a root hash. And you can go and take it and you can go and say, I want to check the data, you know, from, you know, three years ago. And you get that hash and validate it, and it will give you an integrity check. Okay. All right. And if you think about that from a manipulation perspective, to go and destroy all the Financial Times papers of that single day in all of the world is an almost impossible task. So they really thought it through. There was an incident in history where it did raise concern, um, which was back in two thousand and seven, when there was a statue that was moved, um, from the city center to a graveyard, and that created a lot of disruption within the Russian-speaking citizens. And then there was a bit of, kind of, you know, um, let's say, uh, violence and disruption. And then we had a massive cyber attack from our noisy neighbor, um, which ultimately made us realize that, because everything had been digitalized by that time, most things, you know, we were doing, you know, online voting, we were doing banking, tax returns, everything. Thousands of services were online. And, uh, the fear was that all the data was residing within the actual Estonian, uh, you know, territory, uh, because it was under sovereign protection. And that raised concerns: if there ever was a land invasion, what you would end up with is you could simply destroy the data. Um, then the protection of the integrity is no longer valid anymore. Uh, so that introduced the concept of data embassies, um, to actually store the data outside the actual physical territory of Estonia, um, and to provide more resiliency in the data. So over the years they have actually been advancing that quite a lot.
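The integrity check Joseph describes, publishing a root hash so that anyone can later verify that archived records were not altered, can be sketched in a few lines. This is a toy illustration, not the production system Estonia runs: the record values and helper names below are invented for the example.

```python
import hashlib

def h(data: bytes) -> bytes:
    """SHA-256 digest of some bytes."""
    return hashlib.sha256(data).digest()

def merkle_root(records: list[bytes]) -> bytes:
    """Fold a list of records into a single root hash (a simple Merkle tree)."""
    level = [h(r) for r in records]
    while len(level) > 1:
        if len(level) % 2:                # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# The operator computes the root over this period's records and publishes it
# somewhere hard to rewrite, say a printed newspaper.
records = [b"record-1", b"record-2", b"record-3"]
published = merkle_root(records).hex()

# Years later, anyone can recompute the root from the archived records.
assert merkle_root(records).hex() == published       # integrity holds

# A single tampered record changes the root, so tampering is detectable.
tampered = [b"record-1", b"record-X", b"record-3"]
assert merkle_root(tampered).hex() != published
```

The point of printing the root in a newspaper is exactly what the code shows: the verifier needs only the records and one short published string, and no party, including the operator, can later change a record without the recomputed root disagreeing with the printed one.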

Interesting. Um, and the great thing is that when you've got that type of system in the background, then what you can start doing is, you know, one, it's a very automated system, um, you know, environment, meaning that if I, you know, receive a package from the Postal Service, I can automatically set off a whole series of transactions that will do, you know, customs reporting, pay the import tax, completely automated. To the point where you add AI into that mix as well from a government perspective. Um, now, we had this interesting, uh, aspect, which was the Kratt bot. Uh, the Kratt bot is the Estonian AI citizens', let's say, assistant, um, which was introduced back in, uh, around twenty eighteen, twenty nineteen. And the bot is simply the idea of the Kratt, a mythological creature in Estonia that steals your memories when you're sleeping. I always kind of struggled with that concept. And the idea was that it remembers your choices, your personalization, your interactions with the government or with different systems. And that, you know, personalization means, from a citizen perspective, if I ever fill in a form and I need to fill in another form, I can say, oh, just fill it in the same way I did the last one. Why do I need to repeat things? So the efficiencies that you get when you take a system that's built around, you know, a digital society, digital identity, uh, signature, authentication, authorization, auditability, non-repudiation, um, and all those services get integrated with interoperability. Um, and there's a lot of orchestration in there as well. The actual savings are huge, reducing all the wasted time.

You know, if you go to Ireland, you have to go and get a passport. You have to fill in the forms. Um, the amount of time you waste just getting a passport or a document, or going to a doctor or getting a prescription, the amount of time you waste is huge. And when I think about it, we all talk about what's the most valuable thing in this world. We talk about how, you know, in the past it was oil. Then it moved to data, and then it's been cryptocurrency. And now we're looking at AI being the most valuable resource. But truly, when we think about what the true, most valuable resource in this world is, it's time. It's your time. My time. We all have a finite amount of time on this Earth. Um, and we have to use it wisely. And the best thing we can do is reduce wasted time where we possibly can. Um, and that's the biggest savings in our life. That's, you know, where we can reduce, you know, hours in the week or, you know, days in the year, or even, you know, weeks, reducing wasted time doing something, standing in a queue. How long, you know, do people stand in queues? Um, if you go to Disneyland, you could probably waste half a day or a day of your life just, uh, going to airports, going travelling. And again, when you take that concept, um, of AI and automation and digital society, um, that significantly reduces it down. And one of the biggest projects I worked on in the EU was basically the passport controls. Um, and now you see, basically, you know, how quick and fast it is to go through immigration and passport controls in the EU, especially in Schengen. It's so quick and so fast. Um, and that's the efficiency. So what we can do in our society, especially around using AI, um, the focus should not be to replace us, but it should be to empower us, to reduce waste of time. Um, and that's, I think, the value that we can get out of those types of systems.

Yeah, it makes a ton of sense. So, coming back to the research. So this identity and, let's say, the phishing emails, their quality got better. That's a real threat, right? In terms of manipulating the average user, whether they be a private citizen or within an enterprise. And let's say the spear-phishing aspect of, well, I know specifically what organization you work for, because that's easy to find on LinkedIn. And I can figure out who your boss is, probably, and craft a very well-structured, very, you know, perfect-grammar, perfect-structure email to tell you, I'm your boss, I need you to approve this transaction, et cetera, et cetera. Okay. So that risk is real. What's the next kind of real risk that you've uncovered in your research?

So the next part after that, what's happened is, in the past, when I was working a lot of incident response cases, sometimes it would take the attacker, you know, weeks or months to analyze the data that they've stolen. In most cases there's data exfiltration; you know, even when ransomware was deployed, there's data exfiltration. They're stealing data because they want to sell it. They want to monetize it. They want to get rewards out of, you know, the work that they've done. Um, so data exfiltration is one of the top techniques, which even overtook, uh, you know, ransomware encryption cases as the top method for extortion. So now they're stealing the data, and they've got terabytes of data that they've taken, and they'll go through it and analyze it. When you dump that into a machine learning and large language model and you start querying it, um, now it's gone from weeks or months of analysis to seconds. They take that data and they're now able to say: tell me where there's credentials in this data dump. Tell me where there's credit card information. Personal information. They can simply just start asking it questions. So the data analytics side of things, from an AI perspective, has meant that attackers are now performing at accelerated speed, um, in understanding the data dump. So it means they can even start understanding what type of ransom demand or extortion they can, you know, require or ask or demand from the victim. Um, and they can get a better understanding of the financial aspect of things, like, how much insurance coverage do they have? They might be able to get that from the data dump as well. Um, so what it really comes down to is that now they're able to analyze data at far greater speed than they ever have been. And, you know, before they even start contacting the victim, they've already got that information at hand.
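The kind of triage Joseph describes, asking "where are the credentials in this dump?", used to be slow manual work; the point is how trivially automatable it now is. A rough sketch of the pattern-matching half is below. The patterns, names, and sample text are invented for illustration, this is the same sort of scanning defensive DLP and secret-scanning tools do:

```python
import re

# Naive patterns of the kind a dump-triage or secret-scanning tool
# might start from. Illustrative only, not a complete or robust set.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email_password_pair": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+:\S+"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def triage(dump: str) -> dict[str, list[str]]:
    """Return every pattern hit found in the text of a dump."""
    return {name: rx.findall(dump) for name, rx in PATTERNS.items()}

# A tiny fake excerpt standing in for terabytes of exfiltrated data.
sample = """
backup of users.sql
alice@example.com:hunter2
aws key AKIAABCDEFGHIJKLMNOP
card on file 4111 1111 1111 1111
"""

hits = triage(sample)
```

Running regexes like these over a whole dump takes seconds per gigabyte; pairing that with an LLM to summarize and rank what was found is what collapses "weeks of analysis" into minutes, and the same scan run by defenders before a breach is cheap insurance.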

Interesting. Okay. And that can lead to a lot of things. That can certainly lead to much more rapid exploitation of an identity where they've, let's say, uncovered a credential, a token, a key, whatever. That can also lead to much more rapid monetization, if they found a stolen identity or credit card number that they could go sell on the, uh, what's the word I'm looking for? Dark web. Yeah. Geez. Why is that not coming to my mind? Um, are those the main benefits that you've seen from this kind of analysis of the data?

Absolutely. And it's also used for training as well. They can start understanding: now I need to go and look at another victim. You know, what other security controls can I potentially, you know, exploit or bypass? Um, so they're not just using it to monetize the data itself and to, you know, understand the victims, uh, at speed, uh, from a data and knowledge perspective. But they can also look at it and say, well, what other security risks have I been able to exploit here? Um, and then start trying to repeat that on other organizations. So they can use that to train their own campaigns in order to actually make them more successful as well.

Okay. Interesting. All right. What's next?

So the next part, which was quite interesting, is that, uh, you know, as I'm getting involved, sometimes I get contacted by different agencies, governments, you know, companies and organizations that come in and, uh, you know, ask me to help with their response, or take a look at the incidents and help provide some type of direction. And for many years, the common things I was seeing were data extortion, analyzing the data quicker, and the phishing getting so accurate, so impressive. Then the next thing, interestingly, in the second half of last year, around, you know, let's say July of last year, um, there were a couple of financial fraud cases that I was asked to help and assist with. Prior to that, all the interactions I had with the criminals were through basically messaging or chat or email. So you're communicating with the attackers, um, and you're communicating with real people. Okay. Um, all of a sudden, in one of the financial fraud cases, it was a chatbot. And it started getting interesting: now I'm chatting with a chatbot. What? How can you actually communicate? You know, because it's set up for a specific set of criteria questions. And now there's no empathy. Um, now basically it has a set goal. It doesn't care about the victim. And basically you're communicating with the chatbot.

Now, it told me a couple of things. One, which is quite interesting, is okay: what the attackers are now doing is they're actually becoming more efficient, more effective. They're cutting out the middle tier, because they would have actually paid for services from other countries and other criminals in order to do this. Um, and now what they're saying is, we can save money by going and implementing an AI chatbot that will actually do the translation, do the negotiation, do the ransom demand and extortion, and do everything for you, and do it twenty-four hours, seven days a week, um, at a lower cost. Yeah. It taught me that cyber criminals are also losing their jobs in this, uh, you know, transformation that we're seeing with AI. Um, so they're losing out, and we've seen that with a lot of the, uh, basically scam operations in some countries, which have been, you know, basically getting rid of a lot of people because, uh, they're moving to those, um, AI, um, middle-tier, um, automation systems.

Yeah. Um, and then after that, the next case: chatbot again. So one thing that was interesting: I was trying to figure out, you know, how the chatbots were working in the background. I tried breaking them, jailbreaking them. I'd try getting them to error out or to have problems. And one of the things I found was, with the first ones, when I was actually trying to understand more about how the chatbot worked, um, I was scanning their infrastructure. Um, and when they saw that I was scanning their infrastructure, they disappeared immediately. So they basically just shut it down. The infrastructure was gone. The command and control was gone. So they started seeing that I was actually, you know, not who they thought I was, um, and that I was actually trying to learn more from their setup and infrastructure. The third case was in the September timeframe. Um, then I thought, okay, I'm going to take it longer and slower. I'm not going to scan the infrastructure as much. Um, I'm going to focus more on the chatbot. So what I started doing is realizing that it actually had built-in language translation, um, in real time. And what I found was that if I responded in multiple languages, it tricked the chatbot in how to respond. So what I ended up doing was putting in a payload and then translating it multiple times. So the first part of the message was in one language, the second part was in a different one, and the third part was in a different one again. And it contained payloads, and that basically crashed the chatbot, um, and ended up allowing me to trace back some of the infrastructure, which was interesting, back to Sri Lanka. So okay, so that's where some of the hosting has been done.
So, um, ultimately I reported that, and um, hopefully, uh, sometime soon Sri Lanka will introduce a new law, uh, that will make those types of, uh, you know, uh, scams and setups, uh, illegal. Because that's what we need to get to when we start finding these, uh, types of techniques being used: fewer safe havens for attackers to operate from. Um, it's the legal side of things that basically makes it more challenging for them. Um, and we have to do this at the same time, um, yeah, as we find the techniques.

Yeah. There's a couple of things there that I wanted to dive in on. Number one, by the way, I was going to ask you whether you tried to jailbreak their chatbot, because that would have been the immediate first thought in my mind: okay, you're going to throw a chatbot at me, I'm going to throw the "do anything now", "ignore all previous instructions", like all the kind of common things. But then you hit on the thing that, I will say, we've seen a lot in the work that we've done on AI security ourselves here at FireTail, which is that, um, multilingual is one of the most common kinds of attack vectors against an LLM. And somehow, the longer you make that input, the more languages, the more kind of like boring, boring, mundane, you know, English, French, German, malicious instruction in Finnish, the more likely it is to succeed. And that seems to be true, by the way, across the LLMs that we've seen and where we've observed this. So that's one observation just from my perspective. But the second part is actually something that I'd love to get your take on from a kind of governance and policy perspective. Because, you know, like in every crime movie, every detective mystery, there's always the forensic accountant at some point who's like, yeah, follow the money. And the money trail is always where you need to focus. But I don't know that it's just the money trail. It's maybe a combination of the money trail plus, let's say, potential real-world penalties of, you know, ending up in jail, and the likelihood of that happening. These activities are illegal pretty much everywhere, right? Like, I don't think there's any country in the world, with the exception of, you know, North Korea, maybe Myanmar or whatever, where these kind of...

You'd be surprised. That's the thing. Go ahead. There are countries in Southeast Asia, um, and other locations in Africa as well, that don't have strong enough laws making these crimes illegal.

Okay, so there's lots of safe havens. Um, okay. And, uh, you know, I think you're actually hitting on a really kind of major point that I've been looking at for a long time. Um, I had Geoff White on my podcast over a year and a half ago, when he was launching his new book called Rinsed, which is all about, you know, the money laundering aspect of cybercrime. And in that conversation, we really got into looking at, absolutely, in cybercrime, we look at the digital evidence of things. Yeah. To a certain point of attribution: how far can we go? Just like when I was jailbreaking the chatbot, um, how far can I find out where that infrastructure has been hosted? How can I get it to, you know, spit out a bunch of error messages, uh, in order to try and get it to crash? Um, so ultimately, you're looking at attribution as much as you can from a digital forensics perspective. But at some point in time, it becomes a money trail. And we can learn a lot from, you know, even like the IRS, you know, tax fraud, government, uh, financial crime investigations. If we collaborate between the digital forensics evidence and the money laundering side, I think there's going to be a convergence of both of those at some point in time, where we need to work together. Um, because at some point in time, my expertise stops when I find the location, but then somebody else has to pick it up in order to find out: where did the money go? Right. And typically, at some point in time, those two threads will join somewhere along the line.
Um, and I think that's where getting into the legal and policy side and government side of things, they have a lot more power than we, as, you know, uh, technologists and, uh, you know, professionals in certain fields. Um, they have a lot more strength and power to collaborate and to work together with other nations as well. So I completely agree with you: yes, this is going to be a collaboration between the financial aspect, government policy, nation states, to leave fewer and fewer places where the cyber criminals can operate, both from a, let's say, technical aspect, making the actions they're doing illegal, but also from the financial aspect as well.

Yeah. So you have to tackle both. And really, to your point, it's the convergence where you can really narrow in. Okay. Were there any other kind of learnings from the AI research that you've done, you know, beyond these chatbots and the conversations and the payouts and everything else?

Yeah. So, absolutely, one of the biggest things is, you know, the code generation. The new generations of ransomware are actually being written automatically for them, taking the lessons learned. So as I mentioned, there becomes that feedback loop from what they're doing in the existing campaigns that's working successfully. And, you know, as we are also finding new defensive capabilities, new technologies, in order to make it difficult for them, you know, to detect, to evade, to prevent. Um, they're taking that and actually feeding it back into having the code rewrite itself. Right. To do better evasion, to take advantage of new vulnerabilities that are being discovered all the time. So we've got vulnerability intelligence: when new CVEs come out, we're actually going and saying, okay, now we need to download and patch those systems. They're automatically incorporating those CVEs and the actual exploits into the code immediately. Um, so a large amount of the code being written is now basically using AI generation.

Um, and that means that, you know, the next generations become more advanced, more sophisticated. But at the same time, it does become more difficult for them to manage it. Because one of the things I've found with using, uh, you know, AI to write code: um, I took some of my code from twenty years ago, which was a couple hundred lines of code. And I said, you know, I want to convert this script into, let's say, Go. Yeah. And all of a sudden, it takes a couple hundred lines of code, and now I've got fifteen hundred. I'm going, yeah, yeah. So, you know, sometimes the efficiency gets lost. You might be effective for a short amount of time. Um, but now, as I go back and I'm looking at fifteen hundred lines of code, I'm going, what are you doing? Um, so it means that with code generation, uh, you do have to be very explicit and very detailed as well, to make sure you're only getting the things that you want, and not a lot of the fluff that can be created in that process. So absolutely, the next stage is we're seeing the acceleration. So fewer people need to then work on the code. So those, yeah, criminal gangs and organizations are definitely becoming, let's say, more efficient, more profitable. Um, and, you know, as our friend Mikko is always saying, you know, they're becoming unicorns. Uh, you know, criminal, uh, organized-crime unicorns, um, at the same time. So it means that from a defensive capability, we have to keep collaborating, keep working together, you know, keep moving forward, um, and, you know, coming up with new ideas to keep actually, you know, keeping the world safe.

Yeah. This is one of the areas where I myself personally get a little bit frustrated because I previously spent time working in vulnerability management for one of the big, the quote unquote, big three in the space. And I always felt a sense of kind of frustration looking at the state of vulnerability management, because the statistics on the mean time to patch hadn't improved over, like the entire my entire career, basically, which is not one hundred percent true, because vulnerability management didn't really exist when I started in it. But let's say from the from it was called it was called asset Management Inventory. I mean, it was called like install windows NT Service Pack two. Right. Like that was really kind of what vulnerability management was at the beginning of my career, but but my point is like, you know, the time to to patch was always around one hundred and eighty days. And that was true for like twenty plus years. And I don't know if you spent time either reading or using an LLM to summarize the Verizon dbir for you. Um, I have indeed. I fully, fully cop up to using an LLM to summarize it. It's amazing, by the way, but it is like eighty pages of dense text. And so like sometimes I just want to know a few particular things out of it. And so I threw it to my friend Claude, who gave me a great summary. And anyway, the, the, the long and short of it is like last year was the first time that CV exploitation actually overtook things like breached identities and credential reuse and credential stuffing, account takeover, that type of stuff. And I think it's exactly your point. It's like, you know, this code is so good at scanning systems and knowing like, oh, you've got vulnerability CVE abc123. I have an exploit for that. Or I can go fetch one from this known C2 server. Whatever the case may be. And I just don't understand why more firms aren't auto patching. 
You know, it's funny, if I can take a second: recently I was talking to an analyst firm, and they asked me a question that was so unexpected for me. They said, what is one thing that we're doing in cybersecurity that we should just stop doing, because it's not delivered any results? And my whole argument is, actually, vulnerability management as it's practiced. You need to identify vulnerabilities, yes, but the management process should be: identify, auto-patch, done. That should really be the whole flow. I totally understand that twenty years ago, even ten years ago, the risk of breaking production was high. But I just don't think that risk is all that real anymore.

Yeah, there still is that risk. But the question is, is that risk greater than the alternative? Exactly. And the risk of exploit? Yes. The risk of downtime is manageable and recoverable. So that's the world we see, and I think you can classify systems. Because I also come from a patch management background, and it was always that you're chasing; you're never fully patched. There's always the next Patch Tuesday. Every week there's the start of the week where you begin on Tuesday deciding, okay, where do you need to deploy it, and by the time you get to Friday, you're already rolling back. And I think patching of systems and applications today has become so good that we have less downtime as a result of it. They've been virtualized, they've been streamed. So I completely agree: a lot of systems you can get to auto-patching, as best as you can. There are certain systems, of course, in medical or industrial areas, with controls that might have certificates and quality control, where you may decide, okay, can we sandbox it in a way that limits the impact on those systems? So for certain criticalities of systems you can look at an alternative approach. But for general IT and operations on the organization side of things, the edge devices, the users, they can be auto-patched. I don't see a reason not to. And if there is any downtime, the impact is so low from a financial-cost perspective. Yeah, we can look at the CrowdStrikes. Yes, those things will happen, but it's very rare these days. I worked in the times of mini filter drivers. And if you got the filter driver wrong? Yes.
And you conflicted with something else at the wrong altitude, and you got the BSOD. Yes. But how many BSODs do you have today? Very few. Very few. Microsoft has, with the mini filter driver model, separation and isolation and a large number of altitudes, made it very seldom that you ever do see one. There are scenarios where they do happen, but is that much more costly than having a ransomware attack?

Exactly. And to your point about, let's say, classifying systems, we've got to be well past the eighty-twenty, right? At least eighty percent of systems you should just put on auto-patch and reduce your workload, reduce your attack surface, reduce your vulnerabilities. Just massive wins, especially with containers and virtualization.
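The "classify, then auto-patch the eighty percent" idea discussed above can be sketched as a simple routing rule: everything goes to auto-patch unless it carries a certification or safety constraint that forces a manual or sandboxed track. This is a hypothetical sketch; the field names and assets below are illustrative, not from any real asset-management system.

```python
# Hypothetical sketch of routing assets into patching tracks:
# auto-patch by default, manual review only for certified or
# safety-critical systems (the medical/industrial exceptions
# mentioned in the conversation). All data is illustrative.

AUTO_PATCH = "auto-patch"
MANUAL_REVIEW = "manual-review"

def patch_policy(asset):
    """Decide the patching track for one asset record (a plain dict)."""
    if asset.get("certified"):        # e.g. medical/industrial certification
        return MANUAL_REVIEW          # patching could void the certification
    if asset.get("safety_critical"):  # physical or production-line control
        return MANUAL_REVIEW          # sandbox or schedule a controlled change
    return AUTO_PATCH                 # the eighty percent: patch automatically

assets = [
    {"name": "laptop-042", "certified": False, "safety_critical": False},
    {"name": "mri-console", "certified": True, "safety_critical": True},
    {"name": "edge-proxy-7", "certified": False, "safety_critical": False},
]

if __name__ == "__main__":
    for a in assets:
        print(a["name"], "->", patch_policy(a))
```

The design choice worth noting is the default: auto-patch is the fallthrough, so an asset has to earn its exception explicitly, which keeps the exception list small and auditable.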

Yeah, you can do that at scale. Again, as I mentioned, it comes down to those industrial, production-line systems, the physical systems that have certifications, where if you patch it, then you become uncertified and you can't use it. Those are the things that become more challenging.

Yeah. Any other key takeaways from your research on kind of AI powered attacks?

I think the biggest one is, of course, knowledge sharing. There are lots of LLMs out there that have been jailbroken, that are purposely run with the guardrails off. So from a knowledge-sharing and education side, it means that for attackers and new script kiddies, the knowledge barrier is much lower. With just an internet connection and a simple computer, they can become very sophisticated very quickly. So that's another area: the entry into an attacker's career path is becoming much more accelerated, because of the knowledge, and because a lot of the systems you can get as a service today.

Which means that we have to focus on how we get the education and knowledge path on the defense side of things much more accelerated as well. Overall, I think the defense side is using AI a lot more than the attackers today, because a lot of the basics still just work for the attackers. They're just using it in certain areas, whether it's to improve initial access, to improve data analysis, or to improve their efficiencies, their costs, and their learning paths. So they're definitely investing in the areas where they can get improvements, but a lot of the basics still work: the credentials, weak passwords, the vulnerabilities. All of those things still work, and they're still using them to gain access and to cause damage to lots of victims.

The defense side is where we're using AI a lot more. For one, penetration testing has now been AI-automated, and SOC detection engineering is now getting AI-automated, where you're basically taking all of the knowledge you're getting across multiple organizations and creating a much more AI-based knowledge approach, so that now you've got the best knowledge. But again, that's for the attacks that you know of; that's where you benefit, the knowns. The learning challenge is the unknowns, which is why we have to continually have people evaluating and applying that knowledge as well.

So the great thing is, from a defensive perspective, we're also moving forward and improving. And we're still here, as Mikko said at the keynote. That's right. We're still here. The speed at which both the defenders and the attackers move varies slightly from year to year. Sometimes they get further ahead, sometimes we catch up. But let's hope they don't get too far ahead. It means we always have to keep innovating, keep collaborating, keep coming up with new ideas, new techniques. And it's not just from a technology perspective, but also from a policy and strategy perspective: awareness, knowledge, getting more people into the field. Because my concern is that we're getting fewer people interested in this field. We have to make it much more attractive; it's twenty-four-seven and too stressful for a lot of people. So we have to make sure there is the opportunity to help, and to think more, when people do get into this industry, about mental health, about taking care of people and making sure they actually get time out. Because it is twenty-four-seven; sometimes you have to force yourself to switch off, because literally your job is in your pocket with your mobile phone. Yeah. So we have to be more attractive to the new talent that's coming in as well. But I have hope. I mean, I do as well. I always get scared, but I have hope that we will all work together. And I think that from a legal perspective, a technology perspective, and a society and people perspective, if we get that right, it will make us stronger and more resilient.

Yeah, I think that is a great note to end today's conversation on. Joe Carson, thank you so much for taking the time to join us on Modern Cyber. If people want to learn more about you, the work that you're doing, some of that research, what's a good place for them to check out?

Absolutely. Social media is probably the most dominant for me. I'm on LinkedIn, so search for me there; you'll find my profile quite easily. That's where I share a lot of my research and a lot of the upcoming talks. You'll also see my podcast, Security by Default; you can go take a look at some of the previous guests from there. And then on the Segura website I release a lot of my reports and ebooks and research as well. So those are definitely the places that make it easy to connect with me. And if you reach out and have questions, I'm always willing to do my best to answer them.

Fantastic. We'll get those linked in today's show notes, and we'll get this pushed out. And, uh, I'll just say thank you again for taking the time to join us. We will talk to you next time on the next episode of Modern Cyber. Bye for now.

Protect your AI Innovation

See how FireTail can help you to discover AI & shadow AI use, analyze what data is being sent out and check for data leaks & compliance. Request a demo today.