Modern Cyber with Jeremy Snyder - Episode 68

Noora Ahmed-Moshe of Hoxhunt

In this special in-person episode of Modern Cyber, recorded in Helsinki, Jeremy Snyder sits down with Noora Ahmed-Moshe, VP of Strategy and Operations at Hoxhunt, for a critical discussion on the evolving human risk in cybersecurity.

Podcast Transcript

Welcome back to another episode of Modern Cyber. It's a real treat whenever we get a chance to sit down with a guest in person and talk face to face, so I'm really excited about today's episode. We're going to go into some topics that are always relevant. I was going to say more relevant today, but they've been relevant for thirty-some years. We're going to talk about the human aspect in cybersecurity, some of the risks around humans as the attack vector, and what we can learn over time. And I'm delighted today to be joined by Noora. Noora, thank you so much for taking the time.

It's so great to be here. Thank you for inviting me. Awesome.

Could we start by just getting a little bit about your background and your current role at Hoxhunt? I guess I'm pronouncing that right. Yes, yes. So hi, I'm Noora Ahmed-Moshe, I'm the VP of Strategy and Operations at Hoxhunt. We are a Finnish scaleup company founded here in Helsinki, not too far from where we're sitting today, by four visionary founders who are very much still running the company today. Our CEO, Mika Aalto, and our CTO are very visionary; they started with the challenge of human behavior and how that relates to cyber risk. My background is in product development, so I've been working in product management roles in some capacity for pretty much twenty years. I studied media and cultural studies and digital media in the UK, and my personal passion has always been the intersection of humans, technology, and society: how we as humans interact with technology and how it shapes the way we actually see ourselves. And I love working in cyber for that reason, because before I worked in cybersecurity, I always thought of cybersecurity as something extremely technical. What I've come to learn is that actually, it's so much psychology, human behavioral science, and all of those sorts of things.

Yeah. Awesome. It's really interesting. I always tell people that for the current generation going through university now, there are cybersecurity programs. But for people who have been in this space for ten, twenty-plus years, there really weren't. Most people that I know didn't study cybersecurity at university, right? These programs didn't exist. I myself, my background is in linguistics. And, you know, you kind of end up in cybersecurity because it's interesting, it's compelling, and it's important, right?

Yep. For me, the meaningfulness of work is also super important. Every day I get up and I work on helping people not become victims of cybercrime, and it feels really meaningful. When my daughter was about five years old, she put it as, mommy's job is fighting baddies online, and I love that. But I also feel like it's so important for us as defenders to have that diversity of different skills and different viewpoints. We need, of course, highly technical people, but we need psychologists, we need sociologists. We need so many different angles to look at the problem, because the attackers are using so many different ways of trying to get through to people and break in, and we need those skills on the defense side as well. Studies across the board show that the way teams work well and gain success is by having a diversity of viewpoints. So we need that in terms of the skills and types of people that we have working in cyber as well.

Yeah, for sure. I actually always stress that whenever people ask me about the early Firetail team. There was a point in time early on, and it's no longer true, but there was a point in time where we didn't have any two people who had the same passport. And with that diversity of opinions and backgrounds, we could see very different opinions about how we should tackle a problem. And a lot of the time, tackling the problem in cybersecurity, at least for the work that we do, is a mix of, let's say, data science and gathering information, but then how you present it to the user so that they can understand it; getting information out of data is actually a really big challenge. I'm curious about the work that you guys do. You said something to me in the pre-episode conversation about Hoxhunt analyzing some crazy number of data points per day or per week. How do you filter that down? Not from the technical perspective, but how do you think about analyzing that data into something that the users and the customers can learn from?

Yeah. So we are really lucky in that we've grown so much that we have so many data points. We have so many reported threats that come to us, and we do manually analyze all of those threats. I think the number I gave was about five hundred thousand on a monthly basis: real threats that are reported by millions of users. Yeah, we should, by the way, probably tell the listeners what kinds of threats we're talking about here, because there's a huge range of threats online. What specifically are you guys looking at? So basically, our users report any suspicious emails that come through their inbox using a button that's in their email. We've also got reporting of text message smishing, things like that. But mainly, phishing is still the biggest human-layer threat vector, so most attacks still come in through phishing. Users are often trained in different ways about these types of threats. But the problem is that if we tell people to look out for certain types of threats, it's like cat and mouse with the attackers the whole time, so they'll come up with new ones. There's no point in telling people that the threat to look out for is a DHL package delivery, because that's been and done, so the attackers aren't going to be using it anymore. They are constantly evolving the way that they approach the victims, and with the use of AI, the sheer number of phishing attacks and phishing emails has actually exploded. It's gone through the roof. Yeah. And what of course makes it much easier for the attackers is the use of AI in developing those emails. They can do specific research into the victim very easily, very quickly. Stuff that used to take days of manual work, looking into a victim and trying to attack them personally with a spear-phishing approach, you can now do in minutes with a prompt online. You can send out so many different targeted emails to different victims. We see this across the board, also in languages that are more obscure on the internet. Here in Finland, it used to be quite easy to spot phishing emails, because the Finnish was terrible; Google Translate from back in the day couldn't write a proper phishing email that a user would easily fall for. But now, of course, that problem is pretty much gone. There are also themes that trend in phishing emails. We see certain things that are always trending, like CEO impersonations; those are much easier and more effective now, with all the information available about the CEO and their tone of voice, and attackers are developing deepfakes to make the attack more relentless and two-pronged. So those are always popular. Then there are the contractual scams, like DocuSign impersonations, and we see Microsoft impersonation emails a lot of the time. What we're looking for in the data is, of course, new trends, because once we analyze the threats, we then turn around our training emails to reflect the current landscape.
Because, as I said, there's no point trying to train people on, you know, a DHL package or a million-dollar win from a Nigerian bank; these are very easy to spot. And anyone can fall for a phishing attack; it doesn't mean that you're not aware of the problems. It's so psychological and so psychologically targeted that we need to train people on what kinds of threats are actually out there, what kinds of threats might, right now, target you as a person. Because those are the ones that will get you. If you're busy and it appeals to you, say you're an accountant at the end of a financial year and you get an approval request from your CFO at a moment when it's timely and relevant, then of course you're much more likely to fall for it. So what we look for is how we can then train all of our millions of users on those very current threats. When we see new threat trends emerging, we turn around the new training in as little as twenty-four hours, because we've been an AI-first company really from the start, in the sense that our learning paths are automated and totally personalized for each person. What your job role is, how you do in the training, and what's trending out there all impact how you will get trained and what sorts of emails you will get trained on, because that's the way we get the relevant training to the person. And I need to add that it's also gamified, so that when you do well in the training, you progress in a game. This is a big reason for the level of engagement that we get: people, adults and children alike, love to be part of a game where they see themselves gaining stars. Yeah, getting that positive reinforcement. What's also crucial in security is that we sometimes talk about people as the biggest problem, and I really don't like that. What I always believe, and it's not just a personal belief, it's backed by behavioral science studies across the board, is that when it comes to social engineering attacks, human behavior is the deciding factor: what the human does when they get approached by something can make or break what happens to an organization, or it can open the door to a really terrible attack. So when we want someone to change their behavior, if we apply punitive approaches and lead with fear and fear of punishment, we don't get people to change their behavior. That's been found in behavioral science studies across the board, decades and decades of research, with rats but also with humans, in schools, in prisons, in every kind of setting. If you ever deal with children and you want them to change their behavior, if you give positive reinforcement and use the carrot rather than the stick, you see a completely different level of behavior change. Yeah.
But in order for that behavior change to be meaningful, of course, we need the materials and the themes that we train people on to be relevant as well.
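
To make that loop concrete, here's a minimal sketch of the kind of pipeline described above: bucket reported threats into themes, surface what's trending, and pick a training theme that fits both the trend and the individual user. All names, fields, and thresholds here are invented for illustration; Hoxhunt's actual pipeline is not public.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class ReportedThreat:
    theme: str    # e.g. "ceo_impersonation", "docusign", "microsoft_login"
    channel: str  # "email", "sms", ...

def trending_themes(reports: list[ReportedThreat], top_n: int = 3) -> list[str]:
    """Most frequently reported themes in the current window."""
    counts = Counter(r.theme for r in reports)
    return [theme for theme, _ in counts.most_common(top_n)]

def pick_training_theme(user_role: str, user_skill: float, trends: list[str]) -> str:
    """Personalize: a finance role sees CEO/invoice lures first when trending;
    otherwise weaker performers get the most common trend, stronger ones a
    rarer, harder-to-spot theme. Thresholds are made up for illustration."""
    if user_role == "finance" and "ceo_impersonation" in trends:
        return "ceo_impersonation"
    return trends[0] if user_skill < 0.5 else trends[-1]

reports = (
    [ReportedThreat("ceo_impersonation", "email")] * 5
    + [ReportedThreat("docusign", "email")] * 3
    + [ReportedThreat("microsoft_login", "sms")] * 2
)
print(trending_themes(reports))                     # ['ceo_impersonation', 'docusign', 'microsoft_login']
print(pick_training_theme("finance", 0.4, trending_themes(reports)))  # ceo_impersonation
```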

I want to pick up on a couple of things that you said there, because a few things stand out to me. Number one: we recently had a guest on from Estonia who also mentioned that the quality of phishing emails is just going up. Yeah, and Estonian is a very closely related language to Finnish. They had the same thing, and they went through a national training program that they've put out, I think through the schools and through the government. They have their whole e-residency thing; a very digitally advanced society. How do you now talk to customers when the quality is just way better, and the attacks can be personalized? They can see that, oh, you're Noora, you live in Helsinki, you work for this company. Maybe they can even find out who some of your vendors are online through what's posted publicly, customer references, and so on. So I can make a very, very targeted thing at you and say, oh, Noora, here is something very specific to your role in strategy that is going to look so genuine. How do you think about training users to spot something that looks so perfect nowadays?

You know, the only way that you can physically do that is by using AI cleverly on the defense side. So what we've developed is an AI spear-phishing agent which actually does the training using the same methodology. It will come at you as Noora, with all the knowledge, with all the background, with all the understanding. Exactly. And it's been super interesting talking to customers about it, because when you zoom out of the cutting-edge AI companies and circles that we're in, working in cybersecurity, we see the risks and we see the opportunities. But we work with lots of different types of companies globally, and one of my favorite parts of my job is talking to different people about what's happening in their organizations, because that's the way you get close to what the human risks are, and that's what pushes the product and the work you're doing forward. It's the only way. Of course, you have to be fully aware of the threat landscape, but we're dealing with humans, and we're dealing with so many different organizational contexts as well. Organizational culture is also one of my deep passions, and it's closely related to cybersecurity. Cybersecurity culture is a massive defense mechanism. Yeah, yeah. And sometimes leaders in organizations can feel quite scared that if you start spear phishing their employees, even in a training context, is that unfair on them? For instance, as an example, you could develop an internal job promotion opportunity that comes from your CEO, reflecting the specific interests that you want to develop: something you've been doing some writing about on the side, some particular technology that you're an expert on, and the company is offering you this opportunity to speak about in this role. It's very psychological. So if it turns out that I was scammed, even as a training exercise, some people feel that's a bit too sensitive. But, as I said, we never want to instill fear in a person, because we don't want that person to have a negative experience with security. Most employees in companies don't think about cybersecurity as a major concern in their particular role, because they have another job to do. That's right. Their main focus is somewhere else. When we need to engage them and bring them with us on the defense team, because we do need everyone to defend, we want to use positive reinforcement. So when we talk to companies about doing these kinds of spear-phishing exercises, and also deepfakes, because we do deepfake training as well, and the forward-looking companies are doing these deepfake exercises within the company, it's a cultural education piece, in my opinion. We need to push the boundaries of the training, because otherwise we're just not meeting the real threat landscape. I think it's as simple as that.
And when we build that engagement, like in our case with the gamification, we've created that engagement and that really positive feeling about what we're doing, and we've woven it into the fabric of the workflow.

So I want to pick up on that gamification, because I think a lot of people listening will be like, oh, okay, this is just more annual phishing training. And a lot of people, and I can neither confirm nor deny that I've had this experience myself, complete our annual cybersecurity training like this: I know people who put it on a second screen so they can just let the video play while they do their other work, and then go back and click the buttons at the end of it just to get through. But roughly zero percent of that is actually absorbed into the brains of the listeners. Have you seen that gamification really changes the effectiveness of the training?

I mean, absolutely, definitely, yes. And I couldn't agree with you more. Even setting aside someone who has skipped the cybersecurity training, imagine someone who's not really concerned about cybersecurity, who doesn't understand their particular role or how they might get targeted. Even if you were interested in cybersecurity, you'll skip through another company expense system training or whatever, because you won't be interested in that. So there's this question of what it is that you're trying to achieve by doing a training. Do you want to tick a box to say we have trained our staff, we have pushed this information out on them in a one-way method, and it's not my problem whether it works or not? What you can achieve that way is compliance. And there is a time and place for compliance; there are many frameworks by which we have to be compliant in order to legally operate as a company. But that's not going to help us when the actual attack comes through and the attack is successful. The way the human brain works is that we have a lot of automated mechanisms in there, a lot of ritual ways of doing things, and only exercising the real muscle will help us actually act in that situation. And you want to do it if you get something. When you get a reward and you get entered into a game, you get that dopamine even from being in the game.

But I worry about that a little bit, and I'll tell you why. There are a lot of complaints nowadays about social media, and I myself am basically only on one social media platform at this point: the so-called TikTok-ification of things. Because to your point, humans rely on this dopamine as positive reinforcement, and we don't really know it, let's say, unless we actually sit down and talk about it. But when you're scrolling videos on TikTok or Instagram or whatever, it's dopamine hit, dopamine hit, dopamine hit. Is there a line at which you go from gamification and positive reinforcement, where you're learning, to where you just cross into, I just need my next dopamine hit? Is that something that you worry about in presenting the training in a more fun way to the users?

I love this question, and I love thinking about this philosophically. One of the things I said when I introduced myself is that my real passion in life is how these technological advancements actually shape our human experience, and I think we're at the core of that here. That dopamine hit that we are used to is also what the attackers are using. So the thing is, if we don't manage to turn it around for good, then we can say, oh, we don't want people to play a game, if we philosophically disagree with getting people excited about being in a game. But what we're doing is using that mechanic for good. And I do think that is a worthy cause if we're helping people defend themselves and their organizations, their societies, their countries, because that dopamine hit and that instant gratification is also a massive reason why many attacks succeed. Some social engineering attacks will appeal to a negative emotion, so fear and a sense of urgency, and some will appeal to a positive emotion, like a reward, like greed. Greed is the oldest one; you could probably scam someone with a papyrus saying that you're going to get a piece of gold if you go and do this thing. So I believe that one of the most important skills that we need to hone and develop as humans in general, as technological advancements like this exponential development of AI keep rapidly changing things, is our critical thinking ability. And I think that comes back down to the whole scrolling of TikTok, to cybersecurity, to so many of the world's problems that could be avoided if we had a lot of critical thinking. And you do it in the moment. With that training trigger, the way that we do it is obviously not that you can just go and play a game whenever you like, because you'd get addicted to the game. You get served these training moments at random intervals, so you're not expecting them. Because of course, if you're expecting it, to your point about the annual training, how is that going to help when you know that it's training and everyone gets it at the same time? It takes you away from the mindset that you would have if you were face to face with a real attack, which is going to catch you by surprise. And again, with the AI developments, it's going to be even more contextual, even more timely; it's going to reference a project that you're currently working on. So when you respond to it in that gamified moment, that is what develops the muscle, and that is also what develops the critical thinking ability. Whether you fall for it or catch it whilst you're in the game, that's going to help you apply that critical thinking in the face of a real attack, and hopefully in the face of other misinformation that you see, and so on.
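
As a toy illustration of that random-interval idea, a scheduler along these lines keeps simulations unpredictable, unlike an annual course everyone gets at once. The window bounds below are invented for illustration, not Hoxhunt's actual values.

```python
import random
from datetime import datetime, timedelta

def next_simulation_time(now: datetime, min_days: float = 3, max_days: float = 21) -> datetime:
    """Schedule the next phishing simulation at a random point inside a window,
    so the user can't anticipate when training will arrive. The 3-21 day
    window is a made-up assumption for this sketch."""
    return now + timedelta(days=random.uniform(min_days, max_days))

print(next_simulation_time(datetime.now()))
```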

Interesting. I want to change gears for a second. I know you guys have done a lot of research lately. There was something that you mentioned to me; I don't have it right in front of me, but I think you have it over there. Talk to us a little bit about what you've observed in the last couple of years. How have things shifted, and what have you learned from the research that you guys have done?

Yeah, yeah. So as I mentioned, we've always been very much an AI-first company, and for the last couple of years we have been studying how human red teams, those cybersecurity experts on the defender side, ethical hackers trying to devise phishing emails that are as good as possible, compare against AI trying to devise phishing emails that are as good as possible. This research has also been a passion project of our CTO, who from the start has been very much on the cutting edge of using these technologies for good. When we first started the study, this was in two thousand twenty-three, humans were still significantly better than AI at phishing people. Better is determined by the phish success, the click rate. Yeah. Or, if you think of it from the user's perspective, the fail rate. So basically, success for the attacker is when you do what they want: you give your credentials, or you download the malware. And for anyone listening who's not particularly in cybersecurity, it is often a step-by-step approach. Once the attacker gets in, they probably don't get everything they want right away, but they can gain lateral movement inside an organization and impersonate someone, and that way they can do a massive amount of damage. But yeah, when we started the study, as mentioned previously, we had a wealth of data; we've got millions of users, so we're able to do very large-scale studies, which is a beautiful thing, because we are passionate about pushing this whole industry forward and helping bolster the defense as a whole. Yeah. So yes, in twenty twenty-three, when we started, the AI was thirty-one percent less effective than humans at phishing the victim. Okay. In November twenty twenty-four, the AI was only ten percent less effective than humans. Okay. And then this year, in March twenty twenty-five, the AI overtook the human: it was now twenty-four percent more effective. Oh, not just overtook, but overtook by a pretty good margin. By a really good margin. So we went from thirty-one percent less effective, to ten percent less effective, to twenty-four percent better. And this is in two years.
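
For readers who want the arithmetic spelled out, here's a tiny worked example of how those relative-effectiveness figures relate to simulation fail rates. Only the -31%, -10%, and +24% deltas come from the conversation; the baseline fail rate below is a made-up number chosen to reproduce them.

```python
def relative_effectiveness(ai_fail_rate: float, human_fail_rate: float) -> float:
    """How much more (+) or less (-) effective AI phishing is versus human
    red teams, where effectiveness is the share of recipients who fail
    (click, submit credentials, etc.) in a simulation."""
    return (ai_fail_rate - human_fail_rate) / human_fail_rate

human = 0.10  # hypothetical human red-team fail rate, for illustration only
for label, ai in [("2023", 0.069), ("Nov 2024", 0.090), ("Mar 2025", 0.124)]:
    print(f"{label}: AI is {relative_effectiveness(ai, human):+.0%} vs. humans")
# 2023: AI is -31% vs. humans
# Nov 2024: AI is -10% vs. humans
# Mar 2025: AI is +24% vs. humans
```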

Yeah. And we're at the beginning. We're at the beginning, and I always tell people: I know that ChatGPT has been out for a while, so we've had LLMs and GPTs for a little while now. But those who know us at Firetail know what we do: we help organizations with AI adoption, to help them adopt securely. We've been referencing some data points recently, not our own, but others that we found. One study from MIT showed that across different industries, only about ten percent of AI projects are in production, right? So to your point, we're at the very beginning of this. Yes, we've had the technology for a couple of years now, but a lot of organizations didn't know where they were going to use it. They thought, oh, it's going to solve everything, and then they would test it and realize, oh, it's good at this, but not good at that. And that's true of almost every technology, right? We saw cloud as a major enterprise adoption thing ten-plus years ago at this point, and there were organizations who were like, oh, we're going to move everything to the cloud, it's going to be perfect. And it's like, well, it's not great for every use case, necessarily. But yeah, we're just at the earliest stage, right? So if only ten percent of use cases are AI-assisted at this point, what does that say for things like phishing attacks and human manipulation going forward? Do you expect it to make up this ninety percent of ground, or do you expect it to continue to get worse before it gets better? What's your thinking? What are your projections?

So I think it's super interesting. If we look at it from a fearful perspective, in terms of how attackers are developing their methods, I believe they're going to get more and more effective, more and more targeted. They're going to have more and more ways to get people to hand them the keys, so they can bypass technical layers more easily. Because whatever we do to close the access down, if one person hands you the key to the door and you get in, you're in, and there's no technical breaking involved. And the use of AI has helped so many non-technical hackers. You needed some kind of technical expertise in the past to do any hacking; with these tools that are readily available and becoming more and more sophisticated, the hacker needs very little technical expertise and very little money. They don't really need much. So my projection, certainly for the near future, is that it's going to become more and more relentless. There's more and more agentic stuff going on, so the human needs to do less and less. The AI will react on their behalf. It'll send a follow-up, it'll tailor the approach, it'll keep attacking through different channels. It'll send the deepfake to follow up on the email that already looked like it came from your boss.

I wanted to ask about the deepfake in particular, because here's the thing: I do a lot of public speaking. Yeah. I have my podcast; you can get my voice more than you probably care to, which is fine. And I also have our blog, with probably over one hundred posts that are pretty indicative of my writing. Yeah. It would not be very hard to deepfake me. Is that a problem? Is that something that, for instance, my employees should worry about, like, oh, if I get something from Jeremy? I have an internal rule with my team, which I'm not going to share on this podcast, about what to watch out for. But what do you tell people who have the same question? Because there is a lot of value in doing public speaking and education and so on, and I get a ton out of it, and I really enjoy it. But there's part of me that says, well, crap, every time I do this, I'm also giving more of myself to people who could capture it for malicious purposes. What do you tell people about that?

Yeah. So again, I think this is such a core issue for where we're at with the development of technology. We cannot stop the development of technology; we cannot stop the development of AI, or criminals using it for their malicious purposes. What we need to do is embrace it and use it for defense purposes. So when it comes to deepfakes, which are such an interesting and very much a trending topic at the moment, for want of a better term, criminals are of course utilizing them. So much so that when we train people on deepfakes, if we can do an example on the defense side, you can experience in real life what it feels like when you get a call from your boss and it isn't actually your boss. Again, it develops that muscle; you learn to understand what it feels like when somebody is trying to scam you with a deepfake. But I also believe in the whole cybersecurity culture, the good culture aspects that are fundamental in the defense. I talk a lot about psychological safety as a key pillar of good security culture. Yeah. I believe that when you have really good psychological safety in an organization, you are less likely to fall for a deepfake, because you're more likely to verify, to ask for help, to pause, to stop. I have a good relationship with my boss; not everybody can say that. But if I personally got a video call or a phone call from my boss about approving some kind of invoice, or giving some information to somebody, I don't mind sending him a personal message during late hours and saying, what on earth is this thing? What's this million-dollar invoice from Firetail? Yeah, exactly. And that is just one example. There are a lot of organizations where people genuinely would fear repercussions if they didn't act on an authority's request immediately. And again, this is fundamental human psychology versus cutting-edge technology. We need to use cutting-edge technology to train against the cutting-edge technology on the adversary side, but we equally need to keep developing these very fundamental human capabilities: psychological safety, critical thinking. At the end of the day, you are the final frontier as the defender, as the human, but also potentially the giver of the keys to the kingdom to the attacker, so you need both of these things. You are owed, I believe, as an employee, that your organization tries to equip you technically with the skills, and you should be able to believe that you have all the possible protections in place: that you're using AI, the same methods the adversaries are using, to train people, but equally using that engagement, positive reinforcement, psychological safety, and encouragement of critical thinking.

That is such a good point, because it's actually quite the opposite of how most organizations work today. And I think, in fact, the larger they get, the worse it becomes. It becomes this thing where it's like, hey, welcome to the organization, do your annual training. There's a security team over there, we all hate them, we think of them as the Department of No, they're a necessary evil. Even though somewhere in our company principles we have "cybersecurity is everybody's job," and we say these things, we don't behave that way, right? We give this annual training that people cut corners on; they watch it on a second screen, they just get through it. And then we blame people for failures instead of praising success. It's such a mind shift that really needs to happen more and more. And again, the larger the organization, the more difficult it is to change; changing anything in a larger organization is more difficult than in a smaller one. Such a great point, Noora. We're coming up on time here, we've got just a couple of minutes left, and I want to get through a few quick things. So first of all, where can people find this report that you guys did?

On our website, under resources. I think it's titled AI outperforms elite human red teams, but you'll find it on Hoxhunt.com. We'll have that link in the show notes as well.

What do you think about the next two years of AI advancement? If we've seen this gain from thirty-one percent worse, to ten percent worse, to twenty-four percent better, what would you tell people to think about as they consider their own workflows or their own risks for their organization over the next couple of years? Do they need to be on red alert, or watch out, or should they turn email off? What would you tell people for the next couple of years?

I believe that we need to embrace all of the technological changes. As I said, as humans, our evolution is intricately bound up with the technological advancements that we're able to make, literally from making fire to making AI. We can't stop development, and we shouldn't try to stop development. I think we're at a very interesting point with AI, as you said, with large organizations in particular, who may not have been able to embrace an AI-first mentality in their processes as such, but are worried about employees' use of AI tools. You mentioned cloud, and I also liken it a lot to the mobile app craze; we're both old enough to remember that, right? Oh yeah. Do you remember the time when everyone wanted to make a mobile app, there should be a mobile app for everything, and everything should be done by mobile app? Yeah. With AI, the implications of what can happen with the safe or unsafe use of the technology are obviously exponentially greater, I think, and that's the really exciting thing about it. But going back to the mobile app analogy, if we think about it technology-first, we're going down the wrong track. We need to be thinking about what the technology is achieving. When we're defending against cybercrime, what is the attacker using it for? What are they able to do? That way we're also able to use it on the defense side. What are we able to do with it? What do we need to achieve with this technology? Of course, we need to secure the tools that we use in companies, we need to have policies in place, and we need to educate people on what is safe and unsafe, on which tools can be used. But will we be using all the same tools in a year that we're using now? We don't know; things are changing so rapidly, and the advancements just get faster and faster. And I think the only answer to that can be to embrace the change. All of life is in a constant state of flux. That's true of technology, and it's literally true in nature; that's just life. So if we don't embrace the change, then somebody else will, and the attackers will. And I am really excited by where we're at; I'm really excited to learn where we go. I'm an optimist at heart, so I believe that as defenders, we're able, at the end of the day, to win and to apply AI better. But of course, it's cat and mouse with what the adversaries are doing: they learn what we're doing, we learn what they're doing. My projection is that in the next couple of years, on the consumer side and certainly the employee side, among people who aren't as au fait with technology, we'll probably see this mobile-apps-gone-out-of-fashion type of thinking pass. And then maybe the real use cases, and how we really learn to live in this new phase of human evolution enabled by AI, will start to settle. But the technological advancements will never stop; they will only accelerate.

I think the message there that I take away, and that I think is super valuable: embrace the change. There's no going back; you will lose if you try and fight the change. Yeah. Well, I think that's a great spot to end today's episode on. Noora, thank you so much for taking the time to join us. For everybody here at Modern Cyber, we will talk to you next time. Thanks so much. Bye bye. Thank you.
