In this special in-person episode of Modern Cyber, Jeremy sits down with cybersecurity icon Mikko Hypponen at RSA Conference 2025 in San Francisco. Surrounded by the energy of the industry’s biggest event, the conversation dives into the current state of AI in security, LLMs discovering vulnerabilities, and the emerging threat of AI-powered ransomware gangs. Mikko also shares insights on geopolitics in cybersecurity—from North Korean developer infiltration to Russian ransomware operations—and reflects on Europe’s shifting trust in U.S. tech. This episode blends deep technical insight with broader industry trends and personal reflections, recorded steps away from the Moscone Center.
About Mikko Hypponen:
Mikko Hyppönen is a globally recognized cybersecurity expert and the Chief Research Officer at WithSecure. With over 30 years of experience in the industry, Mikko has analyzed some of the most significant malware outbreaks in history, such as Love Letter, Melissa, and Stuxnet. He is a sought-after speaker at international conferences, a TED Talk veteran, and the author of the acclaimed book If It’s Smart, It’s Vulnerable. Passionate about cybersecurity education, Mikko has been a driving force behind initiatives like the Museum of Malware Art, showcasing how digital threats can inspire artistic creativity.
Mikko's Website: https://mikko.com/
WithSecure Website: https://www.withsecure.com/en/home
Mikko's Linkedin: https://www.linkedin.com/in/hypponen/
Mikko's X: https://twitter.com/mikko
Alright. Welcome back to another episode of Modern Cyber. Just like last year, we had the fortunate opportunity to sit down with Mikko from WithSecure and talk again about what we learned at RSA this year. Mikko, thank you for making the time. Thank you for having me.
Almost at the same place. We are at the Marriott Marquis right next to the Moscone Center, exactly like we did a year ago. Like, maybe five meters over in that direction is where we sat last year. And it was a great conversation last year, and I think the thing that I wanna start off with is exactly the same thing that I started with last year. What did we learn?
Well, we learned that there is a lot of hype in the industry still about AI. It is Yes. Continues to be the main topic. Yep. We had zero trust as the main topic for maybe two years.
Yep. Yep. Since last year, it's been AI, generative AI in particular, and it continues to be the main thing. And it's not just hype. There's very real things happening.
Last year Yep. There were zero discoveries of vulnerabilities made completely by large language models, hands free. Okay. Now we have more than 10. So it's happening, and it's happening faster than we think, faster than I thought.
So what kind of discoveries are we talking about here? Talking about large language models which are given access to source code and Okay. Given the task that, you know, go and find bugs. Out of the bugs that you're able to find in the source code, find the ones which are remotely exploitable, then go and write code to exploit them. But for the bugs that are remotely exploitable, we're going on the assumption that this would be in an application that is Internet facing and has, let's say, an Internet accessible network connection, all of that stuff.
For example, some of the zero days which have been found have been from browsers. Okay. So, you know, applications like that. Okay. And the good news is, out of the ones that we know of, they've all been found either by bug bounty hunters or by security companies like Google's security teams or XBOW, companies which find these to report them responsibly so they get fixed.
And that's great. I mean, this is improving security. However Yep. Some of these worry me a little bit. So for example, I know one guy in Finland who used to work at WithSecure who is now going solo.
He's basically earning his living with bug bounties, and he's found several zero days with open source large language models like Qwen and DeepSeek. Yep. And if he can do it, one guy with one computer with one NVIDIA GPU, if he can do it, a lot of other people can do it as well, and not all of those people will be the good guys or bug bounty hunters or security companies, and that's the thing that worries me. However, I'm not aware, and I didn't hear during RSA about any discovery where we would know that attackers have found zero day vulnerabilities with generative AI. So I wanna come back to that in a second because that brings me to something that you and I have talked about before, which is who gains the advantage with AI, the attacker or the defender.
Mhmm. But before we get to that, I wanna ask a little bit more. To the extent that we know and I can imagine we don't know all the details because I'm sure some of these companies are being kind of secretive about how they used LLMs to find these things. Was there any kind of guidance or knowledge shared about, hey. If you wanna use an LLM to find a vulnerability, this is how you start the process or these are the kinds of prompts that you give.
This is a brand new research area, so I think everybody's just experimenting with different things and seeing what sticks. I know that some of these zero days found by this solo researcher in Finland, they've basically been found by fuzzing with LLMs. Okay. So use different fuzzing technologies, try all possible kinds of input everywhere you can put input in, and just try to crash the system. Yep.
Once you're able to crash it, then the large language model and the framework around it will try to figure out why it crashed. Okay. And weaponize, you know, that part of the code. Write the exploit to crash it. Yeah.
The right way so you get code execution. So if you say let's say, just taking fuzzing as an example. Mhmm. So just to kinda make it concrete for the audience. Right?
So a fuzzing technique is if you have a piece of code that has some kind of input that should be an integer between one and one hundred, you would say, well, what happens when I send negative one? What happens when I send a b c? Sure. What happens when I send pi, or, you know, two divided by three, or some kind of thing that doesn't really fall in that range? Which it wasn't designed for.
Right. Right. Right. And not a predictable input. Right.
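The kind of fuzzing Jeremy describes here can be sketched in a few lines of Python. The target function, the input generator, and the iteration count are all invented for illustration; real fuzzers like AFL or libFuzzer are far more sophisticated:

```python
import random
import string

def parse_quantity(raw):
    # Toy target: expects a string holding an integer between 1 and 100.
    value = int(raw)  # raises ValueError on non-numeric input
    if not 1 <= value <= 100:
        raise ValueError("out of range")
    return value

def random_input():
    # Mix of well-formed values and deliberately malformed ones.
    return random.choice([
        str(random.randint(1, 100)),                                        # valid
        str(random.randint(-10**9, -1)),                                    # negative
        "".join(random.choices(string.printable, k=random.randint(0, 40))), # garbage
    ])

def fuzz(target, iterations=1000):
    # Throw random inputs at the target and record every one that crashes it.
    crashes = []
    for _ in range(iterations):
        sample = random_input()
        try:
            target(sample)
        except Exception as exc:
            crashes.append((sample, type(exc).__name__))
    return crashes

if __name__ == "__main__":
    found = fuzz(parse_quantity)
    print(f"{len(found)} crashing inputs out of 1000")
</antml>```

In the LLM-driven variant discussed in the episode, the model would, roughly speaking, replace the random input generator with inputs it reasons about, and then analyze the recorded crashes for exploitability.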
And so the way I understand it, the fuzzing that was used here was fuzzing different image formats. Okay. Obviously browsers, and we're speaking about browsers here, have to be able to render different images, and image formats can be pretty complicated. Yeah.
So, you know, you can imagine that they would work fine when you send a normal JPEG into it, but then you take a 10 terabyte JPEG with really weird structures Yeah. And then you see what happens. And in some cases, you end up with crashes. And when you end up with crashes, in some cases, you can use those to gain access to the system. I I can totally see that being the case.
I think image files are actually kind of interesting because one of the things a lot of people don't understand is that some image files are binary and some are actually text files that get kind of rendered by the software that displays them to you. Like SVG. SVG. I think bitmap is also one of these if I remember right, but super old school. Yeah.
Showing our age a little bit. But yeah. So that's really interesting. Okay. So the question that is, I think, on some minds, and I looked at one example not too long ago of somebody who used an LLM to win a capture the flag exercise Mhmm.
Is that there is a little bit of an open question around, do you go more prescriptive with your prompts, or do you actually go less prescriptive and let the LLM try to reason out a way to fuzz on its own? Right. And do we know anything about that yet, about some of the techniques that have worked to find these problems, or do we not know yet? That's the thing that these companies and individuals who've successfully been able to do this so far, they're keeping a lid on a little bit. Of course, this is very valuable information.
Yeah. Yeah. Yeah. Everybody wants to be, you know, making groundbreaking new research. Yeah.
Yeah. It's a business advantage for companies in the industry. So there's a lot of development going on, but we don't have full visibility on what's really happening. Yeah. And it's funny because, you know, there was a lot of talk when generative AI started being able to, let's say, develop reasonable running code.
Mhmm. And you would hear these things like, well, we're not gonna have software engineers in the future. We're going to have prompt engineers. Right. And that's an actual example of prompt engineering is figuring out the right way to give it the the requirements and the criteria and the instructions to develop the thing that you're looking for.
And then, of course, the next step after using large language models to fuzz code or look at the source code and find the bugs Yep. Is that you just give it a binary Yeah. An EXE file. Yeah. And, you know, reverse engineer this binary, figure out how it works, find all the bugs.
Yeah. And that's gonna be possible as well. Probably, it's gonna take a while, but I think it's gonna happen. Yeah. There's another thing around this that I think is kind of interesting, which is, you know, we talked about source code.
We talked about binaries. APIs have also been part of that, and there's even a company, I can't remember the name now, and I feel ashamed because I've given this talk a few times recently, but there are companies specializing in building LLMs to write code specifically to create or consume APIs. So a very narrow use case, but it actually unlocks a ton of functionality. If you think about, you know, APIs as this thing where I could take 10 APIs and stitch together a piece of software or a service out of them. Mhmm.
So, yeah, a lot of room there, to your point. I mean, a lot of very valuable information in being able to do this right. Sure. I wanna come back to that question that I mentioned. So we've talked about how whoever gets the advantage is whoever kind of embraces it first Mhmm.
Learns how to master it first, and it being AI, obviously. Do you feel like the defenders have now understood this and are actually charging ahead in a meaningful way? Yes, we are going to use LLMs, and we're going to use them in meaningful ways. I would say yes.
And I think defenders are in the lead. Okay. Quite clearly in the lead at this stage, and that's something we rarely see. Like, quite often, when we have new developments, we see the attackers Attackers always. Use the technology first.
You know, for example, cryptocurrency. That's a great example. The first time I heard about Bitcoin was when it when it was already being used in rogue mining attacks. Right. Which is really telling that I I I follow technology pretty carefully.
Yeah. And I hadn't heard about blockchain at all until I ran into malware, which was Yeah. Mining already. So this is the rare exception. There's much more things that are being done by the good guys with generative AI than by the bad guys.
Now bad guys are moving. We are seeing of course, we've seen deepfake attacks, especially consumer-targeting deepfake attacks, for quite a while. We've seen political attacks. Yeah. We've seen automation of consumer scams, especially romance scams, which are really heavy scams to pull off.
Yeah. It might take weeks or months to, you know, nurture the victim. And relationship building and very human-like interactions. And, I mean, historically, they were done by humans. Right?
Exactly. Which means one scammer could only scam one victim or maybe two or three. Sure. But with LLMs, you can scam a hundred thousand at the same time in all languages. Yeah.
And this is already happening, and this is really worrying. But, frankly, a big part of the generative AI attacks we've seen have been targeting consumers. And, of course, in a place like RSA, this is not really about that. This is about enterprise security and corporate security and business-to-business solutions. There's one gang in the ransomware space that I know of which we believe is using generative AI, not just to help them write the code for their ransomware malware, but also to run the campaigns.
And interestingly And when you say run the campaigns, you don't mean just, let's say, writing the emails, you know, the phishing emails to get the initial access credentials and things like that? No. What I mean is to react to defenders. Oh, okay. So continually monitor how defenders are blocking or detecting their attack and then react immediately at machine speed.
So Okay. Basic example: ransomware needs a control server on some IP address, and it gets blocked. So it automatically detects, hey, we're getting blocked. Let's change the IP address.
Let's change the domain. Yep. Let's recompile the binary. Let's avoid detection. And Yep.
This, again, used to be done by humans at human speed. Now they are reacting so quickly Yep. That, it seems to be automated. Yeah. Yeah.
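One way to picture the machine-speed reaction Mikko describes, from the defender's side: if replacement infrastructure consistently appears faster after a block than any human operator could plausibly manage, automation is the likely explanation. A toy heuristic in Python; the indicators, timeline, and 15-minute threshold are all invented for illustration:

```python
from datetime import datetime, timedelta

# Assumed fastest plausible manual turnaround for standing up new C2.
HUMAN_FLOOR = timedelta(minutes=15)

def likely_automated(blocked_at, replacement_seen_at, floor=HUMAN_FLOOR):
    # Replacement infrastructure appearing faster than a human could react
    # suggests a machine-speed rotate-on-detection loop.
    return replacement_seen_at - blocked_at < floor

# Hypothetical timeline: when each indicator was blocked, and when its
# replacement was first observed.
events = [
    ("198.51.100.7",   datetime(2025, 4, 28, 10, 0), datetime(2025, 4, 28, 10, 4)),
    ("evil-a.example", datetime(2025, 4, 28, 11, 0), datetime(2025, 4, 28, 11, 2)),
    ("203.0.113.9",    datetime(2025, 4, 28, 12, 0), datetime(2025, 4, 28, 14, 30)),
]

flags = {name: likely_automated(blocked, seen) for name, blocked, seen in events}
print(flags)  # the first two rotate within minutes; the last looks human-paced
</antml>```

A real detection would need far more signal than timing alone, but the timing gap is the telling part of the story here.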
This is the thing we've been waiting for. It seems to be happening. And the interesting thing is, I'm referring to a group called FunkSec. And it seems to be from Africa. Interesting.
It is interesting. So Not the parts of the world that we've heard about ransomware gangs from in the past. We hear about 419 scams and the Nigerian scammers and so on. But yeah. BEC and CEO scams.
A lot of scams coming from Africa, but ransomware really hasn't been a problem. No. That's that's been Eastern Europe. That's been Asia. That's been Middle East a little bit.
But, yeah, that's really interesting. When you think about the wisdom that cybercriminals come from geographical areas where you have a lot of people with skills but without opportunities. Yeah. And, of course, Africa would also fit this description. Yeah.
Yeah. Yeah. I mean, this is, I I mean, ironically, you know, not to get super geopolitical for a second, but when we think about, you know, the leading causes of immigration and all the political friction that it creates, it primarily comes down to exactly the factor that you said. You have a lot of people with skills but without economic opportunity. And, you know, largely speaking my personal opinion is people leave for better opportunities and better lives for themselves, their children, and their families.
On the geopolitical side, what were some of the themes that we had here at the event this year? Well, one thing which several, keynote panels touched upon was North Korea and especially the North Korean remote workers. Yep. Yep. This is something that a lot of companies are worried about.
It's a bigger problem than we've realized. This has been happening a lot. So the problem in itself is that North Korea, the government of North Korea, tries to get their people inside US based technology companies to work as coders. Yep. And there seems to be two different reasons for this.
Number one, simply to get money into the most sanctioned government in the world. You know, tech developers in Silicon Valley companies make a lot of money, so they bring money in. And then the big worry, which hasn't been happening as much, but which creates a lot of anxiety here, is that they gain access. Into the companies. Yeah.
And a little bit surprisingly, it seems that they haven't been using that too much. They haven't been stealing a lot of information. Yeah. They clearly so far have just been, like, making money by, you know, posing as a remote worker, typically from Poland or some Asian country or somewhere over there. And they might even have, like they've hired people to physically come to a job interview claiming to be the person, but it's just an actor basically.
We had one of these ourselves in our first year of existence. Yeah. We have a blog post up about it. You can find it. We interviewed a North Korean hacker.
Mhmm. And, you know, the backstory that we were given, by the way, was a, a half Irish, half Japanese person who had primarily grown up in Japan. That's why their English wasn't perfect, and that's why they looked Asian was because of the half, the mixed heritage. And, you know, a a completely believable backstory Mhmm. And a fake address in Ireland.
Alright. We were actually given the resume by a recruiter who recommended the person to us because they did what recruiters do. They scanned the CV. Mhmm. They looked for the particular skills that are in the job description, and they did a very introductory phone call just to make sure it's a real person.
And, oh, it says you have five years of Python. Yes. I have five years of Python. Awesome. It says you have two years of API development.
Yes. I have you know? And came to us and said, hey. This this really sounds like a good fit. It's only through the job interview process itself that we started to get a little bit skeptical.
There were a lot of things. I won't go through all of it because it's a long post. But I'm curious when you say it's happening more than we knew about because at the time, this was 2022 for us. Mhmm. We weren't the only company by any means.
And then, you know, last year, KnowBe4 actually admitted that they hired somebody, and they shared their experience, we shared ours, etcetera. When you say it's a bigger problem than we know about, how do we know and what do we know? Well, I'm quoting FBI here Okay. From from one of the panels. That's that's what they said.
They believe it's much more rampant, that there's more of these cases happening. They get more and more reports of this. So they are very worried that there's an underreporting of this. Not all of these get caught. Yeah.
It's a big problem. Yeah. And they were half-jokingly recommending asking applicants questions like, you know, hey. How fat is Kim Jong Un?
I heard this one. Yeah. Or something like that. Yeah. And just look for reactions or telltale signs.
One of the panel members said that they had an interviewee from Poland who couldn't pronounce his own name. So sometimes it's fairly easy to spot the evidence if you're just looking for it. Yeah. But this clearly is something that is happening a lot and is worrying a lot of companies in the tech sector here in the USA. Yeah.
We've had a couple of other not interviews, but resumes over the years since then. And now we're very sensitive to it after that initial experience. And, again, check our blog for more about it. Sure. I'll maybe reshare that posting on our socials with this podcast, but now we watch out for this stuff.
We've had a few more resumes since then that looked very fake and there's a few signs of that that you can look for as well. Okay. So that's the North Korea side. Anything else on the geopolitical side that were particular themes? Well, Russia, of course, is there.
Chris Krebs, of course, who very publicly had his security clearance Revoked. Revoked by President Trump, has been very vocal about this, and he's been pretty visible here. He actually said in one of the panels that he was hosting that he wasn't going to come to RSA at all because of what was happening. But then the chairman of RSA Conference convinced him to come, and I'm happy he was here. He received a standing ovation in one of the panels when he was discussing the problems he's been facing, because he used to be head of CISA, and he was fired by a tweet in the last Trump administration.
Right. I mean, in this Trump administration, he had his security clearance stripped, and then he left SentinelOne, where he was working. Yeah. He was working there, not just as an adviser, but actually working.
He was more than an adviser. And he wrote an open letter about how he has to take this fight alone. It's not a fight between SentinelOne and the administration, but between him and the administration. Right. Another interesting thing about geopolitics was that Dmitri Alperovitch, when he was being interviewed in one of the keynotes Yep.
Former CTO of CrowdStrike for those who don't know. Yeah. And founder. He was a member of the CSRB, the Cyber Safety Review Board Yep. Which is basically the cyber equivalent of the NTSB, which investigates real-world disasters.
The CSRB was well, basically, when the Trump administration came in, a big part of the budget was cut, and the advisory board is no longer there. And Dmitri, I think, as the vice chairman, is no longer in that position. And I was sort of expecting some sort of commentary or criticism from Dmitri on Trump, and he had nothing. He was actually fairly positive about these shifts and changes underway right now in the United States, which I wasn't really expecting. And I think the audience was also a little bit surprised about how the Trump administration got no criticism from Dmitri Alperovitch at all.
Interesting. And what were some of the things that he was, let's say, complimenting? I don't know if complimenting is too generous, but some of the things that he was kind of pointing out as positive points around the Trump administration causing other companies to invest more in things like national defense and cyber defense, or were there any specifics? He said there's a lot of things that haven't been working. It's okay to, you know, shake things up and, you know, make big changes.
And one thing he said in particular, where he actually prefaced it by saying that a lot of people will probably be angry at me for saying this, but he thinks that CISA doesn't need 5,000 people, and maybe 30 would be enough, which is pretty controversial. Yeah. But, you know, something along those lines. So clearly, he's linking back to the big shifts that have been made by the DOGE group run by Elon Musk. So Okay.
And I'm not disagreeing that there's a lot of spending in big governments, which probably could be used more wisely. Sure. But I'm not a fan of the Trump administration at all, and I guess a lot of Europeans aren't fans either. Sure. And that's another geopolitical thing that I've run into several times.
A lot of European people here at RSA are anxious about how much we can rely on US technology and our partners in the USA. I heard this point a couple of times. And in fact, on Monday, I was at a little side event, and there was a panel discussion of a few people from European leadership positions at various companies and investment firms and governmental and nongovernmental organizations. And it is very clear that there are not only concerns about, let's say, the reliance on American technology providers, but also around things like data sovereignty and the ability to regulate the data of European customers completely. And I think, you know, there are pledges by organizations like Amazon Web Services and so on that, okay, your data is yours no matter what, but then there's a concern that's been lingering in the back of many people's minds, which is, okay, but when that subpoena or that warrant or whatever comes, will the provider actually uphold their rights, or will they actually give in to the request?
I know Apple has been very public and pretty consistent in always upholding the customer's rights to their data privacy. And I think, you know, one of the things around this that some firms have done is just say, okay, look, it's end to end encrypted. Even we don't know what's in the data. Mhmm. We have no way of accessing that data no matter what.
Probably as a way to make the customer happy and give them some sense of, like, hey. Doesn't matter. We can't ever reverse engineer it. We can't unencrypt it no matter what. Which, by the way, isn't true when Apple says that or Microsoft says that there's no way for us to access the data.
The customers are accessing the data from an operating system created by Apple or created by Microsoft. So, a long-winded way of saying what I often think about these problems. If you don't trust the clouds of Apple or Microsoft, or if you don't trust, you know, Copilot created by Microsoft I get a lot of questions like, can we really trust Microsoft Copilot and put our corporate secrets in Copilot and ask questions about the data? And what I always tell these companies is that, well, it's a Microsoft product.
You trust Microsoft already. You run their operating system all the time. If Microsoft would want to steal your secrets, they could have done that twenty years ago. Sure. So they do have access to your data if they're willing to break all the rules.
Of course, they don't wanna break the rules and completely destroy their reputation, destroy their business model. Yep. But it's not true they wouldn't have access to all this data. They do because the the original source of the data is the operating system that they've created, and they have remote access to it through these online update systems. So during times of conflict, during during times of crisis, it's not completely out of the realm of imagination that operating system manufacturers would use their power to gain access to information on other people's computers.
But if you think about it, to your point about, a, I already have trust in the vendor, and, b, all of the steps that would have to happen to get to that point. Right? So that would assume that they would have to push out a Windows update, for instance, that has a remote access package in it, and you accept that, and they then get the orders, and they then act on the orders. It's the kind of thing where, you know, my father for years had a tattoo. Well, actually, for the rest of his life, with his blood type on it.
Okay. It was given to him during elementary school in Utah in the late nineteen forties or early nineteen fifties. Okay. And I always thought, what's the chain of events where this tattoo is useful? It's a Soviet invasion that gets all the way to Utah, of all places, not high on the target list; and there are attacks to the level that they're hurting children; and the children are so hurt that they need blood transfusions; and they need them so quickly that they need them right there on the spot, can't get to a hospital; and you have the availability of blood.
So you could look at the tattoo and say, oh, okay. B positive. Get me b positive blood right now. And so I understand what you're saying about, yes, they could get access to the data. I do think maybe the threat model around it is a little bit long and I agree.
But the point stands, you already trust these vendors. Yeah. And if they tell you that, yes, you can send your corporate data to Copilot, we're not gonna use it to train anything. We're not gonna leak any of that. Can you trust that promise?
Yes. You can trust the promise. If they were going to break promises, they could have done that in a thousand different ways. Fair enough. That's my point.
Last point about the geopolitical things I saw: John Fokker of Trellix had a great keynote on day two of the event on the big stage. It was great to see a European give a keynote at RSA. Of course, I'm biased because I did one last year myself, but he had a great talk about Black Basta. Black Basta is one of the largest ransomware gangs out of Russia. The talk was especially about the leader, Oleg, with very interesting details based partially on leaked chats from within the Black Basta ransomware gang. Yeah.
And the biggest, like, bombshell really was about the leader of the gang, Oleg, getting arrested in Armenia. These guys, for reasons that are beyond me, regularly leave Russia for a holiday. They might go to Dubai. They might go to Israel. They might go to Armenia.
He went to Armenia, and he was arrested. And then based on the leaked chats, representatives of a Russian embassy or the Russian foreign ministry flew to Armenia to have meetings with local law enforcement, and they left back to Russia with Oleg. And he's now a free man in Russia. And inside the chats, the internal chat of this crime group, Oleg says to the other gang members when he's back that, yeah, well, you know, number one wants me protected.
Yeah. And we don't really know what that means, but, of course, everybody thinks he's referring to president Putin. Is that true? Can we trust those chats? We don't know any of that.
But maybe it's possible. Maybe it's possible. Yeah. Nevertheless, it's interesting. So whenever we get these references that, you know, Russia protects cyber criminals who attack Russia's enemies, I mean, they target companies and entities and governments outside of Russia, maybe that is the case, and maybe it goes all the way to the top.
Who knows? Fair enough. Okay. That's the geopolitical side. We've covered that.
We've covered some of the AI and AI vulnerability research and a few other things. What else? Anything else that was particularly kind of interesting and exciting? Not really exciting, but it's amazing how big RSA is. Yeah.
45,000 people. The expo is just huge. Like, I don't know, 500 companies I've never heard of Yeah. Plus a thousand companies I have heard of. Yeah.
Yeah. It's just amazing. This industry is a real massive industry and just keeps expanding to new directions. Well, it's interesting. I I mentioned earlier, I've been giving this talk recently around a particular topic, and that topic is actually the development and securing of new technologies.
Mhmm. And one of the key points that I make in this presentation is that typically, every new technology brings a new set of risks and a new threat model with it. And usually, the existing tooling doesn't cover that new set of threats. And so if you think about, you know, the 500 companies that you've never heard of, well, yeah, that's because we're using new technologies. Right.
Right now, it's AI, but actually, it's been new stuff continuously for the last ten years. And we think about these big shifts like cloud, like SaaS, like whatever, but there are also little shifts that support and enable those technologies continuously. Sure. And so you have, you know, kind of little niches opening up and little new technologies here and there, and all of them bring in interesting new risks in many different ways. And it's great that the solutions to these problems are coming from new companies, new startups, not just the existing large players.
Yeah. That's where innovation lives. It doesn't live in these huge companies. It lives in new companies with new fresh ideas. I think by and large, that is a hundred percent true.
Awesome. So that's the state of the industry. That's some of the things that we've seen. Any top tips for people who are thinking about coming next year? Bring good shoes.
Yeah. We hear that one all the time. Yeah. It's it's it's really funny when I look at my steps on my iPhone, how many steps I've taken throughout the year. I have two peaks, which are clearly high.
RSA and black hat. RSA and black hat. Yeah. It's amazing how much you walk around in these things. Yeah.
And, of course, come a day earlier than you think would be necessary, especially if you come from Europe. From Finland to San Francisco, it's a ten-hour time difference. It's a killer. So give yourself some time to recover. Yeah.
I agree. And, also, I mean, look, there's some great tourist stuff to do here in San Francisco. There's some good food. There's some good friends. I I always find one of the things, and I I kinda joke with my wife about this.
You know, I've worked in cybersecurity a total of about twenty years in my career. I took a couple years where I really focused much more on the IT side, a little bit of data stuff that I did as well, before kind of coming back into cybersecurity. And the one thing I say to my wife, because she asks, you go to this conference, yeah? How many sessions did you go to? How many talks did you attend?
And, actually, this year, the answer for me was zero. And it's been that way for the last couple of years, but the connections that you make, the side conversations that you have are very valuable. And the other thing that I always find is interesting working in cybersecurity, my hug count Okay. At every conference is higher than anywhere else that I've ever worked. Right.
Right. Right. And the sense of community, in the sense of kind of shared mission because, you know, we're all on the defender side here at this conference. We're all trying to help our customers have the best security. Sure.
I don't think that I've ever seen a vendor that wanted to sell somebody a piece of vaporware or something that didn't actually solve a problem. I have seen pieces of software that weren't as effective as other pieces of software. Sure. That kind of thing happens. Right?
But I've never seen any malintent. I've never seen anybody who wasn't genuinely working on building something interesting because they were trying to solve a problem for the customer. Sure. Sure. And that is something that I really admire about the space.
In the end, individuals and companies working in cybersecurity are in this business to help the ones who need it. Yeah. Yeah. Certainly, this is a business. Companies need to make a profit.
But in the end, we use our expertise to help the people who need our help. Yeah. That feels good. And that does feel good. And it's something that I think I saw you talking in Australia, if I remember right Mhmm.
Where you thanked the audience for their contributions and the work that they do because it is very often a thankless job. Sure. It's the kind of job where everybody takes you for granted day after day after day until something goes wrong. Right. And then it's not generally a thank you for your service.
It's a, hey. What happened? What did you guys do? When you prevent a disaster, no one's gonna know. Right.
So thank you to the people who spend their time and expertise to help prevent the disasters. And thank you for taking the time to join us, and thank you to our audience for watching or listening to this episode of Modern Cyber. Remember to stay tuned for the next one. If you've got any topics you want us to cover, if you've got somebody you think should come on the show, just reach out. We're happy to have you.
Talk to you next time.