In this kick-off episode for 2026, Jeremy is joined by the legendary Mikko Hypponen, Chief Research Officer at Sensofusion, for a comprehensive retrospective of 2025 and a look ahead at the future of AI-driven threats. Mikko, now a "Mount Rushmore" guest of the show, shares insights from his transition into the anti-drone space while reflecting on a year defined by massive infrastructure disruptions.
The duo discusses the staggering impact of 2025 ransomware incidents, most notably the Jaguar Land Rover breach, which halted production for six weeks and cost an estimated £1.5 billion. Mikko argues that these events prove cybersecurity is no longer just about protecting computers—it’s about securing society itself. They also break down the "random shotgun" nature of modern attacks, where gangs like Clop and Akira target vulnerabilities rather than specific industries or geographies.
Turning to AI, Mikko provides a reality check on the current state of deepfakes and automated orchestration. He reflects on the first massive AI-orchestrated cyber espionage campaign of 2025 and explains why the battle between open-source and closed-source models will define the next phase of defense. Finally, they examine how "data is the new oil" and AI is the "new oil refinery," creating a dual-extortion landscape where the risk of data leakage often outweighs the cost of downtime.
About Mikko
Mikko Hypponen is a world-renowned global security expert, author, and speaker with over 35 years of experience in the industry. In August 2025, Mikko transitioned from his long-standing tenure at WithSecure to become the Chief Research Officer at Sensofusion, a Finnish company specializing in advanced anti-drone technologies.
Mikko has assisted law enforcement in the U.S., Europe, and Asia on major cybercrime cases since the 1990s and is the curator of the Malware Museum at the Internet Archive. He is the author of the best-selling book "If It's Smart, It's Vulnerable" and a frequent contributor to The New York Times, Wired, and Scientific American. In addition to his role at Sensofusion, Mikko serves as an advisor to Firetail.
All right. Welcome back to Modern Cyber. I hope you all enjoyed the break. Depending on what order you listen to this, this will either be the first or the second episode published in your feed for twenty twenty six, and I am thrilled, as I think this is the fifth or sixth time we've been lucky enough to have Mikko on the podcast. I think you're now on our Mount Rushmore of Modern Cyber guests, so you're definitely in that top four group. Mikko, thank you so much. Happy New Year. Thank you so much for taking the time to join us.
Thanks, Jeremy, and thanks for having me. And of course, I'm always happy to be back with Firetail as I am an advisor for the company as well.
Yeah, awesome. And we really appreciate all of the advice and all of the time and wisdom that you've shared with us over the years. Speaking of wisdom, let's get into a little bit of a recap of twenty twenty five. I don't want to go through everything, because that would take a long, long time, but let's go through a couple of the things that mattered over the course of the year. I guess, just to start off: is there any one big cyber breach or incident that stands out in your mind as the most notable of the year?
Well, there could be several candidates for this, but the first ones that come to my mind are the massively large outages caused by ransomware incidents, especially in the UK. So early in the year we had Marks & Spencer, and there were a couple of other incidents in the UK as well. And then towards the end of the year, around September, October, Jaguar Land Rover, which had a very significant effect on their actual production. Cars were not rolling out of the factories because the company was hit with ransomware. And the overall cost of all of this, well, there are many ways to count it, but one figure that has been repeated is one and a half billion pounds. And that's a lot of money.
That is a lot of money. And certainly that number jumps out, but another one from that same incident that really jumped out to me was production: manufacturing facilities shut down for six weeks. But it underlines the fact that we all know, which is that computers run everything around us. Yeah. I've often told people that everything changed during my career, because it used to be that computers were computers and the real world was the real world. I used to think for years and years that I am a computer security guy, so my job is to secure computers. Then at some stage throughout my career, I realized that our society is run on computers. So for those of us who work in computer security, our job is not to secure computers. Our job is to secure society.
Yeah. You know Michael Steed, who is the chairman of Paladin Capital Group (and disclosure: Paladin is our lead investor here at Firetail), has a saying that he likes to use, which is that cybersecurity is national security. I think that's very parallel to what you just said, whether it's the nation state or society, or honestly, you could also just say cybersecurity is business security. It keeps your business going. It keeps you in business, so to speak. But six weeks of no ability to churn out new vehicles. And I know that making cars is not the only thing that a company like Jaguar Land Rover does. They also sell cars and service cars and lease cars, and they have financial income streams from leasing activities and financing and so on. So it's not their only source of revenue, but it is their major source of revenue.
And for most car manufacturers, it's got to be their number one source of revenue. And when the top management, the leadership team, and the board of a company like this look at an incident like this, I think the best lens for cyber risk is to think about it as any other risk the company faces. Companies like Marks & Spencer, companies like Jaguar Land Rover, obviously do a lot of work on risk management. They think about fire risk, floods, natural disasters, and things like that. And certainly all of those are serious risks. You have to prepare for them. And we know how to prepare for fire risk. You have sprinklers, you have firefighting equipment, you train your people, you have rehearsals, and you make sure you have insurance against fire risk.
Now, cyber risk is no different. You train your people. You build the facilities. You put the systems in place. Maybe you get insurance. But it's the likelihood of the risk that we should focus on here. How likely is it that someone comes over and burns down your factory? Or even better, how likely is it that someone tries doing that over and over and over again until they succeed, for weeks or months? Unlikely to happen in the real world, that somebody would try to burn your factory over and over again. However, when you look at cyber risk, when you look at ransomware, the crew behind this attack was linked to ShinyHunters, which was behind many of the ransomware attacks we saw during twenty twenty five, and that's exactly what they do. They will scan your network every week, every day. They will try to find an unpatched server, or they will try to find something that they could exploit. And if they find a way in, they will shut down your operations just as effectively as a fire or a flood.
Yeah. One thing that occurs to me, and I'd love to get your perspective on this: I wonder if some of our audience is listening right now and they hear the first two names that we rattled off, Jaguar Land Rover and Marks & Spencer, and they think, those are UK companies, so the risk is in the UK, and British companies are clearly being targeted. But I don't think that's the case. What would you say to somebody who raised that question to you?
Oh, I have a great eye opener for people who don't think that they will be hit, or who think it's a problem for foreign companies, for big brands, or for much larger companies than theirs. The best thing you can do to open your eyes is to go to the leak site of any major ransomware gang. Go to the website of Black Basta or Play or Akira or Clop and just look at the victims, because they list the victims on their website. You can look at how many victims this week, this month, this year, and you will realize that there are hundreds and thousands of victim companies from all business verticals. Companies and organizations of all sizes. Private sector, public sector, small, medium, large, enterprise, right next to each other. You have a furniture manufacturer from Denmark, then you have a bank from Vancouver, then you have a steel mill from Sao Paulo, Brazil. It's like somebody had shot at the internet with a shotgun. That's how random it looks.
And the reason why it looks so random is that these gangs, gangs like Clop or Akira, don't choose their targets by geography or by business vertical. They have a vulnerability. Like, here's a VPN server from three years ago; if you haven't patched it, we have a way in. Let's scan the whole IPv4 address range to find all companies and organizations with this vulnerable device. That's how they find their victims, and that's why it looks so random. That's why you have small, large, and enterprise-level companies from all possible areas. And this means that yes, your organization will be targeted, even though nobody knows about you, even though you are in a far away country, even though you don't think they would find you. They will find you.
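To make that scanning model concrete, here is a minimal, purely illustrative sketch (my addition, not something discussed in the episode) of the same technique pointed at your own address space: sweeping a network range for hosts that answer on a given port, the way affiliates sweep the whole IPv4 space for one known-vulnerable appliance. The range and port below are placeholders; run something like this only against networks you are authorized to test.

```python
# Illustrative sketch (my addition, not from the episode): checking your OWN
# address range for services exposed on a given port, the same way ransomware
# affiliates sweep the whole internet for one known-vulnerable appliance.
# Run only against networks you are authorized to scan.
import ipaddress
import socket

OWN_RANGE = "203.0.113.0/28"  # placeholder: your organization's public range
PORT = 443                    # placeholder: e.g. the port of a VPN appliance

def is_listening(ip: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if something accepts a TCP connection on ip:port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    exposed = [
        str(host)
        for host in ipaddress.ip_network(OWN_RANGE).hosts()
        if is_listening(str(host), PORT)
    ]
    # Anything in this list is reachable from anywhere on the internet and
    # should be inventoried and patched before someone else finds it.
    print(f"{len(exposed)} host(s) exposing port {PORT}: {exposed}")
```

The point is the asymmetry: this loop over a small range takes seconds, and the same loop over the entire IPv4 space is just a bigger list, which is why "nobody knows about us" is not a defense.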
Yeah. And I think another counterpoint to this argument is that, well, take one of the names that you mentioned, ShinyHunters. ShinyHunters also claims to be behind the breach of Salesforce, and I think that's where most of the evidence points. And Salesforce is a big global technology brand, an American company, publicly traded, I would say probably among the top ten most valuable software companies in the world at this point. And they are also able to be breached by one of these same gangs. And to your point, it's one vulnerability. In their case, it looks like it was the result of, I don't know if you would classify it as a supply chain attack or third party risk, but my understanding is it was via a set of credentials on a third party system called Drift that connects into Salesforce. And using that access, they were able to penetrate inside the Salesforce organization, and from there able to compromise tens, if not more, of other organizations who run on or have data stored in Salesforce.
Sure, sure. And I guess we should underline what you just said here. Technically, Salesforce itself wasn't hacked. It was the data, hacked through a third party connector that companies used to better manage their Salesforce usage. But it's also a great example of how everything is connected and how the supply chain, in many ways, is the crucial part that we need to be able to secure. We often think about supply chain in software as referring to the code. So if you run open source software, you have to be able to secure it somehow. But you could also think about anything that you use, all the third party services or third party software, including things like this Salesloft Drift application that your sales team would use to use Salesforce more effectively. That was breached, and that opened up the floodgates to the actual data. And the data is what the attackers want. They don't care whether they get it by hacking Salesforce itself or by hacking a Salesforce client through a third party app, as long as they get the data. That's what they're after. And this is a great example of one of the bigger stories of twenty twenty five. Salesforce, indeed, like you said, is one of the biggest software companies on the planet. All large companies use it. And the crucial part is the data. Data indeed is gold, or the new oil, as we often say. And in the age of AI, it's even more valuable.
I want to come back to that point about data being so valuable in the AI age. But before we go there, I want to touch on one more point related to this kind of ecosystem effect, right? One organization being compromised then compromises tens of end customers at the same time. Twenty twenty five was also a year when system outages showed the interconnected nature of the ecosystems that we have. First, I think it was AWS, and then Microsoft Azure, and I think there were a couple of others; Cloudflare had an outage a couple months later, if I remember right. I'm losing the timeline a little bit on some of these incidents. But at any given point, one of these events, the numbers were staggering, right? Something like twenty percent of the internet down, thirty percent of the internet down, ten percent of the internet down. And if you think about the millions of online services, ten percent of services being down means thousands of organizations impacted at a particular point in time.
But this is also the nature of how all these things are connected to each other. You don't build every service on your own anymore. It doesn't make any business sense to do that. You rely on these third party services. And that's why, for instance, at Firetail, we've always stressed to people the importance of their APIs and their connective tissue into other services as being both a crucial attack surface and a crucial stability point. If you think about the CIA triad, it's not only about confidentiality and integrity, it's also about availability. And that availability, I think, is something that had a lot of people scratching their heads with these major outages.
There's a silver lining, though. Yes, we saw a surprising amount of large scale outages during twenty twenty five, but none of them were attacks. All of the cases that you mentioned were internal misconfigurations or errors of some kind. And I have to give credit to Cloudflare for the openness and for the postmortems that they publish on exactly what went wrong. Nobody publishes them faster than Cloudflare. They typically have it out like two days later, and it might be written by Matthew Prince, the CEO himself. So they really give top priority to these cases, as they should. Cloudflare's business is to keep availability running. And if they fail, they really have to be able to recover from it. And the best way to recover from it is through openness. And they've succeeded in doing that.
Yeah, that level of customer trust is something that I think shouldn't be underestimated. Every system will have a failure at some point. I think that's pretty well proven over the course of human history, certainly computer history. Nothing is one hundred percent foolproof, right?
I have a story on trust, because you don't really lose trust by getting hacked. You lose trust by getting hacked and then lying about it. That's when you lose trust, and that's when you should lose trust from your customers. As soon as you start lying, you deserve to lose the trust of your customers. And I remember a case, which I wasn't working on directly, I was observing it, with a Norwegian software company in the business of oil and gas, making software solutions for the Norwegian oil and gas industry, which went public early in the year. This was a couple of years ago, but they had only been listed on the stock exchange in Oslo for like three months when they got hit with a ransomware attack, and you could tell from their stock value: as soon as the market opens, their stock plummets fifteen percent.
And there's no other news. It's the fact that there's some kind of an outage, and nobody yet knows what's happening. And this company really got it right on how to communicate, because they had their first open Zoom call, open to the world, about the outage like two hours into it, where they basically have the CTO and CEO saying: we know nothing, we've been hit, everything's down, we don't know who did it, we don't know what's lost, but what we do know is that we will tell you more in two hours. Two hours later, another meeting: now we know what's down, now we know what we can recover and what we can't recover, and you'll know more in two hours. And they had this very radically open communication about what's happening. And the way you could measure the success of this open communication from the top management in the middle of a cyber crisis was the share price. It dipped in the morning, recovered slowly during the day, and ended the day a little bit higher than where it started, while they're down, while they're out, while they're non-productive as an organization, with no revenue generating activities going on. Their share price goes up, reaches the level of the morning, and then goes a little bit higher. And the difference between the start of the day and the end of the day, that's trust. You can measure it right there. That's building trust. The shareholders don't like the situation, but they believe in the company. Certainly there's a problem, but clearly these guys know how to pull out of it. And they did.
So just to be clear, do you or don't you recommend taking your company offline as a business strategy to boost shareholder value?
No, I don't endorse doing this on purpose. But if it happens, radical openness is the way to handle it.
Yeah, I think that's a great point. I want to circle back to the topic of AI, and I know we've made it about sixteen minutes into the conversation before focusing on this, so I count us in good standing in terms of not just leaning into what is, of course, the number one topic of the hype cycle. But there are a couple of things that really did happen in twenty twenty five. A year ago, when we did our end of twenty twenty four recap, we talked about how AI powered attacks were probably going to start becoming the norm. I don't know if you've spent a lot of time looking at this specifically, as you've gone through a career transition of your own over the past year, but what's your current observation or your current thinking about threat actors using AI powered attacks as a regular tactic now?
And the answer is obviously yes. Yes, they are. Cybercriminals are no different from the rest of us. They use computers, they write code, they run their own projects. They have their own bosses. They have their own working hours. It's like any other job, except it's a criminal job. And obviously they will use these tools, just like we use these tools to rewrite our text, to draft emails for us, to run tests on our code, to write our code, to do QA, to start new framework developments. And if the framework you're developing is a new version of ransomware, that's no different from any other software project. So they will use, and they have used, these tools. But it isn't, how should I say it, groundbreaking, in the sense that it's no different from the way we use AI in our daily lives.
Yeah, well said. And something I always like to tell people just to make it a little bit more concrete: hackers have automation, hackers have AI, hackers have cloud. Now, the credit cards they use to pay for these services might be stolen, but they have access to all the same tooling that you do. So if you think you have a tools advantage, unless you have some proprietary technology that's not commercial off the shelf, I think you're kidding yourself.
Yeah, that's true. Now, obviously there are attacks which are very specific to AI itself. For example, deepfakes, which clearly are a problem we didn't have before generative AI, with the image and video and voice generation as we have it today, became available to everybody. The barriers to entry have completely collapsed. Everybody can do convincing real time deepfakes. But I'm still a little bit surprised how we haven't seen more attacks like this. We've seen cases of this happening, like the CFO calling with a real time deepfake. There's a handful of examples, but not nearly as many as you would think, and I can't really explain why it hasn't become bigger. The vast majority of deepfakes we see in criminal use are targeting consumers. It's the fake Elon Musk or fake Donald Trump pushing a cryptocurrency scam or investment scam. It's not the CEO or CFO trying to steal millions by changing the account numbers of their companies.
Yeah, one-offs here and there, and those have certainly been well publicized. Something that strikes me when you ask why we haven't seen this kind of thing grow to scale: I actually think there is a little bit of a parallel between legitimate business and criminal enterprise, in the sense that we're still in the very early stages of learning how to take this technology platform and make it production ready. The consumer use cases for AI are well ahead of the LLM powered application use cases. Meaning, if you go to a lot of organizations today and you say, hey, Mikko, how are you and the team at Sensofusion using AI? You guys might be an outlier, I don't know, but let's just take the average organization. Probably the answer you're going to get back is, well, there's Michael in R&D and he's doing these things, and he's maybe using an enterprise version of Anthropic Claude or ChatGPT, and he uploads some proprietary documents and works with those documents, maybe with one or two collaborators. But what you hear less is, well, we have five purely agentic, software powered agents, if you will, that are autonomous, that are doing tasks on behalf of the organization. You're just hearing a lot less of that, because I just don't think many companies are there yet. And I actually think the same is true of the criminals. I just think the criminals haven't got it to production quality at that level yet either.
I think you're exactly on point here, Jeremy. We often tend to overestimate the speed of major technological revolutions and underestimate the size of the revolution. And the great parallel to where we are in AI today is what happened with the internet in the nineteen nineties, when the internet became a thing. All companies wrote an internet strategy: what is our strategy for the internet? And for the first years it was about setting up a website. We need to be on the web. We have to have a website, so it's like a virtual bulletin board that our clients can come to and read about our products. It took years and years for the internet to be integrated into the core operations of companies, where everything is being run online, and all the sales are happening online, with automated integrations to shipping companies and everything else.
And that's the thing that's about to happen with AI. You speak to companies today and they tell you about how they have written their AI strategy, just like they wrote their internet strategy in nineteen ninety four. Well, today, no company has an internet strategy. They just have a strategy. That's what's going to happen with AI. It's going to be integrated into the core functions of companies. But like I said, we tend to overestimate the speed and underestimate the size. It's going to be a bigger revolution than we can imagine, but it's going to take longer than we think.
And do you think it's going to be one of these things where, you may have heard the expression, the future is already here, it's just unevenly distributed, where different organizations have different levels or depths of ability to embrace a new technology? And to that end, what we're going to figure out is, oh, AI really makes the most sense for software companies, e-commerce companies, online streaming companies, media companies, whatever the case may be. And then within those companies, it's going to make the most sense for product teams, finance teams, whatever. It's going to filter down into core industry verticals and then core use cases within those verticals that are really the first movers. Because I would argue, for instance, that here we are at the beginning of twenty twenty six, and the first commercial internet applications went online, let's say, sometime around ninety seven, ninety eight, early enough that I remember it happening, and I imagine you do as well. For instance, ninety seven, ninety eight was probably the first time that I put a credit card into a website and actually made an online purchase of any kind, and I think my first ones were probably on eBay, if I'm remembering right. So we see, okay, e-commerce was one of these early movers on the internet side. And then within e-commerce, you realize, well, it's actually listings and then purchases, but fulfillment, for instance, was still a very decoupled, pretty manual process for years to come. So I just wonder if it's going to play out in a similar kind of cycle, where again, verticals and then use cases, and then over the next five, ten, fifteen years it becomes deeply integrated into the whole technology landscape.
Yes, except it's going to be faster than that. This is the fastest moving technological revolution we've ever seen, and it's almost scary how fast it moves. ChatGPT, not GPT itself, but ChatGPT, is three years old. GPT-2 is four years old. So that's nothing. This is moving very, very quickly. And it's not just in the hands of one company. As we know, all the big players have amazingly good technology already. And the technology that's already available to us is frankly amazing. If you had shown current versions of these large language models or image generators to an AI researcher ten, twenty, thirty years ago, that's exactly what they were imagining could happen one day. And it's no longer imagination. We have access to all these tools right now. But these tools are just scratching the surface. These are tools.
When technologies like these are integrated into the operations of companies, everything changes. And there are changes which are easy to see, and then changes that are very hard to predict. I'll give you an example. When the web became the norm, it was fairly easy to see that eventually video streaming would be possible. We didn't have the bandwidth, we didn't have the technology, but it was obvious that eventually we would, so one day we would watch movies online. In nineteen ninety four it sounded incredible but possible that eventually the bandwidth and the technology and the resolution and the IPR, the rights for the movies, would be figured out, and we would have Netflix like we have Netflix today. Then there are the unpredictable things. It would have been hard to predict things like food delivery at the scale we have today, and we have things like DoorDash and Wolt today thanks to the internet. The fact that everybody has a device in their pocket which knows your location, which has a payment mechanism, which has real time communication systems, yes, that's all because of the internet and because of the mobile internet boom. But imagining how it's going to change the way we eat would have been hard to see in nineteen ninety four.
Yeah. And as you were saying that, one industry that really came to my mind, and maybe it's because I'm just back from a trip, is travel. It's one of those industries where, for instance, I don't do anything physical anymore for my travel, really. I don't remember the last time I had a paper ticket; it's definitely been more than ten years. But we're now at the level where, in one of my recent hotel experiences, I didn't even go to the front desk. I got a digital key pushed into an app that's enabled by Bluetooth, my bill came to my email as a PDF, my credit card was taken inside the app. There was almost no human interaction. And imagining that we would get rid of room keys and the check-in desk and the checkout process and all of that would have been very hard to foresee back in nineteen ninety four.
Sure, sure. And the world is changing very fast. I remember international trips I did in the early nineteen nineties, before the internet, which means you can't Google for car rental places, or where you're going to stay in a foreign country, or where you're going to rent a boat. You have to call an international directory service to get a phone number in a foreign country, and then try to speak to someone in that foreign country to book the boat you want to rent, and then you have to drive there with a physical map, with no GPS. It feels like a different planet. And this is during my lifetime. I remember doing it like that.
Yeah, totally. I mean, I remember I did a six week road trip with some of my high school friends with physical maps only, and there were many times when you had to learn to navigate: you go from the physical map to stopping at a gas station or a fast food restaurant, whatever the case may be, to ask for directions. You learn to get by. These are bygone times that the digital world has come for, and there really is no going back unless you specifically set yourself a challenge like that. In ten years, we'll be having this discussion about something that AI changed that we can't imagine right now. It's going to change it completely, and in ten or fifteen years we'll be imagining how we used to do something without AI.
Well, it's funny. Even just speaking of travel, I was speaking to a friend of mine, and he told me that his trip planning is now much more AI powered than it is Google powered, or powered by other search engine equivalents, comparison shopping sites, or travel portals like Expedia. It's really much more now about using an AI service to figure out: where do I need to go? What part of town do I want to stay in? All of these types of things are very much coming from AI now.
Very true, very true. And one thing I wanted to mention, as we discussed Marks & Spencer and Jaguar Land Rover, which are British companies: the one huge incident during the year that we forgot was Asahi Beer from Japan. They had their beer factories shut down for extended periods of time. I read that they wouldn't be fully recovered until February twenty twenty six. And that's a huge operation, big enough that it's not just Japanese beers which were disrupted. Peroni, the Italian beer, which is very large, and Pilsner Urquell from the Czech Republic were also disrupted. So when beer deliveries get disrupted by ransomware, it's pretty bad.
Yeah. Well, coming back to the point that I raised about us being in the early days of production use cases: I will say twenty twenty five is the year that we saw the first massive AI powered cyber espionage campaign. And to be fair, AI powered is maybe missing the key point; this is AI orchestrated. So this is the first case that we see where threat actors used the Anthropic Claude platform. And Anthropic, also to your point earlier about openness, has been very open about sharing all the data that they can. I personally have a little bit of a suspicion that there's some data that was shared with law enforcement that may not have been shared publicly, and that's probably right as it is. But a nation state actor, with a high degree of certainty a Chinese nation state actor, manipulated Claude Code into attempting infiltration into thirty global targets and succeeded in some cases. And they go on to describe the different use cases.
And interestingly, they talk about the human in the loop effect in this case, where there are specialized agents for: hey, go map this network infrastructure; hey, go look for vulnerabilities. Agent one, agent two, agent three, with use cases for each of these targeted scenarios. And by the way, I did a talk last year about some AI powered analysis that we've done here at Firetail, and we have found this to be consistently true. It's really interesting to observe, and I'd be curious about your thoughts on this. You're tapping into the same foundation model, but if you give it one broad prompt, hey, use all the capabilities of the LLM and do not only network mapping but also vulnerability enumeration and malware design to exploit that vulnerability, you get much worse results than if you do agent one, network mapping; agent two, look for vulnerabilities; agent three, exploit generation. And that was really interesting to read about. We've got that linked from our website, and we've talked about it on Modern Cyber in the past. But I'd just be curious: any observations, any thoughts about lessons learned out of that incident?
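As a rough illustration of the narrow-scope pattern being described (my sketch, applied to a defensive log-triage task rather than the intrusion pipeline in Anthropic's report), the orchestration can be as simple as chaining single-purpose prompts; `call_llm` below is a hypothetical placeholder for whatever model API you actually use.

```python
# Minimal sketch of the narrow-scope orchestration pattern described above,
# applied here to a defensive log-triage task rather than an intrusion.
# `call_llm` is a hypothetical placeholder for whatever model API you use.

def call_llm(prompt: str) -> str:
    """Placeholder: swap in a real model call (hosted API or local model)."""
    raise NotImplementedError("wire this up to your model of choice")

def run_agent(role: str, task: str, data: str) -> str:
    """One narrowly scoped 'agent': a fixed role plus a single concrete task."""
    prompt = (
        f"You are a {role}. Do exactly one thing: {task}.\n\n"
        f"Input:\n{data}\n\n"
        "Answer concisely."
    )
    return call_llm(prompt)

def triage_pipeline(raw_logs: str) -> str:
    # Agent one: reduce the raw material to a structured summary.
    summary = run_agent("log analyst", "summarize notable events in these logs", raw_logs)
    # Agent two: work only from the summary, looking for anomalies.
    anomalies = run_agent("detection engineer", "list events that look anomalous", summary)
    # Agent three: turn the anomalies into concrete follow-up actions.
    return run_agent("incident responder", "propose follow-up checks for these anomalies", anomalies)
```

Each agent only ever sees the output of the previous step, which mirrors the scoping effect discussed here: a narrow, well-defined task per call, with something else doing the orchestration.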
Well, I think there's something weirdly human about that. I think humans operate the same way: when you have a narrow scope, when you have a very clearly defined task, it's much easier to do than something much more general. It might feel too shapeless to really figure out what you should do if you're just given orders like, break into that organization. If you're told, map the network, that's much more narrow. So restrictions are good for humans, and clearly here for machines as well. They also create creativity. When you have restrictions, people, and maybe machines, become more creative in figuring out solutions because of the narrow scope.
Then you have to have, of course, someone or something to orchestrate it all, but that's what these attacks have done. And it's also interesting to note about the attack that you referred to that Anthropic, the creators of Claude, were watching and trying to spot attacks like these. And that's great. And that's possible to do as long as it's a closed source generative AI or LLM system, like Claude is. Claude is closed. You can't download it. The only way you can use it is on Anthropic's servers, which means Anthropic can monitor you and Anthropic can kick you out. Or on one of the other cloud platforms like Microsoft or Google or Amazon Bedrock; it is available there too. Sure, very true indeed.
But then, of course, there's the other option: open source models you could download, like DeepSeek or Meta's Llama. Interestingly, Facebook announced in December twenty twenty five that they will apparently turn Llama from open source into closed source because of security considerations. So maybe they know something we don't. I'm a big fan of open source, but if there are limits to open source, they might be in AI, because clearly there are unsolvable problems with open source, powerful generative AI models: you can remove all the safety restrictions and all the guardrails if you have access to the source code.
Yeah. You know, it's funny, because I don't know if I worry about that aspect so much. Or rather, let me rephrase: I don't know that the open source versus closed source piece of that is actually as powerful as people think it might be. And my logic for that is, we launched a sub-series within Modern Cyber last year, This Week in AI Security. We started it back in, I want to say, October. And over the weeks, ten to fifteen minute episodes, three or four stories, typically on a weekly basis, and for about five straight weeks there was a series of academic papers that all basically made the same point: prompt injection is always possible. And if you think about what prompt injection is, there's no one type of injection. Whereas with something like SQL injection, there are kind of two outcomes. It's either (a) get me all the data behind whatever web based application, or (b) give me administrator access by exploiting this kind of SQL injection vulnerability that you might have.
The thing is, with LLMs, prompt injection really means: I can bypass anything that you are told you're not supposed to do. So, for instance, you are given ethical guidelines that you shouldn't write malware. Well, it's pretty much proven that it's always possible to find a way to make you write malware, whether it's through character impersonation, whether it's through tricking the LLM into thinking that you're providing remote support to your grandmother and you just need a way to get onto her system with remote access, or whether it's through something like a multi-lingual attack or whatnot. And one of the papers argued that it's effectively just a result of how LLMs work. They take our human language, they turn it into mathematical tokens. Those mathematical tokens have no kind of intent or intention in them. They are just numbers at that point. And as long as you figure out the right way to give a sequence of numbers, you can always find a way to prompt inject any LLM, open source or closed.
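To illustrate the contrast Jeremy draws (my example, not from the episode): SQL injection has a structural fix, because the database API lets you keep the query separate from the data, while an LLM prompt offers no equivalent boundary between instructions and untrusted input.

```python
# Illustrative contrast (my example, not from the episode): SQL injection has a
# structural fix because the database API keeps the query and the data separate.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "' OR '1'='1"  # classic injection payload

# Vulnerable: user input is concatenated into the query text, so it becomes code.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + user_input + "'"
).fetchall()
print("concatenated query returns:", rows)   # leaks every row

# Safe: a parameterized query treats the input purely as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
print("parameterized query returns:", rows)  # returns nothing

# There is no equivalent separation for an LLM: instructions and untrusted input
# arrive as one token stream, which is why prompt injection keeps resurfacing.
```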
And I think you're right, Jeremy. However, there's one thing which makes open source and closed source systems different. Yes, you can do prompt injection against all these closed source systems, but you have to do it on their servers, and detecting these prompt injection attacks from the point of view of the company hosting the service is doable. And once they figure out what you're doing, they can kick you out. If you're running your own instance, if you've downloaded the code, you don't need prompt injection to begin with, and nobody can kick you out. So that's the difference.
Yeah, fair point, fair point. I wanted to come back to something that you said earlier, while we're on the topic of AI, about data being the new oil. When we think about these systems, I imagine that for the threat actors who took Asahi Beer or Jaguar Land Rover offline for six weeks, three months, whatever the case may be, that's not their primary goal or their primary motivation. It doesn't earn them any money to shut the company down. It earns them money by virtue of having the target organization's data. Is that the right way to think about it?
Yep. That's the main reason why ransomware victims pay the ransom. It's not the downtime. It's not the encrypted laptops. It's not the encrypted servers. It's not the encrypted cloud storage. It's the risk of having all this stolen data leaked publicly. And it's not just future plans and patent applications. It's also personal information. Very expensive lawsuits from your employees and from your business partners. Imagine your employees exchanging private health data with corporate healthcare, and those emails are leaked. So this is the real reason why we have this dual extortion tactic from all ransomware gangs today. Yes, they will lock down your systems; that's how they notify you. But they're not relying on that being the reason why you would pay. Today, all large organizations have pretty good backups. Backups have gotten much, much better over the last ten years because of the ransomware problem. That's the silver lining: ransomware has forced companies to do better backups. But backups don't help you at all to fight the problem of leaked data being published on these Tor servers.
Yeah, and it's so funny, because there's an aspect of this where, when it's personal, right, when it's the classic, oh, I got a video of you from your webcam and what you were doing while you visited some adult website or something like that, that kind of personal attack is pretty well known at this point. It's an embarrassing, very vulnerable moment. And yet the flip side of it is that very few businesses are in the business of holding incriminating data. The kind of data where, if one of Firetail's suppliers were exposed and Firetail's data was out there, aside from sensitive data, let's say things like bank account numbers, there's nothing incriminating in it. And yet companies are scrambling, and they're really, really nervous about this. I wonder if there's some kind of reverse or perverse incentive that we've created as an unintended consequence of things like GDPR and having such large fines for companies that do get compromised.
Could be, could be. Then again, I would hope that the majority of companies who are hit with ransomware and who end up paying the ransom would still report it to the authorities and follow the GDPR guidelines, repeating what I said earlier: it's one thing getting hacked, and it's another thing lying about it or trying to hide the fact that you've lost your customer information. And the thing I should also underline about how valuable data is: if data indeed is the new oil, then we should think about AI as the new oil refinery. Data is valuable, but it's even more valuable for systems that can only be built with data. And generative AI generates the kind of content it has seen. It needs a lot of content. Content is data. That's why data is valuable today.
Yeah, yeah. There's one last thing that I've observed this year that I'd love to get your thoughts on before we look to wrap up this episode, and we've got just a few minutes left here. That is, the first wave of, I don't know what you would call it, kind of search result poisoning via AI is out there now. And there's this weird side effect, where one of the things we've educated consumers on is: if you get an email that looks suspicious, or it looks like phishing or something like that, and it purports to come from a well known, respected organization, like you get something from Finnair, the national airline of Finland, that says, oh, there's a problem with your flight reservation, please call our call center to sort this out, or pay a baggage fee, or whatever the case may be, typically we told people, don't call the number that's in the email. Go online, search for the actual correct call center number, and call that number instead and verify.
And what we've seen now is this trick of poisoning the AI search engine results into giving out scam call center phone numbers. So people look for the Expedia call center, and because there's enough placement of bad data, the AI powered search results have now surfaced scam numbers in some cases, and there are confirmed cases of people being scammed. I'm curious, how would you react to that? What advice would you give to people today, knowing that this is now evolving?
I get mad just thinking about the whole thing. That's a fairly clear attack, and of course they will do that. We've seen similar attacks poisoning search engine results, but poisoning what the AI systems will tell you, because they've learned it from the internet, is pretty clever. I've often said that our work in cybersecurity would be easy if the enemy were stupid. Unfortunately, the enemy isn't stupid. So I guess the advice to users would have to be: double check your information from several different sources. That makes your life more difficult, but clearly we can't rely on the information we get from these sources alone. Maybe it would be a good idea to bookmark or write down contact information for important services, like your bank, in your own notes, or have it bookmarked in your browser, so you would know where to find it for real when you need it.
And it's funny, because a lot of these companies have tried to make it deliberately hard for consumers to find their call center numbers, to reduce their call center costs. And yet I feel like they're going to have to bring these numbers back to the surface so they're easy to bookmark.
There's one thing I also want to mention here about scams in general. This is the advice I typically give to consumers about consumer scams, like fake police or fake bank or even romance scams, but it actually works for corporate scams as well. The trick is that in almost all of these cases the attackers try to create a fake sense of urgency: you're being hacked right now, there's someone on your bank account right now, you have to act immediately. And people panic and they make the wrong choice. So the advice is: kill the urgency. Someone calls you: hello, this is the police, we have to move your money because you're being hacked. Hold on. Please wait. Put the call on hold and then talk to someone else, or call someone else and explain the problem. Hi, the police just called me because somebody has breached my bank account, and now the police would like to move all of my money to a safety account. Well, this makes no sense. You sort of understand it by yourself as you explain it to someone else. When you kill the urgency and just explain what's happening to a third party, just ask for advice: what should I do, I got this call, there's this thing happening. And when you have an outsider looking at it from an outside view, without the crisis mode, without the urgency, it often resolves itself and you realise: hold on, I should double check this. Is this really who it claims to be? Let me call you back. And that's the way you fight these kinds of scammers.
Yeah, yeah. That's great advice. Makes a ton of sense. One other thing that comes to my mind: we talked earlier about how AI powered scams haven't become weaponized in a B2B sense at scale yet, and why this is maybe down to not that many use cases being in production. One of the other things that's surprised me, in that it hasn't happened as much, and maybe you can correct my information and my knowledge here, is that more IoT systems haven't been compromised. If you go back to your law about, you know, if it's smart, it's vulnerable: there have got to be billions of IoT devices in the world, and finding them and figuring out which ones are vulnerable is now easier and faster than ever. Maybe I'm missing some data points about this, but what are your thoughts about IoT devices and their vulnerability right now?
Well, there are tons of them which are being exploited and used. The largest denial of service botnets are all IoT botnets today, not computer botnets but IoT botnets. But you're also right in the sense that there are tons of IoT devices which are not getting hacked. And the typical explaining factor is that they don't have a TCP/IP stack. They are not on the internet. They use Zigbee or Wi-Fi or some other protocol for communication, which means they are not exposed to attackers from any part of the world. You have to be physically close to the device if you want to hack it, and that totally changes the threat scenario. Whether the attacker can be anywhere on the planet, or the attacker has to be in your neighborhood, those are totally different situations.
Yeah, it's a huge difference. Is there also maybe something in there that there's just not that much data behind your connected refrigerator or your connected washing machine?
Yeah, maybe that's the explanation as well. One good example of that is that we regularly see security cameras getting hacked. And you would think that if somebody hacks a security camera, it's because they're planning a heist or they want to spy on you. The boring fact is that they typically do it to use it as a node in a denial of service botnet. And that's it.
Yeah. Awesome, awesome. Any thoughts or predictions as we move through twenty twenty six? I mean, the obvious thing is, okay, AI powered threat actors are going to increase in size, strength, and number. That's probably a pretty logical prediction. But any other thoughts that are top of mind?
Well, that is the major prediction, and that's an easy one to make. AI is going to get bigger, and AI is going to change the way attackers work, because it will change the way everyone works, including the criminals. Otherwise, I don't have major predictions for twenty twenty six. I mean, it's my thirty fifth year monitoring the cybersecurity industry, and I think we can safely say that the problems will continue. But I'd also like to underline that there are tons and tons of success stories. You look at the computers we have today: they are more secure than ever before, especially if you look at mobile devices. The security level of these is better than anything we've ever seen. We've managed to create devices which are secure and unhackable for ninety nine point nine nine percent of users, even if they have physical access to the device. We used to think, this is what was taught in universities, that if an attacker gets physical access, it's game over. And that's no longer true. Think about mobile devices: you can't run your own code on them, even if you want to, even if you own the device, even though you have physical access.
Yeah. And to your point, even if I pick up one that's yours, I can't unlock it. I can't hook up a USB-C drive and auto-run some virus or malware to install on your phone. That's just not how these devices work.
Remind me not to lend my laptop to you, Jeremy.
Well, on that note, I think that's a great point to end today's episode. Mikko, thank you so much. We look forward to catching up with you over the course of the year. We'll see if we're able to continue our annual tradition at either RSA or Black Hat. Mikko Hypponen, thank you so much for taking the time to join us on this kick-off twenty twenty six episode of Modern Cyber.
Thank you everyone.
All right, wishing you all the best in the New Year. Looking forward to continuing to come at you with This Week in AI Security, future episodes of Modern Cyber, and maybe getting back to our breach series. We'll see how the year goes. Wishing you all the best and signing off for today. Thanks so much. Bye bye.