Modern Cyber with Jeremy Snyder - Episode 83

This Week in AI Security - 18th December 2025

In the final episode of 2025, Jeremy examines the evolution of SEO poisoning into "AI poisoning," a major privacy breach involving a popular browser extension, and shares a data-driven "sneak peek" at the state of AI security over the past year.


Podcast Transcript

All right, coming back to you with This Week in AI Security for the week of the 18th of December, 2025. And I believe this is our last episode on the Modern Cyber feed for 2025. I'll have a few notes and some things to say at the end of today's episode, but we've got three stories to dive into, as well as a couple of special items to discuss.

First, we're going to talk about researchers finding an AI attack that sends travelers to scam call centers. And it's not, strictly speaking, only airline call centers, as the headline would indicate. Effectively, the net result of this is the modern version of SEO poisoning. Look at most search engines, and let's be honest, Google really dominates search market share at this point: when you do a standard Google search these days, you get the AI summary right at the top of the results, and in theory it reduces the time you need to spend going through those results to find the one that's right for you.

So this AI summary can be useful in some circumstances, but what threat actors have figured out is that by planting enough false information, they can convince some of the search engine crawlers that are processing data with AI that they have the legitimate phone number for different call centers. So you might Google the call center for United Airlines and get an AI-summarized result, and that number could have been planted. It's actually doubly dangerous because, in a lot of scam situations, one of the first pieces of advice is: don't call the 800 number in the phishing email you received, even if it looks pretty legitimate. No, no, instead go out and Google the official customer support number.
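To make that verification habit concrete, here's a minimal sketch of checking a phone number surfaced by an AI summary against the brand's own contact page. The URL and phone number below are illustrative placeholders, not details from the episode, and the digit-matching is a loose heuristic rather than a robust validator.

```python
import re
import urllib.request

def digits_only(text: str) -> str:
    """Strip everything but digits so formatting differences don't matter."""
    return re.sub(r"\D", "", text)

def number_on_official_page(candidate: str, official_url: str) -> bool:
    """Check whether an AI-summarized phone number actually appears
    on the brand's own contact page."""
    with urllib.request.urlopen(official_url, timeout=10) as resp:
        page = resp.read().decode("utf-8", errors="ignore")
    return digits_only(candidate) in digits_only(page)

# Illustrative values only -- not a real number or endpoint from the story.
if number_on_official_page("1-800-555-0123", "https://www.example-airline.com/contact"):
    print("Number matches the official contact page.")
else:
    print("No match: treat the AI-summarized number as unverified.")
```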

So this was an unfortunately clever manipulation, and I liken it to the SEO poisoning that some listeners may be familiar with from way back in, let's say, the 2010s, where you could do things like get enough links pointed at something. There was a famous example during the George W. Bush presidency: enough links with the phrase "miserable failure" pointed at George W. Bush that if you Googled that phrase, his page came up first. That was called link poisoning, or SEO poisoning, or what have you. And I think of this as AI poisoning, where the scale we're working at is the entire internet and the target victim is anybody who Googles for that particular piece of information. So certainly something to watch out for, and definitely something that the likes of Google and Microsoft and other search engine providers are going to have to figure out a way to mitigate. Because if this kind of thing stands for a long time, that's going to be a really, really bad look, and it threatens to bring some kind of legal or state-level reaction against AI-summarized search results.

Moving on to the next story, a similar or related story around crawling of the web. Cloudflare has been one of the first organizations to ask all customers who are using Cloudflare in front of their websites: hey, do you want us to opt you out of AI crawlers, so that AI crawlers can't train their models on the data on your website, text, what have you. And the proposal from Creative Commons, which I think is backed by a number of other organizations, is: what if we say we don't want to prevent them from crawling, but we want them to pay for crawling? Then they can decide whether they want to pay the fee to crawl our website and train their model on it. And we can set the price, say, maybe ten cents for these pages, but two dollars for this page that goes into the details of how we run a certain process. And if it's worth that much to the AI engine to crawl, then maybe we're willing to make that trade-off.
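As an illustration, here's a minimal sketch of what per-page crawl pricing could look like at the HTTP layer, loosely modeled on Cloudflare's pay-per-crawl experiment, which reportedly uses HTTP 402 Payment Required. The header names and prices here are assumptions for illustration, not a confirmed wire format.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical per-path prices in dollars: cheap blog pages, expensive
# process documentation, as in the example above.
PRICES = {"/blog/post": 0.10, "/docs/how-we-run-it": 2.00}

class PayPerCrawlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        price = PRICES.get(self.path, 0.0)
        # "crawler-max-price" is an assumed header a paying crawler might send.
        offered = float(self.headers.get("crawler-max-price", 0))
        if price == 0.0 or offered >= price:
            self.send_response(200)
            self.send_header("crawler-charged", str(price))  # assumed header
            self.end_headers()
            self.wfile.write(b"page content the crawler may now train on")
        else:
            # 402 Payment Required: quote the price and withhold the content.
            self.send_response(402)
            self.send_header("crawler-price", str(price))  # assumed header
            self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8402), PayPerCrawlHandler).serve_forever()
```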

So it's a really interesting proposal. I don't know that it will go anywhere, because it's the kind of thing that, to my somewhat jaded cybersecurity mind, sounds idealistically right but unlikely to get broad consensus support across the internet unless it's really backed by a consortium of the top players, most notably the top LLM providers. If they don't go along with this system, I think it's probably unlikely to get much momentum. But it's a really interesting idea, and it could provide a layer of content protection.

So moving on to the next story this week: a featured Chrome browser extension was caught intercepting millions of users' AI chats. The irony in this, for anybody who knows FireTail, is that we have a browser extension that does this, but it does so explicitly and tells you that it does. The difference in this case is that this was a VPN service, and the extension is called Urban VPN Proxy. Now, you expect a VPN service to proxy your traffic, as the "proxy" in the name suggests. For those who are unfamiliar, a VPN service takes your IP address, your location wherever you're calling from, and swaps it, relaying your traffic via a third destination. So if I'm trying to visit something like Netflix from outside the country, Netflix sees where I appear to be and gives me the content appropriate for that geographic location, which could depend on any number of factors, like licensing and whatnot.

And the same is true, broadly speaking, for e-commerce sites: they might display different prices, different product availability, and it goes on down the line. This has been pretty widely studied. So a lot of VPN services offer a way for you to appear to be in a different location in order to gain access to a different set of services, and there are lots of legitimate use cases for VPNs. In fact, a lot of businesses use VPNs or require their employees to use one, especially while traveling, in order to relay their traffic through corporate gateways that might have security controls and policies implemented on them. What you don't expect from a VPN is that it reads your traffic, because that is almost explicitly against the purpose of a VPN. And this was a very popular extension; it was even featured, with six million downloads in Google Chrome and 1.3 million installations on Microsoft Edge. We're talking about, minimally, 7.3 million users who installed this and may have inadvertently had their messages intercepted. So kudos to the team over at Koi, who picked this up and shared information about it; really great work.
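For defenders, one rough triage heuristic is to look at what an extension's manifest allows it to read. Here's a minimal sketch that scans unpacked extensions and flags host permissions broad enough to reach AI chat pages. The domain list and directory layout are illustrative assumptions, and this is not a reconstruction of how the Koi team did its analysis.

```python
import json
from pathlib import Path

# Chat endpoints an over-permissioned extension could read; this list is
# an illustrative assumption, not taken from the Koi write-up.
AI_CHAT_HOSTS = ["chatgpt.com", "chat.openai.com", "gemini.google.com", "claude.ai"]
BROAD_PATTERNS = {"<all_urls>", "*://*/*", "http://*/*", "https://*/*"}

def flag_extension(manifest_path: Path) -> list[str]:
    """Return host permissions that would let content scripts read AI chats."""
    manifest = json.loads(manifest_path.read_text(encoding="utf-8"))
    # MV3 uses "host_permissions"; MV2 mixed host patterns into "permissions".
    grants = manifest.get("host_permissions", []) + manifest.get("permissions", [])
    return [g for g in grants
            if g in BROAD_PATTERNS or any(h in g for h in AI_CHAT_HOSTS)]

# Point this at a folder of unpacked extensions to triage them.
for manifest in Path("unpacked_extensions").glob("*/manifest.json"):
    hits = flag_extension(manifest)
    if hits:
        print(f"{manifest.parent.name}: can read pages matching {hits}")
```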

I will say, this area of AI usage inside the browser is such a fast-moving space. Last week we had a story about Gartner recommending discontinuing the use of AI browsers. I think we're not done seeing stories in this area; there's going to be much more coming out over the next couple of months.

All right. As I mentioned, this is the last This Week in AI Security for the year; we'll be taking the rest of the year off for the holiday break. We really wish all of our listeners the best for themselves, their loved ones, their friends, their families, and so on. But what I did want to highlight is a little sneak peek at something we'll have coming out in Q1 of next year. Every year at FireTail, we release our This Year in AI Security report, and that report will be coming out in Q1 of 2026. One of the places we track and gather information is something called our AI Incident Tracker.

So I thought it'd be fun, for the final This Week in AI Security of the year, to take our tracker, drop it into a Google spreadsheet, and then start asking Google Gemini some questions about it. I didn't have a ton of time, so I only pulled a couple of high-level statistics, but I thought I would share them with you as a teaser. We'll have much deeper analysis in the full report coming out in 2026, but here are a couple of quick stats.

The first thing I noted was that the rise in AI security incidents from 2024 to 2025 is pretty broad. There's a general rule of thumb in cybersecurity that for every publicly disclosed incident, something like ten go undisclosed. So I usually think about measuring these not so much by the raw number in the tracker, which looks to be in the eighties, but as representative of probably eight hundred incidents across the industry this year. But the crazier thing to me was just how big the jump was from 2024 to 2025. We have a pretty complete data set for 2024, so it was really interesting to see that 2025 was the year AI-related security incidents started to become a global problem.
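If you want to reproduce that kind of quick cut yourself, here's a minimal sketch in pandas rather than a spreadsheet. It assumes a hypothetical CSV export of the tracker with a "date" column; the file name and schema are assumptions, not the tracker's actual format.

```python
import pandas as pd

# Hypothetical export of the AI Incident Tracker; schema is assumed.
incidents = pd.read_csv("ai_incident_tracker.csv", parse_dates=["date"])

# Disclosed incidents per year, plus the ~10x underreporting rule of thumb.
UNDERREPORTING_FACTOR = 10
by_year = incidents["date"].dt.year.value_counts().sort_index()
for year, disclosed in by_year.items():
    print(f"{year}: {disclosed} disclosed, ~{disclosed * UNDERREPORTING_FACTOR} estimated")
```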

So that was one. And number two: one of the breakdowns we always do in our annual reports is categorization, trying to understand the top risks and so on. And I thought this was really interesting. The OWASP Top 10 for LLMs has prompt injection as the number one risk, and that is definitely up there. But higher than that is sensitive information disclosure, and that is much more a case of the organization making a mistake and its data accidentally leaking as a result of something it did. So it's more the user-error type of problem, and it outstrips prompt injection by about a third. When we broke down the incidents across 2025 and looked at the type of vulnerability or incident, sensitive information disclosure was number one, well ahead of prompt injection. But among the malicious categories, as opposed to the human-error categories, prompt injection was number one. And we've talked about the risks of prompt injection a number of times.
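Continuing the hypothetical sketch above, the same export can produce that category breakdown. The "category" column and its labels are assumptions about the tracker schema, not its real taxonomy.

```python
# Category breakdown for 2025, using the "incidents" frame loaded earlier.
yr2025 = incidents[incidents["date"].dt.year == 2025]
counts = yr2025["category"].value_counts()
print(counts.head(10))

# Compare the top human-error category against the top malicious one;
# the label strings below are assumed for illustration.
sid = counts.get("Sensitive Information Disclosure", 0)
pi = counts.get("Prompt Injection", 0)
if pi:
    print(f"Sensitive info disclosure leads prompt injection by {sid / pi - 1:.0%}")
```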

All right. I just want to take a moment to personally thank all of our listeners for their time and their attention. If you have stories, please do submit them to us; we'll get them into the AI Incident Tracker, and we might feature them on This Week in AI Security. And again, if there are guests who would like to join us on the Modern Cyber podcast, please reach out. On behalf of myself and the whole team, we wish you and yours the best of holiday wishes, and we will talk to you in 2026. Thank you so much. Bye bye.

Protect your AI Innovation

See how FireTail can help you discover AI & shadow AI use, analyze what data is being sent out, and check for data leaks & compliance. Request a demo today.