In this week's episode, Jeremy focuses on two rapidly evolving areas of AI security: the APIs that empower AI services and the risks emerging from new AI Browsers.
We analyze two stories highlighting the exposure of secrets and sensitive data:
API Insecurity: A path traversal vulnerability was discovered in the APIs powering an MCP server hosting service, leading to the exposure of 3,000 API keys. This reinforces the lesson that foundational security mistakes, such as inadequate secret management and unpatched vulnerabilities, are being repeated in the rush to launch new AI services.
CVE in Google Cloud Vertex AI: We discuss a confirmed CVE in Google's Vertex AI service APIs. This vulnerability briefly allowed responses to requests made by one customer's application to be routed to another customer's account, risking exposure of sensitive corporate data and intellectual property in a multi-tenant SaaS environment.
Finally, we explore the risks of AI Browsers (like the ChatGPT Atlas or Perplexity Comet browser) and AI Sidebars. These agents, designed to act with agency on a user's behalf (e.g., price comparison), are vulnerable to techniques that can reveal sensitive PII and user credentials to malicious websites, or trick them into unwittingly downloading malware.
Episode Links
https://techcrunch.com/2025/10/25/the-glaring-security-risks-with-ai-browser-agents/
____________
Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of FireTail's AI Security & Governance Platform
Welcome back to another episode of This Week in AI Security, coming to you, as always, from the Modern Cyber Podcast. I am Jeremy, co-founder and CEO of FireTail and host of these episodes. We are talking today for the week of October thirtieth, twenty twenty five, and we've got two themes for today's episode. One is services that empower AI and the APIs that power them. And number two is AI browsers. And both of these topics have been in the news a lot over the last week.
So let's dive in. It's going to be a little bit shorter episode today with just these two topics, but we've got a couple of stories on each that I want to get into.
So let's start off with the path traversal breaking MCP server hosting. And as always, we'll have all of the articles and the stories from the week shared in the show notes. But in summary, what was found was a path traversal vulnerability in a particular AI service's APIs that enabled the exposure of thousands of API keys. And I think one of the things that somebody said to me once, and I've been doing my own research about this as well, is that really, what is an MCP but kind of a natural language wrapper around an API? So underpinning effectively all MCP, you have some kind of API, whether that is REST or GraphQL or what have you. That actually is what the MCP service itself interacts with. And the MCP itself is kind of a natural language interpreter between the user's desired intentions and the API back end. And so of course a service emerged to power hosted MCP. And that's not really all that difficult, because most of these APIs that are kind of behind the MCP are publicly exposed anyway.
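To make that concrete, here is a minimal sketch of what the tool handler behind a hosted MCP server might look like. The backend URL, tool name, and environment variable are hypothetical; the point is simply that the MCP layer translates natural-language intent into an ordinary REST call, and the hosting service has to hold the backend credential in order to forward it.

```python
import os
import requests

# Hypothetical backend the MCP tool wraps; any REST or GraphQL API works the same way.
BACKEND_URL = "https://api.example.com/v1/search"

def search_tool(query: str) -> dict:
    """Tool handler an MCP server might expose.

    The model supplies natural-language intent; the handler turns it into a
    plain REST call and forwards it with a credential the hosting service holds.
    """
    api_key = os.environ["BACKEND_API_KEY"]  # the secret the hosting service must protect
    resp = requests.get(
        BACKEND_URL,
        params={"q": query},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()
```

Whatever the specific stack, that stored credential is exactly what's at risk if the hosting service itself has a vulnerability.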
So if you think about the way that this service might be structured: you know, you go to this site, and that site then issues the request to the backend API. And so that site needs to have the credentials in order to forward the request on to the APIs. And one single Docker bug enabled the exploitation of the path traversal vulnerability, which enabled the discovery of the three thousand MCP server API keys. Kudos to the folks over at GitGuardian who found this. That's kind of their specialty, looking for the exposure of secrets in various different ways. And you know, to me, what this really illustrates is, again, a theme that we've talked about a lot over here at FireTail, which is we're all in this rush to build and capitalize on the potential of these AI services. But in that rush, it's very common that mistakes are made along the way. And a lot of these mistakes are things that we should know better about as a cybersecurity industry and as a software industry, frankly. You know, the challenge around exposed secrets is one that goes back all the way to the days when I still had hands on keyboards as a Windows NT4 system administrator, you know, back in nineteen ninety eight and onwards. And at that time, it was just, you know, are you saving passwords in clear text in a text file or something like that? But as we've moved into the era of web services and APIs and AI services that are powered by APIs, the same kind of principles need to apply. You know, you don't make that mistake of leaving a credential exposed somewhere. And if you need to store the API key somewhere, you need to have it protected with layers of encryption or access controls or both. Right. And so what was found in this case was really that, you know, a service stood up very, very quickly, did not check for vulnerabilities around it, and did not then think about the storage of the API keys. And that is the exposure that we see here.
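As a rough illustration of the class of bug involved, here is a minimal sketch of a path traversal flaw in a file-serving endpoint and one common way to close it. The framework, route, and directory names are hypothetical and are not taken from the affected service.

```python
from pathlib import Path
from flask import Flask, abort, request, send_file

app = Flask(__name__)
BASE_DIR = Path("/srv/mcp-configs").resolve()  # hypothetical directory of per-server files

@app.route("/configs")
def get_config():
    # Vulnerable pattern: joining raw user input lets "../../" escape BASE_DIR,
    # e.g. ?name=../../secrets/.env could walk to wherever API keys are stored.
    requested = (BASE_DIR / request.args.get("name", "")).resolve()

    # Mitigation: refuse anything that resolves outside the intended directory.
    if not requested.is_relative_to(BASE_DIR):
        abort(400)
    return send_file(requested)
```

The same idea applies regardless of stack: normalize the path first, then check it is still inside the directory you meant to expose, and keep secrets out of that directory entirely.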
All right. Next topic, and this is from Google Cloud. So this is a GCP security bulletin that was released just about seven or eight days ago as of the time of this recording and publishing. And it is classified as a CVE, so Common Vulnerabilities and Exposures. So this was a confirmed vulnerability that was live out in the wild. And this was around the Google Vertex AI service APIs. And for those who aren't familiar, Vertex AI is kind of an AI-as-a-service offering built on top of Google Cloud. We've actually talked about it before, and I've talked about it in some of my talks around the intersection of AI and API security, and in fact, the way that most customers will access the Vertex AI service is via API.
So how does this work? So on the Google Cloud Platform you have a variety of LLM model providers and model versions that are all available in kind of a containerized or serverless-function-based architecture. And what that allows you to do is you don't have to run the full LLM yourself, which can be very expensive from a compute standpoint. And so it allows Google to offer this in kind of a multi-tenant SaaS architecture. And so as a customer, you can sign up, you get access to all these models without a huge cost commitment or compute infrastructure commitment to stand it up. And then you just use the API service to access the different models, model versions, providers, etc. that you want. And I won't get into the details of the CVE. You can read the security bulletin yourself if you want to understand the fine details about how and where the exact issue existed. The issue has been patched by Google. One thing that they did want to stress, though, is that this doesn't impact their Gemini service, which is a different Google AI service. Gemini is much more akin to ChatGPT, whereas Vertex AI is akin to having your own private copy where you can integrate your code with an LLM. Right. You know, kind of from your code, build potentially an interaction with an LLM that generates product recommendations or a customer support chatbot or whatever the case may be. So they did want to stress that, and that is highlighted in the security bulletin itself.
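For context on what "access via API" looks like in practice, here is a minimal sketch of calling a model through the Vertex AI REST endpoint. The project, region, model name, and token handling are placeholders, and the exact path and payload shape should be checked against Google's current documentation.

```python
import os
import requests

# Placeholders; substitute your own project, region, and model.
PROJECT = "my-gcp-project"
REGION = "us-central1"
MODEL = "gemini-1.0-pro"

url = (
    f"https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT}"
    f"/locations/{REGION}/publishers/google/models/{MODEL}:generateContent"
)

payload = {
    "contents": [
        {"role": "user", "parts": [{"text": "Recommend a laptop under $1,000."}]}
    ]
}

# Access token obtained out of band (e.g. via gcloud or a service account); shown as an env var here.
headers = {"Authorization": f"Bearer {os.environ['GCP_ACCESS_TOKEN']}"}

resp = requests.post(url, json=payload, headers=headers, timeout=30)
resp.raise_for_status()
print(resp.json())
```

The request and response in that exchange are exactly the kind of traffic the CVE put at risk of landing in the wrong tenant's hands.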
But the risk in this one is that there was actually a brief period of time, and this was discovered by researchers who were looking at the service, where the response to a request made by one customer could be routed to another customer. So just to kind of put this out there: my name is Jeremy, and my co-founder at FireTail is named Riley. So imagine that Jeremy and Riley are two different customers of the Vertex AI service. Jeremy makes a request to an LLM, or Jeremy's application makes a request to an LLM, but that response instead gets routed to Riley's Google account. And so that's obviously a pretty big security risk in the sense that, you know, part of the reason I might build an application like this is that I want to use my data with the Vertex AI service, and that might be very sensitive data: corporate intellectual property, customer records, who knows what. So that's a real issue. And again, this in my mind goes into the category of we're building things very, very quickly without necessarily knowing all the vulnerabilities in them.
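As a purely illustrative, application-layer mitigation, here is a small sketch of the kind of check a multi-tenant serving path could make before handing a response back: every response carries a tenant tag, and anything tagged for a different tenant or a different request is dropped. The field names and structure are hypothetical and are not based on Google's internals.

```python
from dataclasses import dataclass

@dataclass
class LLMResponse:
    tenant_id: str    # hypothetical tag stamped on the response by the serving layer
    request_id: str
    text: str

class TenantMismatchError(Exception):
    pass

def deliver(response: LLMResponse, expected_tenant: str, expected_request: str) -> str:
    """Refuse to surface a response that belongs to another tenant or another request."""
    if response.tenant_id != expected_tenant or response.request_id != expected_request:
        # Fail closed: better to retry than to leak another customer's data.
        raise TenantMismatchError(
            f"response for tenant {response.tenant_id!r} does not match {expected_tenant!r}"
        )
    return response.text
```

The broader point is the same one the bulletin makes: in a shared service, the isolation guarantee is only as good as the checks enforced on every hop of the request path.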
All right. Let's move on to the second topic for today. And that is AI browsers. And so what is an AI browser? At a high level, it's a web browser. We've seen three or four of them already launched and announced, from OpenAI themselves, the ChatGPT Atlas browser, but also from companies like Perplexity with their Comet browser. We talked about that a little bit last week. And what we've seen is, you know, the value that these browsers can provide is that they can take your content out to the web. So let's say I want to go on vacation, and I want to do some price comparison shopping around hotels, flights, rental cars, whatever the case may be. Instead of me doing all the work on that, I can maybe use this AI-powered browser to kind of take some of that responsibility and do the searching for me and build a summary of the results that it finds. And it tells me, hey Jeremy, the cheapest hotel is this, the best flight for your schedule is that, whatever the case may be, right?
And with that kind of agency that goes into kind of outsourcing my request, if you will, there's a lot of risk that's been discovered and proven. Everything from revealing sensitive data that I might need to give to the browser in order to have it go execute tasks. Maybe I need to tell it, for instance, what airlines I have my miles with, what hotel chains I have my loyalty points with, whatever. That could be revealed by a malicious website that knows that you are using an AI-powered browser and knows the right questions to ask. So my little AI browser agent lands on a website for hotel booking, and that hotel booking website asks the question: well, is Jeremy a member with Marriott Hotels or Starwood or IHG or whatever the case may be? And that could reveal some sensitive PII, my own personally identifiable information. Similarly, with that agency, it might land on a very malicious website that says, well, I can solve that problem for you, just download this package, and the AI agent might unwittingly download that package. So downloading malware has been proven to be a potential risk of some of these AI browsers as well.
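To show the shape of one possible guardrail, here is a small sketch of an agent-side policy that requires explicit user confirmation before stored profile fields are disclosed to a site or a download is executed. It is a hypothetical illustration of the pattern, not how Atlas or Comet actually work.

```python
SENSITIVE_FIELDS = {"loyalty_numbers", "airline_miles_accounts", "email", "payment_methods"}

def confirm_with_user(prompt: str) -> bool:
    """Stand-in for a real UI prompt; here we just ask on the console."""
    return input(f"{prompt} [y/N] ").strip().lower() == "y"

def handle_site_request(field: str, site: str, profile: dict) -> str | None:
    """Only release a stored profile field if the user explicitly approves it for this site."""
    if field in SENSITIVE_FIELDS:
        if not confirm_with_user(f"{site} is asking for your {field}. Share it?"):
            return None  # refuse rather than leak PII to an untrusted page
    return profile.get(field)

def handle_download(url: str) -> bool:
    """Never let the agent fetch files on its own initiative."""
    return confirm_with_user(f"The page wants to download {url}. Allow?")
```

The design choice here is simply to keep a human in the loop for the two actions the research keeps flagging: disclosing identity data and pulling down files.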
At the same time, along with AI browsers, there are AI sidebars that are being launched into kind of existing browsers. So if you're using something like Google Chrome or Firefox or whatever, and you don't convert to one of the quote-unquote AI browsers, but you do want some of that functionality, it's pretty much the same issue around these sidebar functionalities. You need to give them, again, agency, credentials, authentication tokens, whatever the case may be, in order to go execute your tasks. So researchers over at SquareX, and kudos to them, they've been doing some great work lately around browser-based security items in general, they have discovered that these sidebars can be cloned, and the clones can exploit user trust, the trust being the credentials and the identity that you provide to them. And I think just to kind of summarize, TechCrunch has an article that really summarizes some of these findings, published on October twenty-fifth, and it really just encapsulates a lot of the risks in one article. So if you don't want to read both of those individually, just go to the TechCrunch article that summarizes it for you.
All right. We'll leave it there for today, but hopefully that gives you a sense of some of the risks coming up. As always, please rate, review, like, subscribe, all that good stuff. If you've got a story around AI security that you want us to cover on next week's episode, please just send it to the podcast. We'll be happy to do that. Thanks so much. Talk to you next week.