In this week's episode, recorded live from the inaugural AI Security Summit hosted by Snyk, Jeremy reports on the latest threats and strategic discussions shaping the industry. Covering multiple instances of "old risks" reappearing in new AI contexts...
.png)
In this week's episode, recorded live from the inaugural AI Security Summit hosted by Snyk, Jeremy reports on the latest threats and strategic discussions shaping the industry. Covering multiple instances of "old risks" reappearing in new AI contexts...
The Salesforce "forced leak" vulnerability, where an AI agent was exposed to malicious prompt injection via seemingly innocuous text fields on web forms (a failure of input sanitization).
Research from Nvidia detailing waterhole attacks where malicious code (e.g., PowerShell) is hidden in decoy libraries (like "react-debug") that AI coding assistants might suggest to developers.
A consumer AI girlfriend app that exposed customer chat data by storing conversations in an open Apache Kafka pipeline, demonstrating a basic failure of security hygiene under the pressure of rapid AI development.
The "Glass Worm" campaign, where invisible Unicode control characters (similar to Ascii Smuggling research by Firetail) were used to embed malware in a VS Code plugin, proving the invisible code risk is actively being leveraged in development tools.
Finally, Jeremy shares strategic insights from the summit, including the massive projected growth of the AI market (approaching the size of cloud computing), the urgency of data readiness and governance to prevent model poisoning, and the futurist perspective that AI's accelerated skill acquisition (potentially surpassing humans in certain tasks in an 18-month cycle) will require human workers to constantly upskill and change roles more frequently.
Episode Links
https://noma.security/blog/forcedleak-agent-risks-exposed-in-salesforce-agentforce/
https://developer.nvidia.com/blog/from-assistant-to-adversary-exploiting-agentic-ai-developer-tools/
https://www.foxnews.com/tech/ai-girlfriend-apps-leak-millions-private-chats
Worried about AI security? Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform
Welcome back to another week in This week in AI security, brought to you, as always, by the fine folks over at Firetail, the folks who bring you modern cyber as well. We are recording for the week of sixteenth through the twenty second of October, as we typically do kind of a midweek to mid week cycle.
We've got a bunch of things this week, and if you notice, the location is different. I'm actually on site at the AI Security Summit, the first ever AI Security summit hosted by Snyk. We're going to talk a little bit about what we learned at this summit this week. There were some really interesting topics of conversation. Highly recommend the event to anybody who's looking out for something in the area of AI security leadership. But unfortunately, the next one might not be until next year, so keep an eye out.
All right, let's dive in. Let's go through the stories that we've been tracking over the past week.
So first the forced leak. Now this was a vulnerability or a risk exposed by Salesforce lead capture and kind of contact capture forms. And what it was was that on these forms, there were, of course, fields to capture names and email addresses and things like that. That could be for opening a customer support ticket, that could be for filling out interest in getting a product demo or something like that. However, the agent force platform that was used, or I think it still is, used to process some of that data coming in, had a vulnerability where it would go ahead and execute based on what was in those forms. So if you have a form that, for instance, says, what is your name, what is your email address, phone number, and then what are you interested in one of these kind of free form text fields. The agent force would execute a malicious prompt or could execute a malicious prompt that was input into that field.
Now, this was discovered by researchers over at Noma. They disclosed it to Salesforce. Salesforce has taken action about it. If you want to learn more about it, as always, links are in the show. Note on that. It's again, to me just kind of reinforces the the whole kind of vibe that we're moving very, very quickly and potentially not asking ourselves to think about all the risks of the types of technologies that we're exposing. So, you know, inputs on a text form is a very well known path. And input validation. Input sanitization is again a very well-known path, and in this case, not necessarily thinking about the threat model of, well, it's valid text input. Let me just hand it off okay. That might have been accounted for. But then that next step of well, who's going to pick up this text and what might they do with it, whether that be a human or in this case, a system slash AI agent. And that proved to be the issue in this case.
All right, moving on to our next story. Some really interesting research from the folks over at Nvidia and on the Nvidia developer blog. This is more hypothetical research. Um, but it does show that there is the possibility to kind of pivot from being quote unquote, helpful in the coding space to being a potential adversary. This leverages a technique called waterhole attacks. And what that really refers to is there are some common things that developers might gravitate towards. For instance, there are very common libraries that are often used just as an example, uh, a very common front end for modern web based applications is something called React or React.js, a JavaScript framework for building responsive web applications in browser based environments using a mixture of JavaScript, HTML and Cascading Style Sheets CSS.
And what they found was that, um, you could, for instance, kind of squat on things that are very close to that. And then with a targeted a prompt. You could prompt a genetic coding assistant platforms to go out, grab those things and they might be malicious. So if you, for instance, created something called react debug or React Hyphen debug JS, and then you go on to a coding assistant platform and you say, hey, I'm having trouble with this react thing. How do I debug my code? See what's going on. Well, the copilot or the coding assistant might go out, do a search, find this react dash, debug, and assume that that could be a helpful tool for you. Not really thinking about the fact that there are malicious prompts embedded in it or malicious. Um, you know, maybe even malware. I think in the Nvidia um, research, they actually had planted some windows PowerShell and remote shell and maybe even some kind of Windows registry, uh, contamination, um, stuff to kind of prove to, to provide a proof of concept for how this could be leveraged. So that's a really interesting kind of, uh, development on that side.
Next, uh, under again, the topic of we're moving too fast and ignoring security guardrails and not, you know, not remembering the lessons that we learned from the past. This was an AI girlfriend app. It's been really interesting to watch. Kind of on the consumer side, what are some of the most common use cases around AI platforms right now? And certainly one that we're seeing is around things like mental health and companionship. And so here we have an AI girlfriend app. And what it ended up doing was it was storing the chats for training of the model and reinforcement and improving this kind of AI powered companionship application, uh, to make it better for the users over time. But it was putting them into an open source, uh, data pipeline called, uh, Kafka from the Apache Foundation. And that Kafka, uh, Implementation was left wide open to the public.
So this is not an AI vulnerability per se. But it again is kind of under that umbrella of we're moving too quickly and not remembering to kind of check the boxes on the standard risks and threats that go along with the platforms where we're building our AI applications. So nothing earth shattering here, but just reinforcement. And it does call into question from my mind, you know, the what is the responsibility of the vendors who are leveraging AI technologies to provide these services to consumers? Um, you know, this one maybe pretty innocuous again, found by a researcher who disclosed it, as opposed to a state actor who might be looking to harvest information about private citizens. So you could say no harm, no foul. But on the other hand, I would say, you know, for those companies that are out there providing services to consumers, you could be collecting massive amounts of information from your customers, and you have a duty and a liability to those customers to treat that data with respect, to keep it private, etc.. The excuse of we're moving very quickly because we have to because it's AI and it moves very quickly in my mind, is not a viable excuse and is not a relevant or, um, you know, a justifying excuse for this type of, uh, you know, carelessness.
All right. Moving on. Um, comment. There are a number of companies thinking about how they can use AI to really repurpose some of the most common tools that we use on a day to day basis. And one of those is the web browser. And in fact, it's arguable that the web browser is the the single most common tool being used nowadays. You know, if you think about your day to day working environment, you might or might not be checking your email in a web browser, but you're almost certainly using very common business productivity applications in a web browser. That could be anything from document editing on a platform like Google Workspace. Or it could be things like managing your contacts and your customer database in a tool like Salesforce. Or it could be HR or payroll in a platform like workday. So we the point is, we all live in our browsers pretty regularly, day to day, week to week.
And so some companies have been out there thinking, well, how could we make humans more productive by embedding AI into the browser? Or in fact, by giving them a brand new AI powered browser to start off with. And we've seen a number of these. And what the researchers over at layer, in this case, figured out was, one of the things that these browsers do is they allow you to kind of reuse that URL bar. So, you know, if you're using something like Google Chrome, you can use that URL bar to initiate a search. And in their case, they figured out with this comment browser, this AI browser from perplexity it accepts prompts right in that same URL bar, but it also accepts prompts that include things like SQL injection. And so here what they figured out with the research is, you know, with a single kind of malformed prompt that kind of looks to my, to me, very parallel to a SQL injection vulnerability. One click, you turn that thing on and it can actually kind of hijack that browser against you.
So some really interesting research and this one did make it out into the wild, this vulnerability. And so if you were a user of that system and you did either accidentally use one of these things or input a malicious prompt, you might have observed this. More likely, though, the vulnerability comes down to things like phishing attacks for people who would be using this browser and checking their email. So in many cases of phishing attacks, the attackers rely on creating a web page that looks similar to something like your bank or PayPal or something like that. Well, imagine that instead of that, they have a button that actually just redirects you, and includes that malicious prompt in the URL of the link that you're clicking. You would be one click vulnerable. And that's why they call this the one click that to turn your AI browser against you.
All right, moving on. I had the pleasure of attending the first AI security summit in San Francisco this week. It's been fascinating. I've learned a lot, and I wanted to especially thank the host, Snyk, for having us. And we got a call out from Danny Allen, the CTO of Snyk, and his introductory remarks this morning for our research on Ascii smuggling and the hidden invisible Unicode control characters that our researcher, Victor Markopoulos, disclosed to Google and a number of other LLM providers a couple of weeks back. If you haven't seen our research on that, I would encourage you to check back on that.
However, that same kind of topic came to the foreground again this week in another context, uh, with a campaign that is being called Glass Worm and I don't really care for the name too much, but I don't really think that's too important. Very often, the security community will slap a label on a new piece of malware, or a new worm or something like that as an easy way to kind of reference it when talking about a vulnerability or something like that. So in this case, there was invisible code using exactly the same types of characters as we talked about in Ascii smuggling inside a plugin for the VS code environment. Um, and I apologize if I've gotten the name VS code slightly wrong. I haven't worked in a Microsoft kind of in code development environment in many, many years now. Um, but there is something called the open vs marketplace where kind of uh, vs editor plugins can be, you know, provided, um, downloaded, installed, etc. and in one of those there was a piece of malware planted and it didn't really show up because why it's an invisible characters. But when activated, what it did was it kind of propagated across these environments. In this case, it wasn't super, super malicious. It was more a proof of concept, I think, from the attackers who figured out that they could do these things and use these invisible characters to actually embed bad stuff in code. But it does prove that, you know, there is this risk with all these AI powered tools that are being adopted into the enterprise, that a lot of them might contain malicious things. And some of these risks that we identify in one environment, like in our case, we identified it as a prompt based vulnerability that you could, you know, send to one of those platforms like grok or like Deep Sea that were vulnerable to Ascii smuggling. Um, it can also be embedded in tools that you might actually be like downloading and your team might be using. So a little bit larger risk than you might think about.
And on that note, I would just wanted to kind of close out this week's edition with a few specific thoughts of things that were discussed. Now, I wasn't able to attend all of the sessions. I was in the leadership track, learning, you know, from peers, networking with peers, etc. and a couple of the things that I thought were really interesting.
First, um, they pointed out that, you know, this is the first AI security summit, um, that anybody here knew about. There may have been other events focused on AI security, to be sure, but this one really brought together people from multiple companies, multiple, uh, customer profiles and multiple organization types and from multiple countries. So a good diversity of thought in the room, a number of panels, a couple of other things. Market size. It's projected that the AI, the AI market is already in the kind of seven hundred and fifty billion range, with an estimated cap of around twenty two billion over the next ten, fifteen years. And they put that in comparison in parallel to cloud computing. So out of nowhere, you know, in the last two years, the projections are that basically AI is kind of an equal sized industry, as the cloud and cloud has been around for a good fifteen plus years at this point, and has become kind of a default platform for doing it. So just think about the pace and the scale of that growth. So that was an interesting data point for me.
Other ones that I thought were really interesting, a lot of talk about data readiness and governance as part of strategies to keeping organizations on track. Um, of course, we're a lot of organizations are with AI adoption today is in early is in experimentation and early production phase. There are, of course, some that are a little bit more ahead than that. But for those that are kind of getting ready to go forward with larger initiatives, you know, there's a lot of talk about AI as only as good as the data behind it that's in this next wave of applications that are really being thought about and developed right now, where corporate private information is going to be used in combination with AI systems for that next wave of applications. And so the concept around protecting your data and data readiness and having good control over things like data pipelines to prevent poisoning risks and things like that was discussed at some length. And that was a really interesting conversation from my perspective.
Cyber resilience as kind of thinking about how do you bring some of the overall lessons that you've learned from past waves of technology adoption and building the cyber resilience that you've built for things like end user computing or from cloud computing platforms? How do you bring those same kind of thinking over into your AI initiatives? That was an interesting conversation as well. For those that are interested, we recently put out a blog post about the NIST AI Risk Management Framework that is also built on some of these NIST Cybersecurity Framework. Foundational concepts that do include cyber resilience. And I thought that was really interesting for me.
The leadership track concluded with possibly the most interesting talk of the day, a researcher and futurist who talked a lot about kind of the role of humans in these environments. Um, and the organization is called Signal and Cipher. Uh, you might want to look them up if you're interested in kind of talking about that. And, you know, there were kind of two or three points that really stuck with me from that presentation.
Number one, tools enable new ways to do a task. AI rewrites the rules of the system. And if you. The point being that if you think about AI as a productivity agent, it can be that. But if you think about it as a new way of handling a task, it can actually be that as well. So you might you might want to think or ask yourself, am I using AI in the way or using a particular AI tool in a way that makes me better at this task, or that might actually solve the task in a different way altogether and faster and more effectively? And that's an interesting thought exercise to ask yourself as you embark on an AI Initiative.
And the second thing is that with the rise of these AI platforms and some of the things that they can do, particularly in the area of information technology and, you know, let's say managing data and whatnot, um, the, the lifespan of a skill, let's say you're very skilled at analyzing log data as an example. Well, it takes an AI system much less time to get up to that speed. And then within something like an eighteen month cycle, they will get better than the human at that if trained, you know, improved, continually developed, etc.. And so what that might mean for human workers is that whereas you might change, you know, let's say, positions five, six times over the course of a twenty, thirty year career, you might need to do that much more frequently going forward, because every kind of rung up the ladder that you climb, you get superseded by AI at some point, and I don't know if I believe yet that it is or will be as extreme as eighteen month cycles. But I do take the point. You know, if you're working a job that is very kind of, um, data focused or data centered, something like security operations center analyst or something like that. Again, an AI system is likely to get better than you at it in a short period of time. And then you, as the human, need to upskill and move on to the next task where you add value to the overall equation. So it's a really interesting thought exercise about managing the human side of the changes that the AI systems are bringing.
So all right, we'll leave it there for today. I know this week's was a little bit longer than our previous weeks. Hopefully some interesting thoughts in there for any of you. We will talk to you next week on This week in AI security from Modern Cyber. Thanks so much.