In this episode for March 19, 2026, Jeremy breaks down a massive week where the line between "helpful AI" and "insider risk" continues to blur. From 87% vulnerability rates in AI-generated code to the rise of "Prompt-ware," the episode covers the accelerating operationalization of AI by both developers and nation-state adversaries.
.png)
In this episode for March 19, 2026, Jeremy breaks down a massive week where the line between "helpful AI" and "insider risk" continues to blur. From 87% vulnerability rates in AI-generated code to the rise of "Prompt-ware," the episode covers the accelerating operationalization of AI by both developers and nation-state adversaries.
Key Stories & Developments:
Episode Links
https://blog.rankiteo.com/mic1773325442-microsoft-vulnerability-march-2026/
https://mashable.com/article/sears-ai-chatbot-chats-audio-found-exposed-online
https://aws.amazon.com/security/security-bulletins/rss/2026-009-aws/
https://aws.amazon.com/security/security-bulletins/rss/2026-008-aws/
https://aws.amazon.com/security/security-bulletins/rss/2026-007-aws/
https://www.schneier.com/blog/archives/2026/02/the-promptware-kill-chain.html
https://thehackernews.com/2026/03/microsoft-patches-84-flaws-in-march.html
https://www.cnbc.com/2026/03/13/elon-musk-xai-co-founders-spacex-ipo.html
https://www.foreignaffairs.com/united-states/americas-endangered-ai
Worried about AI security?
Get Complete AI Visibility in 15 Minutes. Discover all of your shadow AI now. Book a demo of Firetail's AI Security & Governance Platform: https://www.firetail.ai/request-a-demo
All right. Welcome back to another episode of This Week in AI security coming to you for the week of the nineteenth of March, twenty twenty six. And I know I say this every week or it feels like I say this every week, but we have a ton to get into. So let's go ahead and get started. I'm going to blow through some of these early stories about things that we've seen in the past, which are kind of vulnerabilities in different kind of platforms.
So we start with a Microsoft Copilot vulnerability around email summarization. Malicious hidden instructions inside emails can get executed by the LLM in direct prompt injection. We've talked about this a number of times. It has been patched by Microsoft as of the time of this recording, but this is a repeatable threat vector in production systems. It has been described by many, many experts, and it's happening inside products being released into production by one of the largest companies in the software industry.
All right, moving on to our next story around the Sears AI chatbot audio and chat files found online. Three point seven million chat logs, one point four million audio files. Transcripts from. Here's Sears Home Services AI chatbot Samantha going all the way back to twenty twenty four discovered. Kudos over to Jeremiah Fowler. You know, these are personal interactions from consumers with an AI chat system. So this one is not about the chat system itself. It's about again kind of the infrastructure around chat systems. We've covered that theme any number of times here on this week in AI security.
All right, moving on. We've got a trio of issues from Amazon on various parts of their platform. This is a improper trust boundary enforcement in Cairo, which is AWS AI powered IDE for software development. Really interesting approach to IDE and kind of copilot driven software development, if you will. It actually blends something that a lot of people, myself included, love from the old days, which is, you know, taking product, uh, PR documents and turning those into product features impacted all versions of Cairo before zero dot eight dot zero. So the fix is to upgrade. This is, you know, the same category of vulnerability that we've covered about many other IDE environments.
Next, moving on to the next one. This is an improper S3 ownership verification and Bedrock agent core starter toolkit in that name of the service on its own is a little bit of a mouthful, but it does allow a remote attacker to inject code during the build process via this S3 ownership verification bug. It only affects users who built with the toolbox after September twenty four, twenty twenty five, on versions before zero dot one dot one for fixes to upgrade again. So this is again, you know, one of these supply chain risks in the tooling around AI development.
Moving on to the last one from the Amazon ecosystem. So this is a CVE in the AWS API, MCP file access restriction bypass. Again a little bit of a mouthful, but one thing on this mouthful, just real quick, you're seeing the rise of these kind of very niche, targeted services that are designed to enable and accelerate AI development. And with these, again, we're seeing a lot of these challenges. So this is again, a, a version pegged vulnerability solution, as always, is to upgrade to the latest version. Credit to the Varonis Threat Lab team for uncovering this.
All right. Moving on to some more interesting things. So from the team over at Dry Run Security, they tested Claude Code, Codex and Gemini building real apps. And what they found is that in eighty seven percent of the pull requests, uh, at least one security vulnerability was introduced. So one hundred and forty three total issues across thirty eight scans. All agents failed on some of the same things, which is really interesting because that does lead you to think that they're using a lot of the same training data, and they do handle certain tasks in a similar way. So insecure JWT handling. No brute force protection. Vulnerability to token replay attacks. Insecure cookie defaults. Authentication kind of being the thematic thing that spans across those categories. A little bit of a difference between the different models. As always, one of the things I will say is that the providers that were tested here are at least known to be pretty responsive to this type of feedback.
All right, moving on to the next topic, which is an article from Bruce Schneier, a well-known security expert and some co-authors who are proposing a little way of thinking about prompt injection, not so much as prompt injection of its own, but calling it prompt-ware with a full seven step kill chain that mirrors traditional malware campaigns. The fundamental thing to think about here is that there is no separation between a prompt and code. And if you remember back to a couple of weeks ago coming out of the unprompted conference, I mentioned that this was one of the things that was the the biggest mind shift in my own thinking about securing AI, which is that you really do have to think about examining the prompt. The prompt is your code. It is your set of instruction. So just something to think about here.
Moving on to the next story. This is from IBM X-Force. They identified a PowerShell backdoor likely generated by an LLM. They're calling it "Sloppily," which I think is a play on AI slop. It's used by hive zero one six three threat actor in an interlock ransomware attack. Despite calling itself polymorphic, it's actually relatively basic in terms of malware has a number of clear fingerprints around code comments, structured logging, named variables. Those are all things that are a little bit rare in human written malware. And that's one of the clues that this is actually LLM delivered. Uh, there's a little bit of, uh, you know, hype around this in, again, in the kind of like polymorphic versus basic positioning of it. Still significance though, of evidence that AI tools are actively accelerating ransomware and criminal cyber criminal operations.
Moving on. Supply chain attack using invisible code hits GitHub and other repositories. We've talked about this any number of times, including disclosures that the team has made around hidden ASCII control characters and ASCII smuggling, as the technique is called. So now this has hit GitHub through JavaScript interpreters that, uh, apparently have been attacked going all the way back to twenty twenty four to hide malicious prompts fed to some of these systems. Um, you know, protecting against this is actually not very hard. You do a little bit of input sanitization and validation, do a little bit of a quick decode, pass through an eval step where you strip all this stuff away and you can defend against this pretty regularly and pretty consistently.
All right, moving on to the next story. AI as tradecraft how threat actors operationalize AI. So this is a report from the Microsoft Threat Advisory Group. Microsoft TAG I think is the name of the organization. They detail how nation states and cybercriminal groups are using AI for reconnaissance, social engineering, malware development, and more. So this is actually based on some real world research and examination of real world cases. Kind of outlines the operational playbook now. Um, think of this not just in the sense that AI is being misused, but that AI is a key tool in the TTP collection of your adversaries.
Awesome. Moving on to the next story. And that is a story over on the Guardian about a company called Irregular that tested AI agents from Google X, OpenAI, and Anthropic in a simulated corporate environment, asking them to exploit vulnerabilities everywhere. And, you know, it turned out that the tools, again, once kind of played into the scenario. Uh, we'll use every trick and every exploit in order to accomplish their goals. We've covered this theme any number of times on the show before. The co-founder of Irregular Dawn said AI can now be thought of as a new form of insider risk. So think about this when you're thinking about deploying AI agents inside your corporation. What controls, what limits? What restrictions are you putting on them to prevent this kind of insider risk, where potentially a malicious employer, disgruntled employee might say, well, I'm not going to do this thing on my own, but I can go talk to this AI agent and get it to do something, you know, to get revenge on the corporation or whatever the case may be.
All right. Moving on to our next story. This is a story out of Microsoft again, actually out of XPO, an autonomous AI vulnerability discovery platform, currently number one on HackerOne, found a number of CVEs, including one with a nine point eight remote code execution flaw in the Microsoft Devices pricing program, which is a super obscure place to look. And that's actually one of the reasons I wanted to highlight this story. You wouldn't think about the pricing program as being an attack surface that's particularly attractive. But remember, any foothold into an organization is potentially valuable because from there you can potentially pivot. You know, you hear about techniques like lateral movement, privilege escalation, etc.. It doesn't matter. You know what the entry point is? There are in most organizations that I've witnessed, there is some little misconfiguration, some little cross tenant permission, something that can be found. And AI, the point of bringing this story up is that AI will be thorough and check every single one of those things, whereas a human actor on their own might actually run out of steam and not find those things.
Last story, just from the kind of general AI ecosystem, I just found it interesting. You know, these organizations are moving so quickly. This is not purely a security story, but we're seeing an exodus of executives from xAI. And one of the key things is that xAI has now announced basically that it has to be rebuilt. And this is a you know, one of those stories that for me, as a founder of a startup, I think about like, boy, how much would it be? How big a pain point would it be to reach scale, reach adoption and then realize that, you know, you've got a fundamental architectural or structural flaw that is so big that you won't be able to continue forward without going back. And I wonder if that's part of the motivation for the kind of force merge of xAI in SpaceX and some of the other programs. So during the last little period, Zhang Guodong Zhang, Jimmy Barr, Tony Wu, Toby Pullen have all departed. So only two of the original founding team are still there. Um, talent instability is something to watch out for. So if you're thinking about, you know, who's going to be your AI partner, this is a signal that a lot of corporations are thinking about as a risk and a potential vendor that they might partner with.
All right. And last but definitely not least, and part of the reason I wanted to leave a little bit more room to talk about this last story. So this is a story in Foreign Affairs magazine. Just a full disclosure here I am a personal acquaintance of Fred Heading, one of the two authors of this paper. But I thought the paper was super, super interesting. And you can see, you know, the title of the paper, America's Endangered AI How Weak Cyber Defense Threatens U.S. Tech Dominance. So we have this kind of, uh, opposing forces here. You know, us American companies, by and large, are the companies leading the AI revolution, but adversaries are stealing it and weaponizing it back. I was recently at a briefing, uh, that was a little bit off the record, but one of the things that I can say without naming the individuals is that one of the individuals in the room showed some evidence that one of the foreign AI model companies was able to accelerate their development due to an attack on one of the major. Let's say named companies that everybody listening to this podcast would have heard of.
So one of the leading LLM providers had been breached by a foreign adversary, who then went on to launch, uh, what is ostensibly a state backed corporation's LLM models. So you've got kind of all the innovation and then the theft of that innovation. Um, but at the same time, you also have foreign adversaries leveraging some of that US innovation, not only to launch their own models, but also to launch their own campaigns. So the Anthropic team, for instance, reported at least thirty companies and agencies that have been targeted using its own technology. We've covered that story on the show previously, and Palo Alto Networks has also, through their Unit 42, identified sixty or more Iranian aligned cyber groups that have been leveraging AI tools. And again, we've talked about how AI tools are just part of the adversary stack now.
So large AI training clusters and some of the very specific, nuanced inputs that go into them, whether that be training data sets, whether that be model weight repositories, etc. those actually represent a huge amount of the strategic value. If American companies are operating on a level playing field, there are a lot of advantages. But if there is cyber espionage happening that steals some of this corporate IP, that can then be used not only to launch competing models, but then to launch cyber attacks. That is a question that the authors pose is like, well, what are the defenses that the US is willing to put in place in order to defend that advantage? And there is a little bit of a risk or a threat that in the author's perception, not enough is going into that defensive position. And that's the philosophical story that I want to leave it on for this week in AI security for this week. Thank you so much for listening. As always. Like and subscribe, rate and review. And we will talk to you next week. Thanks so much. Bye bye.