I think the best description of the AI governance problem during this interview was the title of the award-winning movie, Everything, Everywhere, All At Once. Generative AI has been disrupting businesses, products, and vendor risk management for a few years now. FireTail is one of the companies trying to address this problem for enterprises, so we check in with Jeremy Snyder to see how things are going.
Interview with Allie Mellen about her new book, Code War: How Nations Hack, Spy, and Shape the Digital Battlefield
We're VERY excited to check out Allie's new book, which will be released on St. Patrick's Day 2026! The timing could not be better, as her book is perfectly positioned to provide some much needed perspective on the cyber aspects of the ongoing war in Iran.Is it normal to see the use of wipers on healthcare companies in the midst of the conflict? Is there any precedent for hyperscaler datacenters getting targeted (some of AWS's EMEA regions are still recovering)? Check out the conversation to find out!
Segment 1 Resources:
This week, Jeremy Snyder from FireTales with us to discuss the double-edged sword of AI. Then in our second interview, Allie Mellon joins us to talk about her new book, Code War. Finally, in the enterprise security news, vibes and funding, starting to see some disruption in the vulnerability management space, finally, tons of new free tools, lots of essays, lots of reports, lots of breaches, the talks our hosts are giving at the RSAC conferences here, and someone is selling an actual cone of silence.
That's our squirrel story for today. All that and more on this episode of Enterprise Security Weekly. This is a security weekly production for security professionals by security professionals. Please visit securityweekly.com/subscribe to subscribe to all the shows on our network.
It's the show where we talk security vendors and aren't afraid to name names. It's Enterprise Security Weekly. Right now with Tanium, IT and security teams complete patching programs in minutes, not weeks. Reduce software vulnerabilities by up to 97 percent. Cut incident response times by up to 99 percent.
This is not just IT, this is autonomous IT, making business your business unstoppable. Tanium, autonomous IT, unstoppable business. See more at securityweekly.com/tanium. In today's threat landscape, cyber attacks move fast and security gaps, siloed tools, and slow response can turn small incidents into major breaches. That's why organizations need a proactive, unified defense from a single trusted partner.
Set in a one-wayfinder threat detection and response delivers proactive, scalable defense built for modern threats, unifying expert led readiness, threat hunting, incident response, and 24/7 managed detection and response. Harnessing Google Threat Intelligence, industry-leading AI technology and unmatched expertise, Wayfinder helps teams see attacks sooner, respond faster, and reduce risk across the entire attack lifecycle.
Ready to stop attacks and accelerate response? Learn more at securityweekly.com/sentinelone, Wayfinder, where threat intelligence, AI, and human expertise deliver true cyber resilience. Welcome to Enterprise Security Weekly, and Happy National Skin Barrier Day, which I feel kind of attacked. For this podcast, I'm going to look a bit like a lobster.
I spent some time on a boat yesterday, and this is episode 450 recorded on Thursday, March 12th, 2026. I'm your host, Adrian Sinabria, and joining me is Mr. Joshua Marpeh. How are you doing, Josh? Pretty good, Adrian. Pretty good. Looking forward to RSA next week, or the week after, sorry, my apologies.
It's getting short time, and I'm like, "God, I should remember everything, packing bags, and doing all sorts of stuff." I have a lot of writing to do before RSA. I'm promising a lot in my talk, and it needs to get published before my talk.
Also, we'll have Amy Nelson with joining us for the news, but just me and Josh for the interviews today, and congrats on the new role, by the way, Josh. Oh, thank you. Thank you. Thank you. Yeah. I'm at Finite State starting on Tuesday. So I'm really excited about that. They're moving. It's weird.
I've been self-employed for about 15 years. So having an actual HR department, and they're like, "Okay, we need you to do this and this and this and this to get onboarded." I'm like, "Oh, wow. There's actual processes. Cool." Yeah, big change for sure. 15 years. Yeah, I've been solo for three years.
Yeah, I couldn't imagine going back at this point, but who knows? After a while, stability has its sway, if you know what I'm going to say. Yeah. All right. Quick announcement, then we'll get into our first interview. Security weekly listeners can save $100 on their RSAC 2026 all-access pass.
RSAC 2026 conference will take place March 23 to March 26, as you heard Josh say, week after next in San Francisco. To register using our discount code, you can visit securityweekly.com/RSAC26 and use the code 56U5SECWeekly. We hope to see you there. All right, today we are talking about AI governance and shadow IT with Jeremy Snyder.
He's the co-founder and CEO of Firetale. He speaks the language of business, having been directly involved in a lot of acquisitions, but also speaks a lot of human languages as well. Welcome to the show, Jeremy. Thanks, Adrian. So happy to be here. Yeah, lovely to have you. And it's interesting going over the list of languages you speak.
My partner speaks three and three of the same that you speak, but she's Brazilian, but she grew up in Miami. So you can immediately guess which three that would be. Yeah, Portuguese Spanish and English for sure. Yeah, Miami, the capital of Latin America, as I've often heard it called. Yeah. Yeah. Indeed.
And you also speak, I think Finnish, and was it French or the other two? Finnish and French. Yeah. At a pretty fluent level, and there's a few others I can get by and certainly conversational enough. Yeah. And with a lot of these. You probably know probably. Yeah, exactly.
And similarly, like, you know, German and Swedish and some of the others that I, you know, not really a challenge. Yeah. Yeah. I can go to Australia, England, Scotland, the United States, various sections of the United States, which is actually not a lie when I moved to Louisiana. It's very different English. It's challenging.
But that's about it, you know. So congrats. Well done, sir. Thank you. Thank you. Product of my upbringing. All right. So yeah, you know, AI governance. So both me and Josh are actually faculty at ions. So we do a lot of advisory calls with enterprises on the problems that they're seeing.
And if I were to guess, Josh, you've probably taken just as many AI governance calls as I have. Hundreds. Hundreds, I think, is the right term over the past few years. Ever since chat GPT went sort of wildfire, it's been insane. And it's gone in so many different directions, depending on what the latest magazine article in the back of the airplane pocket was, that it's fascinating.
So I'm really fascinated to hear about your take on AI governance, Jeremy, seriously. Yeah, I think I'm sorry, go ahead, Adrian. Yeah, I was going to serve it up for you with a little more context. I was just going to say, like, it's pretty typical for new technology to create governance problems, right?
Like when SAS became a thing, you know, when a BY, you know, bring your own device, people start bringing iPhones into the enterprise, you know, these were all challenges. But it didn't happen this fast, right? And it didn't happen at this scale, like with this technology, you're not just worried about somebody using cloud or chat GPT via web browser.
All of a sudden, it's baked into all your other products that you've already done due diligence on. You've already done supply chain, third party risk management. But now all of a sudden, like one day, notion got AI. And now there's prompt injection concerns, right, indirect prompt injection and stuff like that.
I wonder if you could -- well, and then when agents -- we started seeing agents and having AI have access to file systems and being actually installed on systems as an agent, you know, I think that further, you know, made this problem more difficult with integrations and things like that.
And you know, there's a quote -- I saw Caleb Seema, who I think most recently was CISO at Robinhood, he posted something on LinkedIn basically saying, "We just deployed more attack surface in 12 months than we built in the previous decade. And most security teams have zero visibility into it." So I thought that would be a nice quote to kick things off here.
Yeah. I mean, look, it certainly tees up the first part of the problem that I really think about, which really is that visibility and getting that visibility. To Caleb's point, I mean, I give these parallels of different points in my career from when I was a practitioner and then when I made my first move away from being a, you know, hands-on keyboard cyber practitioner for me, which was at AWS, and I saw this like new thing of the cloud.
From my practitioner days to your point, like the first thing that kind of happened as a new technology was a BYOD mobile device. In my case, it was a Palm Pilot. And it was a Palm Pilot that, you know, a CEO came in and said, "Hey, I want my contacts and my emails and my calendar on this thing.
And I want it now." And by the way, like, no, it's not an answer. You know, he didn't say that part, it was kind of implied, right? But you know, when you think about these waves that we went through, right? So I think about that early stage of BYOD, it was like business executives.
It was a pretty limited audience. You could think about it. You could kind of risk model it and you could be like, "Okay, well, you know, please put a password on your Palm Pilot because my biggest threat is that you leave it in a taxi." And, you know, and then nobody can get the data off it.
So like, "Okay, I've got some risk mitigation there." And then, you know, later in my career, we think about the cloud and, you know, that was really developers and infrastructure. So it was still kind of limited to certain teams within the organization that had key use cases.
We definitely had the same thing where the business got ahead of the security teams and we opened up this governance gap with lack of visibility and things changing quickly. And one of the other things that I really observed during that switch was the tooling that we had pre-cloud didn't align to the threat model or the risk model of the cloud.
What do I mean by that? What we had up to that point is we had like, you know, network firewalls. We had a lot of perimeter defense. We had end point defense, right? You know, we had EDR, we had EPP, things like that. And then we get to the cloud and our biggest risks are simple infrastructure misconfigurations, IAM over provisioning.
And we had like very ephemeral infrastructure where EPP and EDR, I know a lot of companies that never bothered to deploy them on cloud because they were destroying their instances. So frequently, they were like, yeah, it's just not worth it to install CrowdStrike or something like that.
What is your dashboard for any of these tools look like if you create and destroy a thousand? Exactly. Like what's the point of that? Yeah. You know, the juice is not worth the squeeze, right? So to speak. You know, that was also a somewhat limited audience.
The thing to Caleb's point is that like right now, the analogy that I use is that movie that came out a couple of years ago is that it's everything everywhere all at once. You know, it's like every part of the business has a legitimate use case for something that they want to do.
Whether it is me wanting to, I don't know, install a local agent that does prospecting for me from a sales perspective, or in my case, one of the agents that I use helps me create my weekly summary that for this week in AI security podcast, or whether it's my developers who want to accelerate development, or whether it's log analysis at the back end of our product, like there are legitimate use cases everywhere within the business.
And we go back to that same thing where the business got ahead of security. There's not great standards of AI visibility and observability. And again, the risk model doesn't align to the tooling that most organizations have in place. So these are the kind of governance and security gaps that I've seen opening up over the last two, three years in AI adoption.
It's a lot of the same kind of trends with technology nuances. Yeah. Yeah. And I like the everything everywhere all at once comparison there, because like even if you lock it down on corporate own systems, people still have, because the tech companies are so invested in putting these things in front of you, like financially invested, right?
Like they get to somehow make their money back on these models, and these giant data centers that they're allegedly building and using that on your phones, like everywhere you look, it's front and center, like in the web browser, you know, now, like I got rid of Chrome on this device, because it started, there was a Gemini agent that was sitting in the test tray.
Like I didn't ask for that. Like even if the browser wasn't open, you know, that was now loading up, you know, so now I have a co-pilot and a Gemini agent sitting in my test tray waiting to be used. And I use Claude. I don't use either, though.
So, you know, it's just all this extra tax surface, even extra resources being used on my laptop, you know, it's, yeah, so I wonder, like, what are the questions that folks come to you with? Like what are you hearing in every one of your calls here, like, is it, are they more focused about what they want to do with governance, or is it kind of like we need to do discovery everywhere first, and figure out what we don't know, like get an idea the size of the thing, and then maybe choose somewhere to start focusing.
Yeah, it is the latter. By the way, I live in Northern Virginia, those data centers are real. We see them like, you know, if you ever land at Dulles Airport, just go west and drive down any road and you see these low set buildings with no windows that seem to go on forever, but have high fences and generators in the parking lot, and there are literally hundreds of them.
I taught my kids to recognize data centers at a very young age. So I'll just, just a brief aside, I had to comment that they are real. But to your point, Adrian, that it is the latter part of the question that you asked.
It is the, what we're not hearing, I'll tell you, is nobody is saying, I need to block all AI. I think like everybody realized that that's a career limiting position to take, unless you work in a very particularly regulated niche, or you work within a, I don't know, a government agency that has certain restrictions or something like that.
So the question is really, I need to establish visibility. I need to maintain that visibility because I know that there's new stuff literally every week, sometimes every day. So what I discovered today, if I just do a point in time exercise, that's going to be outdated by next week.
So I need to like kind of gain an established visibility ongoing, and then make a risk-informed decision about what are the right policies for my organization. And that might be like, for the whole org, or that might be like, well, development, I can trust a little bit more or less, depending on your developers, you know, more or less, and they can do these things, but I'm really concerned about the people handling sensitive data in my finance or HR teams, and so I can make kind of like tailored policies around what is right for different teams, for different use cases.
Maybe we've contracted with certain providers, maybe we have data sovereignty requirements, but like the ability to understand the risk that is already present, how it's changing, and then apply policies to that fit the organization to enable adoption. That is primarily like the both kind of the requirements as well as the end goal that we hear from our customer conversations.
You know, I mean, when you say this, and specifically the architecture point that you raised, which is that we're looking at this piece and this piece and this piece, not the all-encompassing perimeter that we had 20 years ago, it's fascinating how what you're talking about really melds in with zero trust. It really melds in with data class.
You mentioned data. You know, you're using this data in this piece over here, in this module, this application, this whatever. Let's check in some of the AIs using that data or has access to that data. So it really works with data classification initiatives, zero trust initiatives.
And this is one of the things Adrian exactly on those lines calls that I go on look, you know, governance and inventory of your AI, of all of your things, really, you get two for one, three for one, four for one between zero trust, between data classification, post quantum cryptography, AI governance, all of those things.
And I'm wondering how you phrase that with your customers. If you don't mind. Yeah. No, it's a great question. One of the things that we always kind of review with customers when we think about trying to tackle this problem, I don't get into some of those aspects that you talk about.
Let's say like zero trust and data classification. These are real problems. They are they should be benefits where you make progress, right? And I always hear, you know, zero trust is not a solution you buy. It's kind of a philosophy you embrace, right? And I think, I don't know, everybody's had that same conversation with a security leader that they know who thinks that there's some silver bullet to that kind of gets them zero trust or lease privilege.
And I've also heard it applied to lease privilege. What I think about is I ask a lot of these customers, tell me what your organization looks like right now, because I can almost guarantee you that the technical architecture of the organization is way different than it was 20 years ago, where I had my classic, you know, perimeter defense, DMZ internal network, and I had a pretty open network topology.
That is not what most modern organizations look like. What do they look like today? Well, we've got, you know, maybe a mix of in person and remote workers spread in X number of locations. They've got 20, 30, 40 SaaS applications that they're using for different parts of the business.
My data is scattered across those SaaS applications, plus one drive, SharePoint, Google, Google Drive, whatever the case may be. So like, that is the reality of the modern organization. And I think, you know, trying to apply outdated security strategies to that is just not beneficial.
So the thing that I always say to them is like, okay, AI is being kind of integrated into all parts of that business, whether it's like individual SaaS applications, like you said, notion or something like that, or whether it is, you know, actual LLMs that you're using, cloud, chat, GPT, et cetera, whether it's like back end models that you're doing application development work on top of on like Amazon, bedrock, or Azure AI service, et cetera.
The way that we think about it and what I recommend to customers, and I'm not saying this is the only strategy, is like, okay, what's the common thread across all of that? And the one common thread that we've observed is that all of that traffic happens over APIs.
So if you actually apply or a layer of API observability to what you're doing, and then you apply some specifics around kind of understanding, all right, from observing these X thousands of API logs, I realize that your users are using these systems. I know the risk model of that system.
I can now give that to you as a security leader and tell you the risk represented by Jeremy and the development team who are building on top of, you know, clouds on at 3.7 on Amazon bedrock. I know exactly what the vulnerabilities are. I know what the cloud configuration issues related to that might be.
Now you can go make a risk informed decision, design a policy around that, bring that to the organization. But from our perspective, like that is the one common thread that we see. It all happens over APIs, and the other, I guess the second point just to build on that real quick is that the most important thing for a business leader to understand in terms of this is actually the content.
What I mean by that is like it's not enough to know that their API calls going. We have for the first time this like non-deterministic technology where the most important thing is not actually the numbers, but the words that are going into these calls.
And the API layer is where you can actually observe the payloads, and you can then like extract them and understand was there, you know, corporate sensitive data at risk in one of these transactions. But I'll pause there and see if that kind of makes sense.
It makes total sense, but I've got to ask you a question because I've run into this more than more than all the time. What percentage of companies that you work with go instrumented APIs that log stuff? We don't have that. Yeah, that is hard and getting your developers to sign on to that is hard.
But there's thankfully a workaround. You can pull them off the back end. If you hook up the organizations, let's say like Amazon Bedrock environment or Azure AI Foundry or your open AI enterprise subscription, there are audit trails that you can tap into that have the same data.
You just pull them off of the server or sorry off of the service provider as opposed to getting them pushed to you from the application. And that is a little bit of a workaround that we found to that because yeah, getting developers to instrument an application at the right layer for the right observability data that you want can be very, very challenging.
But yeah, thankfully, thankfully there is that workaround. So do you guys, you know, you're talking about like kind of the intentional use of AI through internal enterprise apps, things like that. There's a lot of other tools that you can use entirely on-prem as well or in your own private instances, blinking on the name of the common one that I come across right now.
But also the like the SaaS side of things, since you're talking about architecture, are you also getting into or finding it necessary to have like browser extensions that have visibility into a web-based SaaS products as well. How do you capture all that? Do you need like a mix of browser extension, you know, some kind of SaaS-y solution?
Like talk me through because it's everything everywhere all at once, like one of the other pieces of the architecture look like. I mean, you're spot on Adrian. So there is an endpoint agent that does a little bit of local instrumentation that creates a local proxy service that ships a copy from like an outbound copy of requests.
So that's one of the components. There is a browser extension. There's a workspace integration. And then there's the piece that I mentioned a second ago in response to Josh's question, which is pulling the logs in from the service providers themselves. So there are a lot of different data components.
So I think like at last count, we counted something like 12 different data sources that we ingest kind of normalized to bring that visibility layer. And we figured out a way to kind of create two different streams. One is for applications and the other is for employee usage.
And we intentionally divided those because we found that they're very different use cases. The use case of me monitoring my finance team to make sure they're not uploading sensitive data via chat GPT is very different from monitoring my chat bot to make sure it doesn't give a customer a hallucinated answer in response to a customer support request.
So we created one stream for apps, one stream for users. And just as you said, we've got about 12 different data sources that we tap into to pull it all together. That makes sense. I like that. I've never thought of that separation, but I really do like that. That's actually really cool. Yeah. Thank you. Sorry.
I just I like that it's it's just a logical separation, but it really makes it clear as to what you're looking for and and why and what you're looking for, not just where you're looking for it, but the types of data types of responses, types of issues you're looking for. That's really cool. Well done, sir.
Thank you very much. I mean, I'm pretty proud of what the teams put together over here. So one of the other things I want to talk about here is the just the nature of risk with this technology a little bit different, you know, than than just DLP a thing, right? Yeah, 100% Sure.
We're worried about data leakage, but we're also worried about people being misled by hallucinations. And I imagine like with a product like this, is there also crossover between IT because with my CTO hat, I'm also worried about, you know, do some of my employees need more training? Are they using this product incorrectly?
You know, do they need that they putting bad prompts in there like, I need some visibility into what they're typing into these prompts and how they're using to know if they're unsuccessful because they don't have enough training or they're unsuccessful because the model's crappy and we should use a different product or a different model.
Yeah, so there's a couple things in there that I want to unpack quickly. And I know we've only got a little bit of time left. So one is exactly, as you said, there is this question and you hear a lot of organizations with like C-level initiatives, 100 AI projects this year, you know, wherever where the change is being driven from the top, that's great, but not everybody knows how to do it.
So to your point, you kind of want to understand what people are doing. So one of the things we figured out is with that, especially with that employee data stream, you can actually then apply semantic analysis to all the log files. All the prompts are in that data stream.
And then you can go ask, what are my employees working on? Where are they struggling? And you can actually identify that, you know, some of your initiatives might be failing because maybe your employees don't know how to use the tools. Maybe they don't have the resources, maybe they don't have the training.
So you get a nice visibility onto all of the employee requests across the different engines, et cetera. So that's one side. The second part on the, more on the application side is where we see the different risks by different LLM providers. On the employee side, you're mostly using black boxes.
You have no control over what cloud engine is being used on the back end when you integrate, you know, cloud desktop or cloud code or whatever. So you don't understand then necessarily the risk of the LLM that's being used there. So when you're picking a model off of a marketplace like an Amazon bedrock or a Google vertex service, something like that, we've run a bunch of red teaming of those models and lots of other companies have done the same.
So I don't want to say this is something that only firetail is done by any means. And what we figured out is there's kind of 25 core vulnerabilities that are known problems. These are everything from your like, oh, base 64 prompt injected, where, you know, if you send a prompt that's base 64 encoded, it may not get checked for prompt injection, it may just process.
And we found there's kind of five categories. One is around malicious content creation, next is toxicity and misinformation, third is jail breaks in social engineering, fourth is exploitation, and then the fifth is reliability and reasoning. So that's, you know, those five categories are those different risks. We found that no model is perfect.
The only question is what risks are in the model that you're using and then what do you need to do about it in response? So let's say, for instance, that you're using a model that has a high risk of hallucination. Well, great.
Well, then what I really need to do is I need to check the response payload coming back from the LLM and do a sanity check on that in some way. So maybe you like my mitigation strategy there is to have some kind of like response validation technique that runs before it goes back to the customer from that, I don't know, customer service chatbot or whatever the case may be.
So understanding which risks are inherent to the model that you're using and then making the right decision for how you incorporate that into your application structure is actually really important. I think where we are with that right now, by the way, is still early experimentation.
But what I do expect is like, I think this is the year that this goes real and this goes into production. And it becomes the norm that we start interacting with chatbots and, you know, customer service agents. Yeah. I would 100% agree. I would be very firm about that agreeing with you. I think Adrian would too.
Adrian, you had a last question on there that I think is a fantastic one. Do you mind? Go ahead. Go ahead. Grab it. You asked. All right. If you want to. I was going to flip. But I mean, the one that Adrian thought of, which I thought was a great question was about employees concerned about employee privacy.
Yeah. And because now you're talking about so many different tools, you just mentioned I'm looking at the payloads to see if my employees get it. Is there a privacy aspect there? There definitely is. We will say that, you know, our team is split between the US and Europe and we see very different concerns in the two locations.
I think in the US, it's generally pretty well accepted and, you know, usually explicitly stated in your onboarding in your corporate IT policy that your employer has the right to monitor your usage of company devices or company systems, whatever the case may be. Even though that is very often explicitly stated in Europe, the expectation of employee privacy is very different.
So for instance, one of the requests that we get from customers in Europe that we are able to fulfill is they say, well, I want you to look at it, the prompts to look for things like data leakage and PII leakage and things like that.
But then I want you to suppress the prompt, the actual prompt, maybe encrypt it, maybe suppress it. Maybe I need you to send it to a lookup table for later forensic, you know, if I'd have to do an internal investigation, but I don't want it to be there for my security team to accidentally just see, you know, what did Josh go, you know, prompt chat, GPT-4.
So we give them the option to pick what's right for them, but it's a great point. There are different levels of expectation around employee privacy, around, you know, what you are and are not doing. So yeah, again, it's like this, every organization has to make some trade-offs around it. So yeah, that's how we see the problem though.
What we need is an LLM to read all of those so that the LLM is not done anyway. Yeah. That's an easy rabbit hole to get done. If I could ask one more quick rapid-fire question before we wrap up here. You mentioned reliability and reasoning, you know, so I've seen a lot of security leaders kind of pushing back on like the, you know, the functionality of the models, the reliability of the models, not being their problem, right?
They don't want to own, you know, is the AI working correctly? Even, you know, kind of the red teaming piece of it when we talk about red teaming, a lot of that wraps around hallucination, how likely it is to hallucinate things like that.
And security leaders really don't want to be in charge of that on top of everything else that they already have. What are you seeing? Generally, like, they are kind of stuck with that because the tooling is also paired with the security priorities as well.
What I'm seeing is that the security teams are the people who have visibility onto it. So it turns out that all of the red team, the automated red team tools like the one that we use also check for hallucinations and for reasoning failures and things like that.
So you kind of just get that data as part of the output if you're going to be doing this anyway to assess security risk. And then you're just the person who knows about that. Now whether the development team listens to you about that and says, you know, hey, don't go use this model because it's going to give you bad hallucinated results, whether the development team listens, that's an organizational problem.
I do understand why security leaders want to say that like the quality of the content is not a security problem. That's kind of an application problem. I understand that argument, I just think that fundamentally like it shouldn't be the case that we're still reverting to this kind of mindset where like, that's your problem, not my problem.
We should all be in the same, you know, the same goal of getting the best result for the company and for the organization and ultimately for the customer. So if you've got that visibility, you should use it. You should share it, right? It doesn't hurt you.
It doesn't cost you anything to do that and it's a natural output of the scanning tools that you're probably using already anyway. I don't love it. I don't love the positioning is my point. But I also have been, for a long time, I've also been saying like development teams and security teams like, we need to drop this argument and this rivalry and get along.
The first time I set up a sim in 2004, all of a sudden it was my job to do root cause analysis every time we had knowledge, right? I think security related, but I have it if it's an app crash. Yes. Yeah. Yeah. Yeah. All right. With that, I think that's a good stopping point there.
Jeremy, thank you so much for joining us today. It's a wonderful conversation. Really appreciate you taking the time. That really my pleasure, Adrian Josh, really enjoyed the conversation. Love the questions. Absolutely. Pleasure. Thank you. Stay tuned. When we come back, we're going to talk with Allie Mellon about her exciting new book, Cold War.
When it comes to the UK's critical infrastructure, cybersecurity isn't just protecting IT system or data, it's about keeping the lights on. Airbus Protect specializes in OT security, shielding the industrial systems that power our transport, utilities and government. With UK national security clearances, they provide a sovereign first approach that most providers can't match.
They bridge the gap between complex engineering and modern digital threats. Secure your operations with a partner who understands the high stakes of British industry. Speak to their experts at securityweekly.com/airbusprotect. That's securityweekly.com/airbusprotect. Ever feel like your network infrastructure's stuck in the past? Meet Meter, the company reimagining what it means to get and stay online.
Meter delivers full stack networking, wired wireless and cellular, all in one seamless, scalable and secure solution. From hardware design and firmware to software deployment and support, Meter streamlines every part of your network while giving you all the control and transparency you need. Whether you're managing a branch office, retail stores or a massive data center, Meter gives you performance, reliability and the visibility you need to stay in control.
Thanks to Meter for sponsoring. Go to securityweekly.com/meter to book a demo now. That's securityweekly.com/meter, M-E-T-E-R. Welcome back to Enterprise Security Weekly. Ali Mellon joins us today to talk about her new book, Code War, how nations hack, spy and shape the digital battlefield.
Ali is a principal analyst at Forrester Research and has been kind enough to join us on the show many times in the past, usually to talk about security operations, but today's going to be a bit different. Welcome to the show Ali. Thank you so much for having me. I'm thrilled to be here.
It's such an important time for this topic, so I'm very glad we're having this conversation. It is and we will get into that. And I see your book behind you there. It's a little bit out of focus, but I believe it comes out, yeah, there we go.
I believe it comes out the day I publish, so the timing of this is going to be really, really good. Yeah, I'm excited. It's coming out right on St. Patty's Day, which to me is a good omen because I have a lot of Irish heritage. Well, definitely take it seriously with a green beer in hand.
It's not a problem. Perfect. Exactly. That's all I ask. So yeah, starting off here, Ali, what possessed you to torture yourself with a book project? It's an excellent question. I think just my own naivee say, to be honest, because the publisher came to me after seeing me on a podcast and was like, "Hey, you want to write a book?" And I was like, "Sure.
How hard could that be?" Turns out very hard. When people tell you it's very hard, they're right. It is very hard. Definitely worth it, though, because this topic, the whole topic of cybersecurity and geopolitics together is so important right now, especially.
And so as I was writing the book even, I was like, "This is so timely, and now that it's coming out," I'm like, "It couldn't be better timing for this topic." And it's also something that I've wanted to write on for a long time, where I just haven't had the time to dedicate to say, "Okay, let's look at the history of China, Russia, the United States, and see how that ultimately ties to the cyber attacks and defenses they perpetrate today, and make those types of connections." So that was really, really fun.
I'm curious here. Is there a direct connection to what you cover in your day job with Forrester and this topic here? Or is this something you've just always been interested in and have done some writing and talking about? Yeah, this is something I've always been interested in.
I mean, my background is as a computer engineer before becoming a hacker and then a security practitioner. So I've seen the industry from a lot of different sides, and I've also just always been really fascinated by the history of war in particular.
And so these connections I have thought about and been making for a long time but haven't had the opportunity to sit down and actually do that research and fully write about it. And it's really incredible when you get the opportunity to do that because there were so many things where I went in with this theory that these things were deeply connected and the reality was exactly that.
There's so much connection between the cyber attacks that we're seeing and even things like the regulations that were put in place over the years and how that affected exactly how these nations can choose to use specific cyber attacks. So it's a combination of that and then just my unwavering belief and wish to communicate to the world that cyber attacks are one part of a larger picture when it comes to nation state activity and when it comes to the choices of ways to attack that nations have and how important that part is but how it is not in and of itself going to end the world or like be the thing that makes the biggest difference of all time.
Like it is one thing that is a part of a broader effort of coordination that when done really well can be very, very useful for nation states. I get the impression, I read a lot of cyber security books, I'm a member of the Cybersecurity Canon which publishes book reviews and has a Hall of Fame for cyber security books and that Hall of Fame is defined as books that are must reads for anybody in the industry and are pretty much evergreen but I do get the sense from this that this book is not written just for a cyber security audience.
Would that be accurate? Yeah, that was actually very important to me as I was going through this because I do think that in a lot of instances cyber security can get kind of pigeon-holed into this oh it's magic and you can only really understand it if you're very, very technical and I didn't want that to be the conversation around this book.
Most of the book is little vignettes throughout history of the cyber attacks that have been perpetrated, the modern reasoning for why the historical background that led us to that moment and I recognize that listen probably most cyber security professionals have heard some of at least some of these stories.
I think there are little parts where people will be surprised like happily surprised and find some interesting tidbits but I really wanted this to be something where anyone with a passing interest in geopolitics, a passing interest in cyber security, tech, business could pick this up and get value out of it and so that was the audience that I came to this with and I've been really happy to see that those who have read advanced copies especially those who are not technical have learned a lot and felt like it was really approachable which has meant a lot to me.
I think especially as we go into the next five or ten years which are undoubtedly going to be very chaotic geopolitically, it is really important that more than just cyber security professionals deeply understand the value of cyber attacks, the value of cyber defenses and how all of those pieces come together and if there's confusion around that, we're not going to be able to use them as effectively, we're not going to be able to recover in situations where attacks happen as effectively and all of those pieces really are important to me to come together and make sure that we are reaching out to the broader world about cyber security and about why it's so important and where it fits into this bigger puzzle that we have within the geopolitical ecosystem.
I got to say the chaotic comment was, you know, yeah, probably very true. Timely, unfortunately. You know, in pop culture, you know, I find, you know, one of the things I'm looking forward to about this and that I generally look forward to and just reading about how this stuff happens, I spend a lot of my time studying how breaches happen but I don't look at nation state attacks very much, I'm very focused on enterprise stuff, so I'm very interested, I can't wait to check this out.
But yeah, in pop culture, right, like hacking is the MacGuckin, it's the, you know, when the plot gets stuck, when the heroes can't do something, you know, they all just kind of turn to the hacker and they're like, you know, do the thing, right?
You know, and everything is hackable, so that's, you know, we're not going to go into details in the book or the TV show or the movie or too many details or, you know, if they try to go into details, all of us kind of groan and roll our eyes because they get them all wrong.
But yeah, I'm kind of interested to see how pop culture lines up with what's actually happening on the battlefield. Yeah, to your point, very different things, like one of the things that I really loved, so I start and end the book with two different wars, so I start the book with the Gulf War, which may seem confusing to people because that was 1990.
There weren't many, like, cyber attacks rolling around and being used in the way that we see them use today. But the reason that I started it there is because that was a pivotal point in time where the United States was testing out what has become its core strategy over the years, which is multi-domain and joint warfare operations.
So looking at things from Air Land Sea, radio electronic, oh, my God, where we're radio electronic combat and everything in between. Sorry, yes. I know the feeling. Yeah. Exactly. Some of these words are too long. That's just a problem we have in society.
But what's so interesting about it is that moment was a huge validation for a new approach from the US military for the power of collaboration and coordination and how that can be done really effectively and achieve significant outcomes. And so what's interesting about that is if you look at Russia at that time, they had been using this through radio electronic combat for many years.
And this was, for them, a validation of, okay, what we are doing, we know has been working, but now we're seeing that it's working for other nations as well. So we need to continue and push harder into this. Whereas for China, it was a moment of awakening and a moment of, oh, my God, we need to change our approach and change our strategy to integrate our operations more completely and to focus on coordination.
And now we've seen this play out over the past several decades where that coordination is not only helpful in a typical military scenario, but it is one of the reasons why the US has been so successful with the cyber attacks that they perpetrate is because it is a coordination machine.
And we've seen that, of course, there have been changes over the years to try and optimize that system, but it is especially effective when comparing it to, say, what you have in China, where they've had to completely reorganize the PLA several times to try and make this work more effectively.
They've run into a lot of bottlenecks with the resources that they have, a lot of coordination issues before those bottlenecks happened. And then if you look at Russia, it's just a complete mess, because if there's one thing that they're not prioritizing, it's coordination because they're prioritizing competition between each other and all competition for Putin's favor.
So I start the book with that, and then I end it talking about Russia's war in Ukraine, because it's such a good example of now seeing that coordination taking place with cyber attacks and exactly where that's effective, where they're still experimenting and where it just hasn't been as effective.
A great example of this is some of the attacks that we've seen in Ukraine are, of course, things like, "Hey, let's take out the electric grid. Let's do that with the cyber attack, let's do that with a missile." If you look into it and you do it comparison, if you take out the electric grid with the cyber attack, you're down for a few days.
Take it out with a missile, we're talking a few weeks to months to upwards of a year. So there's an element of this that is also what's the best tool for the job, but what's really interesting is there are many cases where they'll use a missile to take out the electric grid, and then they'll use a cyber attack at the same time to do a DDoS on the customer support lines.
So not only have they taken out the electric grid, they've also made it very difficult for anyone to communicate with the company and try and get the systems back online and also just for the company to communicate internally. So it's really interesting to see that once you add in that coordination layer and you choose the right tool for the right task, you can really cause much more significant damage that lasts a lot longer.
And now, I've completely forgotten the question. So no, no, no, no, wait, can we keep going on this? Because your point about the right tool for the right task, because you've got cost factors, missiles are goddamn expensive, cyber attacks can be, but realistically, they're much less expensive over time because I can use the same server over and over again.
It's really hard to use the same missile twice, and it's not an either or, it's not a missile or a cyber attack. Why not both? You know, if they're used to term layers, right? Kinetic attacks, as well as cyber attacks in in and kinetic attacks, disinformation attacks, cyber attacks, because those are different disinformation and cyber.
They are related very much related. Don't get me wrong, but they are different. As well as so many other layers. I love that phrasing, that thought pattern of layers, that was, that's brilliant, Ali. I love that. That's fascinating. Yeah. And it's so interesting that you bring up the like kind of information operations side of this too, because as I was going through the book, I kept thinking, do I want to talk about information operations and disinformation and all of that?
Do I want to keep it separate? Because there's so much talk in the industry about like, is this part of cyber or not? I ended up including it in a lot of the sections of the book. And I'm glad that I did because, and I kind of touched on this in the, in the beginning of the book, I clarified that like they're not the same thing, but cyber operations can be used to deliver them in a much more effective way.
And so it's worth having a conversation about that in the book and, and how like the digital side of things can be a much more effective vehicle than we've seen in the past with some of these other ways to spread propaganda, throwing like little packets out of planes and things like that.
It's really fascinating to see, and it's also fascinating to see where different nations struggle. Like of course, Russia, very effective with disinformation, right? Very effective with narrative attacks. If you look at China, they have really struggled with a lot of narrative attacks that are broader globally.
Of course, they're very good at doing it within their own country and targeting some countries around China that are particular targets for them like Taiwan as an example. But where they struggle is once they're trying to get into this more global ecosystem.
And a large part of why they struggle there is because they don't know the cultural context of the past 20 years. They have created a walled garden that has penned in themselves. And so they can try to understand the cultural implications.
A lot of times they'll post memes that kind of make sense, but they just miss the mark slightly because they don't actually understand what they are, they're just using them. It's uncanny valley of memes, the uncanny valley, you know the uncanny valley. It's just a little bit off. Yeah. That's exactly what it is. It's so fascinating.
But then how is North Korea where they're succeeding in getting jobs and actually, like they are able to interview with business leaders and such at a very, very high level. And they are, well maybe that's because it's not a social thing, it's just a technical thing. Yeah. Sorry.
It's a lot more around the, like the jobs that they're getting hired for and not ones where you're typically like, oh, I got to be in front of a customer all day and I need to have this level of social skills to your point, I also do think that AI is making a difference here and how effectively they're able to have these conversations and certainly to like communicate over email and things like that.
Like one of the things that we see within like enterprise security is if you're looking to better collaborate with your own team that's geographically dispersed, a lot of times it can be really useful to use AI to translate the conversation because it doesn't just translate word for word, it gives you the local context in that translation.
And so it's really fascinating at how it does that, it makes it much more effective in an enterprise setting, but it also makes it much more effective and kind of the broader disinformation and kind of trying to trick people aspect of it.
So one of the, you know, a lot of follow up questions occurred to me as you were kind of diving into characterizing the different programs with each of these nation states. You know, like I've said, I've not dove too deep into these, you know, except for a couple books I've read of, you know, Kim Zettors countdown to zero day, things like that.
But I do get the impression that things are very fuzzy, like cyber is, you know, an area of engagement where maybe it's easier to hire mercenaries, you know, have plausible deniability, like attribution seems very fuzzy sometimes, and it's unclear to me if this is the nation state hacking or if it's some group hacking, like was hired by the nation state, or maybe they just feel patriotic, right?
Like, how do you deal with the fuzziness of all that, is that something that the book dives into? Yes, it is. I actually have Kim's book right on my bookshelf here. I love it. It's one of my favorites. But it's so interesting that you bring this up because one of the things that I do talk a lot about in the book is attribution, and some of the, of course, difficulty is round attribution, some of the basics when it comes to how to do attribution.
And I bring it up in the context of the US because one of the reasons why I included the US in this book is because I really wanted to highlight the US as its own threat actor and to treat it as such in the book.
And so it is treated in very much the same way as we see with China and Russia, but with the caveat that it's very difficult to do attribution of attacks from the US because of the plethora of Western researchers, because of the way that the US approaches or has in the past approached a lot of cyber attacks.
This is changing now. But there's, it's very difficult to take, for example, an attack that was targeting somewhere in China and to actually validate that attribution as an attack that was perpetrated by the US for those reasons, but also because the authoritarian nations aren't trustworthy.
You can't trust anything that's coming out of them, even if it is from a privately held company because at the end of the day who controls that company, the government. And to some extent, certainly the messages that are being released from, from that country.
And so it's very interesting, like it's a very interesting issue that I address throughout the book by being very explicit as to when we are aware that these are state sponsored versus actual state attacks. Of course, one of the most important things that I go through is the APT-1 report, which was just a joy to revisit.
But if anything, like attribution is going to be even more difficult now. I think it's become so much more difficult already, but with the way that AI is moving, the ability to identify what are the factors that are going to factor into attribution. How can I try to mimic some of the attacks by different nations?
All of that is going to become more easier to do with AI, which is going to pose a problem. I also, in the book, tackle a couple examples of false flag operations, where it seems to be from one particular threat actor, but it's actually just one masquerading as another.
Russia did with North Korea during the Olympics, and those stories are just so cool to break down because a lot of it does come back to the motivation, and whether or not the motivation is there or not, and especially in that case, it's pretty obvious. Which is your favorite false flag? I love those. I'm sorry.
They're like my favorites. Like, okay, so it's coming from here. Why is it? I love those. So which one is your favorite? I've got to ask. I'm sorry. It's probably the Russia and North Korea and the Olympics one. I find that one so fascinating because there's North Korea bringing, I think it was Kim Jong 's sister to the Olympics to try and have some element of diplomacy for the first time ever, I think, for North Korea.
And meanwhile, Russia's perpetrating this attack to get back at the Olympics. I love things that are so interconnected to the cultural zeitgeist and those elements, but which one is your favorite? I'd love to know. Oh, man. The problem is, is that some of the things I suspect, and it goes to the disinformation side more than anything else, I've seen some disinformation attacks and I'm like, that's not Russia, and because there have been some disinformation attacks on various topics that were like, everybody's like, "Oh, it's Russia.
It's Russia. It's Russia." I'm like, no. No. I think that's China. There was, and by the way, there were a few that we did, I think, that were false flagged as well. No doubt. And so it's, for me, it's honestly, there's the election handling and the election controversy, and look, I don't care, I'm not doing politics, but I'm just stating that there are some attacks on social media attacks, disinformation attacks that I find fascinating because they get blamed on all things.
They actually do a technical trace on where they're coming from, and it's not fun. But if you do a technical trace, you're like, that is not them. It's typically Russia and China are hammering on social media in our world. It's unbelievable how many Reddit posts, Facebook posts, Instagram posts even, and you're like, "Huh?" You realize all the filters, they're filtering faces to be Caucasian, non-Asian, non- whatever, and it's, wow, it's unbelievable.
It is absolutely, wow, that's so true. One of the things, this is a little spicy, but one of the things in the book that I came to time and time again is just the number of people that have mysteriously died in motorcycle accidents after doing something for the U.S. government is so high, and so just so everyone knows, I don't drive in the water, it's like, "Oh, no." Well, for whatever reason, Russians apparently throw themselves out of windows a lot, you know?
Yeah, we all have our own version. Very clumsy. Too much vodka. It's just a common problem. So you might have already answered this, just a totality of what you've talked about so far, but do you get a general sense that there's any kind of rules of engagement with cyber attacks, or is it change with every single engagement, is it just constantly in flux?
I do think that there's a, this is a great question because it was one of the questions that I started out with, which was like, "What are the rules of engagement? Why haven't we established rules of engagement? Why isn't this written down?" And there's good reasons for it to not be written down, right?
The second it's written down is the moment where there becomes a line to cross. But the general theme has been that they have been used for relatively non-fatal attacks. Obviously, lots of espionage, lots of attempts at breaking down systems for short periods of time, but it seems like there could be a lot worse damage if people wanted to go all out, if some of these nations wanted to go all out.
And we see that even with like, like China is a good example of this because China early on, like pretty much since the beginning, that the hacktivist community in China was incredibly large. It's largely where the majority of the cybersecurity industry in China started, which is really fascinating, and the thing is that they were allowing it for a while, but they reached a point where they stopped celebrating the hacktivist activity and they were kind of like, "Okay, you guys need to chill out.
Like, we've got this, we don't need your help with this, we can do this ourselves." So there are cases where there is still hacktivist activity coming out of China or in support of China that targets particularly nations that are close in proximity regionally to China, but it's much less than it used to be, and it's much less supported by the government than it used to be.
And it's largely because at the end of the day, this is just another type of geopolitical tool in geopolitical weapon, and it does have meaning. Everything from attributing a cyber attack to executing the cyber attack itself can be used for geopolitical leverage or to cause more friction geopolitically.
And so it's one of those things where a lot of those rules are set by the cyber superpowers or a lot of those unwritten rules, but I have to caveat that with the fact that like, we've seen in the 2026 National Cyber Strategy that the Trump administration just released that they are pushing harder for more forward action and more offensive cyber attacks, which is very in line with what the Trump administration did in their first term, like they are much more aggressive when it comes to cyber attacks than other administrations have been.
For better or for worse, like there have been instances where that has been particularly effective and they've been able to take out threat actors that they otherwise wouldn't through operations like defend forward. So there are really good things happening here, but the problem is that we also need to align defensively if we are going to protect the glass house ultimately companies based in the U.S. from collateral damage, and because they are going to have retaliatory damage because of these attacks and we're already seeing that.
And so the second that we open up a lot of this like more aggressive cyber attack doctrine and approach is the second that we need to be prepared for that to come right back at us. And it already is. Yep. So we've got to use the last couple of minutes to talk about Iran, the engagement in Iran a bit here.
And I've also been kind of surprised at like firmware level attacks, breaking devices, you know, making computer systems, the magic smoke come out, you're putting them in a state where they don't turn on again, stuff like Saudi Aramco and Shamoon.
And I've been kind of shocked that we haven't seen more of that, and we are going to talk about striker in the news, that attack, which looks like wipers were used in that case. So I'm really curious to get to your insights just in general on like, are you a bit bummed that you're releasing the book now?
And Iran could not be part of the book or, you know, is that like, how are you looking at this engagement, you know, having focused so heavily on this topic? And now it's happening right as your book is coming out. So I think the timing couldn't be more important and couldn't be. No, it's okay.
I mean, I fully think that the timing is very important and couldn't be more important to have happened now. Of course, like one of the things that I wanted to do with the book is I wanted to include Iran, North Korea, Israel, but I ran out of space.
There was a certain point where the publisher was like, you got 96,000 words, so you got to cut some stuff out and it's going to be these pieces. But at the same time, like this is very in line with the attacks that we've seen really succeed in Russia's war in Ukraine is it comes back to things like Wiper malware, very commons everywhere all over that battlefield, espionage, particularly for of course battlefield information, but also for information on attack successes and failures and a large focus on disinformation operations.
And so it makes a lot of sense to me that that's happening in Iran as well. And I do think that like when we look at the situation around, especially like the striker situation, it's very much aligned to what we would expect a nation state or potentially a state sponsored group or a state affiliated group, the type of action that we would expect them to take in this type of scenario to strike back, especially when it's a nation that is arguably less powerful than the nations that are striking against it.
They have to take some type of retaliatory action. And so far like wipers are the lowest common denominator for what you would expect from one of these. Now, of course, it's also like the modus operandi of the particular hacker group that is taking responsibility for this even though that is not confirmed through technical indicators and it's important to note that, but using either wipe or malware or what appears to be in this case of just being able to wipe systems through potentially like Intune or the endpoint management software is exactly the type of way that they would want to create damage, especially to a company that is publicly traded in the US, has a location operates out of Israel through a company they acquired in 2019 and has contracts with the US military that are for medical supplies, like this is kind of the perfect target for them.
So it makes a lot of sense that they would take this approach. So what about, you know, there's, well, I guess it's not rumors. You can just look at the AWS status page and you can see that some of the AWS data centers have been impacted here.
Is that, again, like I didn't realize wipers were so common on the cyber battlefield, but what about cloud data centers like financial, you know, obviously that's a huge financial bottleneck that's going to impact thousands of businesses rather than targeting anyone business directly. Is that somewhat new?
That is and it's so fascinating to me because like those data centers have been established for digital sovereignty reasons and requirements like that's because they want to operate out of those regions and for companies that are in those regions. So it's a little bit surprising that they would choose to target those, but if they want to send a message directly to the government in the US targeting potential friends or people that are friendly with the top officials in the US government is a great way to do that.
It is interesting though because I wonder what the ramifications of that will be for companies that are in that region more so than necessarily just Amazon itself and whether or not that's kind of like you're stabbing yourself in some ways. Yeah. Well, Ali, we're out of time. I can't wait to read the book.
Thank you so much for joining us today. I'm incredibly excited about it. And yeah, I suspect after reading your book, maybe I will have to start keeping up on nation state attacks going forward. Thank you so much. I can't wait to hear what you guys think. All right. Stick around.
We'll be right back in a few minutes with the weekly enterprise news and some more co-hosts are joining us this episode is brought to you by adaptive security, the first cybersecurity company backed by open AI. AI has fundamentally reshaped the threat landscape. Today, the greatest risk isn't someone breaking into your systems.
It's someone breaking into your trust deep fake voices joining your video calls, AI crafted fishing emails that perfectly mimic your executives, even synthetic job applicants entering through the front door. Adaptive security is designed to counter these new realities. The company delivers real time attack simulations that show security teams exactly what modern AI driven threats look and sound like and prepares them to respond effectively.
Learn more at securityweekly.com/adaptive Zero Trust is clearly the future as threats get faster, quieter, and harder to detect, but implementing it shouldn't disrupt the business. ThreatLocker enforces default denied execution in a way that remains enterprise-ready, scalable, and operationally clean. Unknown software is stopped cold, trusted apps stay contained, and drift is locked down across the environment.
It's zero trust that works in real enterprises and prepares you for the threats ahead. See why CSOs are adopting it at securityweekly.com/threatlocker. Now for the enterprise security weekly news, you can check out securityweekly.com/DSW450 if you want to follow along as we go through the news or for links to the articles that we're covering.
All right, welcome to the show. We've got Ayman Elsawa with us. How you doing, Ayman? Good. Good, Adrian. How are you? I am very good. Today's been a fun one, had some good conversations, some good interviews, and looking forward to the news here.
Some interesting stuff to talk about, which leads me to our other co-hosts we have joining us. We've got an emergency co-host, Dr. Dustin Sachs. Thanks for joining us, Dustin. Thanks, Adrian. Thanks for letting me show up last minute to crash the party. Absolutely. Always fun. I like a little bit of chaos. It's a flavor, Adrian.
The last 24 hours has certainly been full of chaos. Yes. Yeah. I mean, every day in the last kind of three years, maybe, can be disgusting. Yeah. Fair. Fair. All right. So we're going to go through things maybe a little differently here. I think I'm still going to start with story number one, which is always our security-funded newsletter item here.
I always like starting with the vibe check, but we've got a lot of stories here. We're not going to -- a lot of them, I'm just going to briefly cover just so that we can spend most of our time on just a few more interesting stories.
And not that the other ones are not interesting, it's just we've only got 30 minutes and too much has happened in the last week. So the vibe check here, Mike Prvett asks, given all the appsec excitement last week, and I think he's referring to anthropic getting into the cybersecurity game and people freaking out and the stock market freaking out and everybody's freaking out, he says, "Where's the next likely place that Frontier AI Labs will attack the security stack?
The top result from that survey was threat intelligence followed by compliance and GRC with 27 percent followed by cloud security with 20 percent identity management and governance with 13 percent and then risk dashboards and executive reporting at zero." Nobody voted for that one, nobody voted for other.
So it looks like everybody kind of piled into the top four there. "Amen, you made a face, would you have voted differently?" Yeah, I would have voted for a little, probably the same one and two are about the same. I would have voted for compliance GRC.
That's just an easy thing to grab, but it's complicated with all the integrations. The dashboard's interesting, I've actually, me and several as I know, I've been building our own dashboards based on the work that we're doing. So yeah, I agree with this, I agree with this.
Yeah, it's interesting because I think with the move to, you know, with the discussion about like vibe coding and everything, dashboards just seems like an easy one to vibe code. Like why wouldn't you just, like, this is what I need it to look like.
So that one kind of surprises me a little bit, but that being said, I can kind of understand. I can see where they're coming with that. I think it's going to be interesting because is it going to be, is it anthropic going to be the one that's going to be able to do it with the kind of negative press they've been getting since their apparent, you know, brother/sister fight with the Pentagon?
I don't know. Yeah, it's, well, it makes sense to me that risk dashboards would not be something they tackle because it's something so easy to vibe code, right? There's no money to be made there. You know, why would you vibe, why would you create an offering in a space where everybody's going to create their own custom thing anyway?
You know, so they're already making money off of that just from cloud code, right? You know, that's the cloud code subscription. They've already, they've already tackled that space in a way, indirectly, I think. Yeah, I guess that's true, that makes sense. Yeah, I mean, in the cloud security piece, that'd be really hard because that's, you know, I mean, it's actually interesting because AWS released and I think it was MCP.
So there's been some agents to help connect to AWS easier. So it's like, oh, okay, I'll just give an agent permission, read only permission to my cloud and it'll run, you know, my cloud security assessment. And I hate to get, I hate to get semantic here, but you know, the quantitative researcher in me is also wondering whether or not this was a single choice question or not, because I could see where if compliance and GRC being 27% that if people were basically, if they were only given one choice, yeah, if you can't select multiple, then risk dashboards exec reporting to me falls under compliance GRC.
And now I understand why there's 0%. It's really bucketed under the GRC bucket and that's why that's 0%. If there's a multiple select, then I could, I would have a lot more question. Yeah, I agree with that. But again, that's just me as somebody who has framed questions before for research, that the framing of the question and the options given to the survey participants could have changed these numbers specifically.
So. Yeah. And to your point, Amen, with MCP servers, I think it's clear why story number six, a command line tool for Google workspace. Yeah, I'm wondering if this is, if this was released more for AI to use to be able to more easily leverage and do Google workspace things or for humans, but I expect we're going to see more and more product releases and product enhancements that are kind of web MCP that are, you know, let's design our product to be better consumed and used by AI.
Maybe, I mean, man, that tool is way overdue. I mean, I've been using GAM for a long time and I actually noticed it in the Mac admin slack. And I'm like, finally, like it's about time. It still doesn't do everything we want it to do, but it's a step in the start.
Is that what makes it step in the right direction though? I don't know. Anytime I see in 2026, a CLI tool being released, I just think like, are they, this is for the people who just have to do everything the hardest way possible, like, this is just for the people who have nothing better to do than spend their life in the command line.
I feel like we spent so much of the beginning of computing and technology getting out of the command line. And now we're going back to it just seems like it's, you know, what kind of how we're seeing culture, the 90s reemerging is this like cool new thing to look back at.
And I just, yeah, wherever our CCLI, I'm like, why, why, why, why, because it's faster. I love the simplicity of the CLI, yeah. Yeah. Yeah. Using a GUI is slow. I think that's the main reason when you're an admin, you've got a billion things to do every day using the CLI is just way, it's always going to be faster than moving a cursor around the screen and looking for a button.
So my question is, are we going to see DOS 3.2 now? Is that going to be the next new thing? We're going to release the next version of DOS because everyone wants to go back to the command line now. But it's a good point, Dustin, like, one, an admin as well.
Like the API for Google Workspace has always been there. Like why, especially if AI is going to be doing it, why, you know, who are the users of the CLI that, what are the use cases where you're going to use the CLI over an API, right? Yeah. And I think you're right.
I think you're hitting it there where they're making it agent-friendly. Well, but my concern is whenever you go back to the command line, there is a risk with simplifying it like this, then you're actually now creating another attack path. Like if you've got the API and if you've got the GUI, what are you getting from the CLI that those aren't giving you?
That why are we creating another way? I mean, how many command line misconfigurations have we seen in the history of computing? You know, I'm looking at this agent a little closer at the AI agent skills section and they have an open claw set up section there to help you do it.
And they actually have a bunch of skills, the whole skill index. So that's really cool, I've been having my clawed like a linear skill and how to create linear tickets for me on my behalf. So I like it. Yeah, fair.
Yeah, the G, I mean, already still fresh in my mind is a story about the chief of AI safety at meta, deleting all her email, her clawed bot or open clawed, deleting all her email. And I'm looking at these skills and I see Gmail triage, I see, you know, like, I see some skills that could cause a lot of damage if you don't have some kind of guard rails around this.
Yeah. I mean, the one thing I had to use GAM, I still have to use GAM all the time for is delegating an inbox. You know, so doing that from, you're doing that quickly is like, go of an employee or something. Yeah, exactly. You let go of an employee.
I guess, yeah, I mean, I guess from an automation play, it makes sense. It does make sense from an automation in an AI standpoint, but I don't know. I don't know. Command line attacks, just command line misconfigurations, mistakes, like I just see that being very problematic. Oh, yeah, and Gmail deletion was very unforgiving, very unforgiving. Yeah. Yeah.
For sure. All right, so let's see, we've got new companies, root evidence. I'm kind of excited to see what they do. They've been teasing the company from stealth for a long time. This is Jeremiah Grossman and Robert Hanson, AKA arsenic. They've built a lot of companies in the past before and they just released, they just came out of stealth and released that their first product out of four products are building is going to be focused on vulnerability prioritization.
So giving you a list of the, not only the vulnerabilities that are exploitable or being actively exploited, but can actually cause damage, you know, which sounds kind of funny until you start looking at exploitable vulnerabilities and you realize, oh, yeah, this vulnerability just shows me the internal IP address for a server.
Like I'm not sure how much I can do with that. Like maybe that's useful to chain attacks, like in another attack, I need to, I need to know what the, the subnet looks like or something like that. But there, there are a lot of vulnerabilities like that that are not really, you know, by themselves can't really do a whole lot of damage.
So I think this is interesting and I'm excited to see more from them. Yeah, I think, I think one of the things we're seeing in the startup and venture area right now is a lot of interest in AI being leveraged to address legacy challenges that we've struggled with and I think the number one legacy challenge we've struggled with is vulnerability management.
So, you know, if this is a hot topic in the, in the startup world and in the venture space right now, so many of these companies, the venture capital that's out there are looking at that, you know, legacy, using AI to address legacy challenges.
And I don't think it's going to be able to in a lot of these cases and the main reason is because, you know, kind of like, I don't think it's going to, yeah, I'm just saying that's weird. Yeah. Same thing we've seen with data science, like when the data is bad, when the quality of the data is questionable or mixed, what you can do with AI is somewhat limited.
And what you can do with data science is somewhat limited. And in like, like almost everything I've been seeing, like with AI and a sock, like it can, it can summarize stuff for you, you know, but if the stuff it's summarizing is a hundred percent false positives, it's not really giving you any value, right?
Like you still have to address that core problem of the quality of data that you're handing to the thing. Otherwise, you're just burning tokens for, for no good reason. Yeah. Well, I mean, it's the same thing that we saw with automation and big data, you know, if you, if you automate or you're processing bad data, you're just going to do it faster.
And that's what we're saying with AI. But I think there's a lot of interest in trying to, there's a lot of interest in trying to hopefully maybe address it. So it'll be interesting to see. And it seems like the, I'm sorry, it seems like the, I mean, I thought some of this was solved away with some other folks, but it seems like their approach that they're taking is more like financial risk, right?
Is that the idea here with this solution? Well, without financial risk, there isn't really any risk. I think is kind of the point here is like, like some of the other companies that do this, I feel strongly based on research I've been doing independently, that there's still orders of magnitude off like they are kind of culling, you know, the hundreds of thousands of vulnerabilities into tens of thousands, but tens of thousands is still too much.
You know, you tell somebody they've got 10,000 critical vulnerabilities. You know, first of all, it's not true. And second of all, it's still a lot to deal with. So I think culling that down to maybe a few hundred, you know, like I'd be, I'd be shocked if the list that they've come up with is more than like 2000, for example.
Yeah. I mean, it is a problem I have, I mean, depend about it's very noisy, for example, right? And I had cloud code just go through it was like, hey, which one of these are actually exploitable and, you know, give me, give me your, like, can you dismiss it? Which one is it fixed? Right?
Because sometimes they don't get dismissed after fixing. So it's, it's kind of annoying. It's definitely a pain in the butt. Yeah. And then the other item here that's actually closely related is this zero day clock website that was released, blinking on the guy's name at the moment, which is fine.
I would probably mispronounce it anyway, if I had it handy. But basically showing how the speed of exploitation is increasing. And in fact, we've, we've seen a lot of data from Mandian on this, basically showing that the majority of vulnerabilities get exploited, or exploitation starts.
Maybe not, maybe it doesn't hit your company this quickly, but it starts before disclosure. You know, like the average time to exploit, you know, is, is minus a certain number of days, now, and that's because there's just more zero days than ever. And I think with AI, that's just going to go up.
Now that AI, we've discovered it's very good at finding vulnerabilities. We're going to see a lot more, a lot more zero days and, you know, just requires us to kind of shift strategy. And I think the tough thing here, and I've actually written up a blog post, I think I included it in the notes here. Yeah.
Category number 13, reevaluating vulnerability management, basically saying that, you know, we kind of have to take a hybrid approach of both preventive, preventative strategies like hardening and, and, and passive mitigations, you know, like limiting outbound access from servers, and traditional vulnerability management, because a lot of these vulnerabilities, you know, you know, we still see stuff that's, vulnerabilities that were fixed years ago, get reintroduced, right, you know, because somebody uses an old image, or turns on an old VM or something like that, you know, that, that's obviously not patched because it's been switched off for years.
So none of it gets easier. And not only is AI finding or fixing vulnerabilities, but it's also creating more, you know, alluding to Caleb's, uh, essay, yeah, and so, and don't forget that early on one of the big things that go ahead, go ahead, Dustin, I was going to say don't forget that early on one of the big things that we saw, chat GPT being used for was to access, um, the, the MITRE CDEs ahead of disclosure, um, you know, they were, they were being prepped for disclosure two days beforehand and attack, you know, security researchers were able to find them days beforehand.
So it seems a little interesting and coincidental that we're seeing, uh, you know, a day and a half beforehand is when they're starting to be exploited. I wondered to what extent that's still occurring where the attackers are finding the disclosure reports and going, Oh, these are going to be, this is going to be shared in two days.
Cool. I'm going to attack it now. Yeah. Yeah. It's, it's, um, yeah. Speed is an issue. Uh, you know, like 30, 60, 90 days, like even the U.A. cybersecurity center, like, like, they've got a, I think it's five, seven, 14 days, something like that.
They have a really aggressive timeline that they recommend as a, as a best practice and even that's not going to be fast enough in a lot of these cases. So, um, yeah, yeah, it's, it's tough, but, uh, we're running out of time here.
I want to jump to one of the big stories, which is the, uh, striker cyber attack. So, you know, with, um, yeah, this, this Hondala group, uh, I've been following Kevin Beaumont on, uh, on Mastodon. He's, he's been focusing on some of the stuff they've been doing and, uh, we, we just interviewed Ali Mellon.
Apparently wipers are fairly common in, in, uh, in during war, like, like cyber attacks that are, that are related to the, uh, uh, you know, physical engagement. But yeah, in this case, it's interesting seeing the targeting of, um, non-military, uh, targets like AWS data centers and strikers, a healthcare company, uh, they make like, uh, hospital beds and other medical equipment and, you know, the story is what Dustin, tell, tell us what you found out about, about this.
Yeah. So, so preface all of this, you know, we're recording this on, uh, March 12th at, you know, 445 East Coast time. So yes, by the time it's out, we, I expect we'll have a lot more it. We're going to have a lot more information.
But what we already know is that all, all indication is that it was, it was, in fact, the sirenian threat actor group, Hindala, uh, it meets their, it, it's matching the kind of attack vector that they typically go with. Uh, it appears that it was some level of credential compromise because they were able to get into the into an environment and wipe mobile phones and laptops that were connected.
Uh, there does not appear to be evidence that they had installed any sort of malware. It was simply a destroy kind of scenario to your point. Um, and that, you know, striker found out about it, you know, striker found out about it because their employees weren't able to access any of the systems.
We're showing up wiped, but it was really Hindala themselves that made like, you know, advertised and made known that they had done this. Um, they immediately took responsibility for it and we're very proud of it and actually said we've got, you know, terabytes of data and we will release unless you do XYZ. Yeah.
So, you know, I think we're, you know, one of the points and these are servers. These are workstations. These are people's phones that were, uh, connected in. Yeah. So, so the number, yeah, the number, the, the group claims that they, that they were able to erase data from about 200,000 systems, servers and mobile devices across 79 countries.
Um, and that they, they plan to publicly distribute the information. Uh, the numbers that they're claiming, they claim they have 50 terabytes of striker data that they're going to release, um, there seems to be corroboration with these numbers and it has brought them to a standstill.
Um, and, and it does seem to be politically motivated and motivated by recent events in the region, specifically in attack against a school in Iran, um, that was, was, uh, targeted by military forces, um, or that was attacked by military forces. Um, we still don't know a lot, there's still a lot that hasn't been made available yet.
The initial attack vector is still, um, under speculation. Um, the fact that they got into in tune is also, I think still a an assumption obviously based on what has occurred, um, between obviously now when we're recording this and when this will air on Monday, there's a lot that we will probably learn. Yeah.
Like it totally makes sense from an attacker perspective like, uh, this is fairly common in attacks. Uh, I'm not sure that people realize this, but like if you, and we don't know this either for sure, but when target got hit, uh, all of its POS systems had malware installed.
What's the easiest way to get your malware on hundreds of thousands of point of sale systems, uh, or in this case, what's the easiest way to hit 250 million endpoints like, roll your own worm that spreads itself across the infrastructure or, yeah, I mean, and I think I, and I think if you integrated or to, to carry out the, the tasks that you want.
Yeah. And I think it's important to, to, again, state it, there's no evidence that malware was involved in this. It was simply a wipe data, wipe devices, bring the company to a standstill kind of scenario. Um, I, what's interesting to me more than anything on this one and what I think is really important to, uh, to articulate is one, this is an indicate, this is an example of where, and we're sitting here in March talking about this throughout CRA, which is, you know, the importance of cyber threat intelligence, because over the last week and a half, cyber professionals have been getting alerts nonstop about potential attacks of Iran.
And the fact that that, you know, I remember back in 2020, I was working for a critical infrastructure, uh, an energy provider, uh, electric provider here in Houston. And we were, we were looking at right before COVID attacks from the Iranians and we would spend, we spent two or three days looking at every single threat actor that had any ties to Iran and looking at how they typically attack and starting to make to check off what could we say they, they don't have the capability to do?
Where do we have gaps that we need to be paying attention to? And if you haven't been doing that over this last week and a half, regardless of organization, if you're an American company, this is something that should be a wake up call to you that it doesn't matter the size of your organization.
If you're an American company, that's all the Iranians care about right now. And that is why you need to be, you should be using this as an opportunity to shore up your own defenses. This is also, by the way, I have to make a shameless plug, but this is also a really great opportunity to highlight the importance of being part of a community, being part of organizations like the one I come from the cyber risk collaborative here at CRA, where you can get the peer insights, the what is everyone else seeing?
Are you seeing these types, what types of things are you looking at? What are the IOCs that you're pulling? This is a really great opportunity where teams, the team sport effort of where all Americans were all U.S.-based companies, we all want to do, protect ourselves and our organizations. This is where those communities come in handy.
Yeah, I don't know how common this is, but thinking about threat modeling and the use of administrative tools here, we were just talking about that Google workspace tool. Why use malware if you can just send a wipe signal to all devices? Yeah, you're just making it easy for the attacker, right?
So that goes back to just defense in depth and make sure that any key application that has a lot of access, you're locking it down. It also takes it to when someone attaches their personal phone to an organization and now their personal phone gets wiped, that's a lot of misery.
It's a lot of misery, but at the same time, I had this exact conversation a couple hours ago with a group of CISOs where I raised that exact question. Part of it is also this is that, hey, you're not being forced necessarily to connect your personal device, but theoretically, this is why you should be backing up your device because the work, really, what they're going to wipe, yes, they're going to wipe the device, but they're really focused on that corporate business data interruption.
So there's, I don't know how much sympathy from security teams, there's going to be because of the nature of, okay, no one's saying you have to be part of, you have to get this data on your device, but from an employee standpoint, I certainly understand I appreciate it.
I think the other aspect of it, I think the other aspect of it as well is this is going to, unfortunately, I think become the norm whenever, where the physical world is going to interact with the digital world. You're not going to see conventional warfare that's going to not have a cyber component to it now.
I mean, we saw that with the Russia Ukraine incident, how physical warfare started with cyber. And we're now seeing physical warfare causing, you know, instigating cyber attacks. I think this is just unfortunately the new world order and we need to be prepared for it. Yeah.
The other thing is, yeah, sorry, the only, the only ask I've had for many years is like, we need more from the providers to help put segmentation from business and personal data on people's phones, right? So just to make that ecosystem easier, it's still like Android and Apple are still, I think they've taken steps, but it's still a ways off to make it like super segmented.
That's absolutely absolutely could not agree more. You know, and I was going to ask it beforehand, and Adrian, you asked me to hold off. But I got to pose the question and I know I'm not trying to make light of the incident at all.
But, you know, the nagging thing in the back of my head is, are we going to hear that the administrator credential that was compromised for Intune hasn't been changed in three years or 10 years or 15 years. How long is it going to be since the last time that credential was changed?
And I'm just, I'm nervous for what that number is going to be. Or it was, you know, like a common thing is it was a contractor account and they didn't have MFA enabled, right? Like it's there was a former employee that they never shut down like, yeah, common thing. Yep. It's how drizzly happened.
It's how power school happened. And I hope that's not the case, but it's so hard. Yeah. Yeah. Um, I hope that's not the case, but I'm afraid it's going to be. So many folks that will never let you like on board their phone, like, you know, they've been burned in the past.
They've had a phone wiped in the past. And you will, you're not going to reach them with an email late at night or on the weekend or anything like that, because they're not going to have any work stuff on their phone. So it's, it's, uh, yeah, I think it's fair.
It's, uh, you know, businesses expect you to use your personal device, even like identity management, you know, we're seeing where, uh, you know, they kind of depend on people personally owning a, like an iPhone with a depth sensor, you know, like a lot of these tools use that depth information to make sure that it's actually a human's face and not like a 2D image of a face, uh, and, uh, they're, they're leaning hard on people using their personal phones and their personal $1,200, $1,300 devices here.
And, uh, and yeah, when somebody's, uh, device gets accidentally white, um, yeah, you, you've, you've lost the trust of that person forever. They will never again connect that to their work account. They'll, they'll be like, yeah, you can send me a corporate device and I'll carry two phones. How about that? Yeah. All right. Um, yeah.
So our, our squirrel story here, um, uh, so well, one thing I would highly recommend is, I think we did, okay, uh, I don't know what in there, but we will, we will carry on. Okay. Interesting. Um, yeah. Number 19, uh, I don't know if you've had a chance to watch this, but, uh, very popular YouTube creator Veritasium, which is, is really one of these, like, uh, these individual creators like turned it into a business.
They now have a full staff. So it's, it's not the, the main guy behind Veritasium that made this video, uh, but somebody on his staff, but it's basically a 50 minute documentary on how the XZ utils incident, uh, happened. And it's built for just the common, uh, common audience. So they explain open source software.
They explain like, like, some very basic, uh, like cryptography, how public private key encryption works. So a very ambitious video, I think does a very good job. Like, like they have, uh, animation, you like hand drawn animation in there and, uh, do a really good job of telling that story.
So it's, it's a full on documentary of how the XZ utils, uh, which, which was a, you know, I, I think the suspicion was that it was a Chinese actor that spent years basically, uh, you know, trying to get in on this open source project so that they could backdoor SSH and, uh, very, very well done.
I recommend going to check that out, adding that to your, your watch later list on, on YouTube. Definitely. That looks cool. I love Veritasium. Uh, I love, I love, you know, fan of their videos for some time. And, uh, it looks like they had, you know, the help of live overflow and a couple other notable folks as well.
So they, they even, they take the, uh, malicious component and they even demo it on their own website. Oh, yeah. No way. Yeah. That's cool. Like they, they, they did not have to go that hard. They did not have to, uh, spend that much effort, uh, to make the story work.
Uh, but, uh, but they did, they nerd it out and I appreciate that they did. It's, it's, uh, it's a very cool one to watch. It's a hard concept to, to explain, to tell you truth, you know, to, to, to layman. So, uh, good, that's, that's really good. That's really cool.
And then our squirrel story here, this is the Spectre one, uh, which is, uh, pitched as an audio privacy device, a device, a physical device. This is a battery. But let's, let's pretend that this is the, the device. Um, that you turn on, you sit next to you, uh, and it supposedly emits some frequency that prevents you from being overheard.
So it's like a cone of silence. Yes. That's awesome. If it were, do you think it works? Well, I mean, I've always thought about like, you know, I've always wanted to create something like this with a Raspberry Pi or something, but, um, I, you know, there's got to be some sort of frequency that will just jam mics, right?
Uh, $1,200, though, I'm not sure that needs to be, you know, that price sounds outrageous, but the other weird thing is if you Google this, uh, and you find cached pages, it was $500, then it was $800. Now it's $1,200. So like the price, they keep changing the price, which, which, uh, there's a few red flags here.
Yeah, but I have to tell you, there is a need for this. Yeah, sure. I don't, I mean, I don't know if you've been in like granola, granola is a thing. It's so popular and it auto records your meeting, it's an AI note-taking app.
It's been popular for a while and people would use it to record places that they can't record. Like, for example, they'll, uh, take a phone call on their Mac, right? Like a FaceTime, whatever, and it'll auto record. Like if you can't record from your phone and we know wiretapping laws, you need permission.
So and even sometimes in companies, they turn on by accident or meetings are being recorded, but they're, they're not aware, right? And so, uh, now everything's being recorded. It's, uh, I mean, I saw today, uh, appended someone invented a former Apple employee invented appended that will record all your, your voice, which is cool.
I mean, I, you know, Plot does that. That's, that's one of the devices out there. Like there's a version of plot that just uses MagSafe and just connects to the back of your phone. And you just button and it starts recording.
And then, uh, the app will transcribe everything and summarize it for you and write up a report for you. Yeah. But this stuff is really common and like it'd be nice if the app popped up and said, Hey, you're in a two party state. You're legally required. But I bet it doesn't.
No, no, and it's, it's, it's really hard because there's no like we want to make the app force, you know, like people want to, to tell people that it's being recorded. But, um, yeah, it's, it's happening. Uh, it's so this is, this is timely. Yeah. Yeah. Yeah. For sure. Yeah. I'm very, so I'm a big edgy guy.
Like I've, I've done some kick starters, uh, probably 40 or 50 kick starters at this point. Wow. And I, I do have reservations here. Like, like I, I've learned the things to go look for, you know, go check out the founders. What else have they built?
Do they have a track record of, uh, making big claims and never releasing the thing or releasing like one of the things I got were glasses and I have ADHD. So like the worst thing for me is to try, is to meet somebody, uh, for a beer or a chat or something like that in a restaurant, uh, that's like a sports bar.
full of TVs. It's so hard to focus on the conversation when there's sports and news and stuff going on in the background. So supposedly these glasses were polarized in such a way that those TVs would just show up as black. It would remove the picture. So that you wouldn't see the picture. So like a sensory airbag.
For for those of us that that have trouble with with focus. And I've yet to find a single TV that it works on. Oh, you know, there was a warning that certain TVs, you know, the way they work, you know, it might not filter them.
Never found us. I tested it on so many TVs. What, you know, went to different sports bars. Like just fundamentally didn't work. And this smells and feels a lot like that project to me. Of course, that was cheaper.
It was cheaper. It was a piece of plastic. Right. Like, I think I lost $40 or something like that. This is 1200. Yeah. No. Well, why not? At least I can't. At least I kick started came in. There's probably like a 50% hit rate of kick stars that never came in.
But have you ever heard of the TV be gone? Oh, yeah. That was the first. That's how we learned to solder. I went to, I think it was my first Derby con. I bought a TV, be gone kit. And I soldered it right there. And they helped me like they taught me how to solder.
And you can you can get in a lot of trouble. Go into a sports bar during a U of T fight and press, press one button. We'll meet your bucket. That's all your problem, though. We will focus on the conversation. It will. Well, it'll change the conversation. You'll be having the restaurant owner.
Well, if they find out, you know, yeah. All right. And with that, we're a little over time here. Got to wrap things up. Thank you so much for joining me today. This was a lot of fun.
Some interesting stories to go through. We don't often talk about cyber war. So that was an interesting one. But the fact that they're just hitting enterprises, right? They're just hitting companies is interesting. Yeah. Yeah. And besides an RSA is coming up, right? It is. You're giving a talk. I'm giving a talk. What, what is your talk on?
My talk is on Google Drive Hunter, the tool I created late last summer to help find the drive files that are open to the world. So just talking about my journey and half is about the tool that I have is like how, you know, a security leader can just create a tool to solve their problem.
So what's your talk about? So mine, do you know when your talk is like what day, what Sunday at 3 p.m. It'll be live stream. I need to. So come check it out. Great. Is that a besides talk or an. Yeah. Yeah.
Cool. I will be there. I will come check it out. Cool. Thanks. So my, my talk at RSA is at 940 on Monday and it is about breach transparency, the importance of breach transparency.
And, you know, we will dispel a lot of myths about breach transparency, all the reasons, all the excuses that companies give for not sharing details about what happened.
But also about like why learning from failure is so important, why that should be critical for all our security programs like we should be aiming to learn and understand and change the things, you know, that that'll prevent those failures.
And, and yeah, so I'm going to be publishing and sharing a lot of details of how breaches happen. There's a lot of reports out there that, you know, not everybody has time to go find and read all 200 pages of these reports and pull out all the control failures.
So that's something I'm now doing for you. I've hired an intern just for that. Nice to them to my sub stack. Yeah. Wow. Cool. Cool. It's exciting. Exciting.
Yeah. Yeah. I'm excited to see the response. And I've got a lot of writing and publishing to do before that talk because I want to have at least some of them out there and ready for people to look at. Okay. Yeah. Yeah.
All right. Big thanks again. Amen. Thanks to Dustin and also Josh for joining me today. Big thanks to everyone watching or listening to this week's episode of Enterprise Security Weekly. Next week, we'll be talking to Cara Spake. Sprague. I will learn how to say her name before that call. I have a prep call with her tomorrow.
She's the CEO of Hacker One, and we'll be talking about the state of the cybersecurity market with Mike Pravet. He released his 2025 year end report. So we'll be talking about what happened in 2025 and the state of the market now. See you then.