Modern Cyber with Jeremy Snyder - Episode 87

Sydney Marrone of Nebulock

In this episode of Modern Cyber, Jeremy is joined by Sydney Marrone, a premier expert in the field of threat hunting and the Head of Threat Hunting at Nebulock. The conversation explores the rapidly evolving intersection of threat hunting and artificial intelligence, specifically focusing on how AI agents are transforming the speed and efficacy of defensive operations.


Podcast Transcript

All right, welcome back to another episode of Modern Cyber. I am delighted to be joined by somebody today whose presentation I saw recently, and I just had to reach out to her and invite her onto the podcast, because she's working in an area that is super relevant right now in early twenty twenty six, as we record, and I think is only going to get more and more important over the next couple of years. For anybody out there whose teams are building agentic AI solutions, applications, models, approaches, however you want to think about it, you're going to have risks. You're going to have threats. You're going to need to figure out what is going wrong. And so with all of that said, I am super delighted to be joined by Sydney Marrone today. Sydney is the Head of Threat Hunting at Nebulock and co-founder of the THOR Collective. Sydney is a respected author in the area of threat hunting, having written the Agentic Threat Hunting Framework, co-authored the PEAK Threat Hunting Framework, and she is currently writing a SANS course on threat hunting. Sydney, thank you so much for taking the time to join us on Modern Cyber today.

Thank you so much, Jeremy. I'm excited to be here. Awesome, awesome. I would love to just kind of start with a little bit about you and your background and how you got into threat hunting in particular, because it's not always the most obvious kind of area of cyber specialization. You know, a lot of people start in SOC operations, if you will, and then they kind of move on from there. Was that your journey? How did that play out for you?

My journey was pretty similar to that. I started in IT many moons ago, and I tried just about everything under the sun in IT. I did network security, I did desktops, I was crawling under desks. I worked on servers, and I couldn't really find a fit. And then I had the opportunity to rotate through the cybersecurity department, and I hit the operations team and started working with them and looking at alerts. And I was like, this is it. This is what I want to keep doing. And my cybersecurity journey really went through working on many incident response teams, as well as doing lots of forensics automation, your typical blue team work. A few companies ago, I was at CenturyLink, now known as Lumen, and I was on their incident response team, and there were two others on the team who were really interested in threat hunting. So we proved it out and convinced our manager to give us a team, and we built the program from there. Then I went to Splunk and helped build up their program there. And now I'm at a new company called Nebulock, and we're doing AI-driven threat hunting. So you can see there's a theme to where I've been working and what I've been doing.

I mean, it certainly sounds like you've put in the ten thousand hours on threat hunting. Yes, yes, and I love it. Oh, that's awesome. That's great to hear. I'm really curious, like, in those early days when you had to go make a case to build a team around threat hunting, talk me through that. What was that case like? What did you show to the management to convince them that this was going to be a valuable use of company resources? Because I think about a company like CenturyLink, and I still think of them as CenturyLink. I'm a little bit older school, you know, like I'm used to seeing their billboards and their fiber optic ads and all that kind of stuff. But you've got a ton of signal coming in. It must be literally gigabytes, if not terabytes, of network traffic data coming across on a regular, daily if not hourly, basis. So that's a ton of data to sift through. It seems like it could be a big undertaking, and it seems like it could be a lot of compute and storage resources to start investing in something like this. So how did you make that case, and what were some of the key factors that made you go to management and say, hey, we need to invest in this and get serious?

So we encountered the common theme that many incident response teams deal with, in that we wanted to do this cool new thing called threat hunting, but we were inundated with incidents. Okay. And trying to put out fires all day long. What we were able to do is convince our leadership to let one person at a time rotate off and do threat hunting. They couldn't be called back to incident response; they were just threat hunting. We had one person prove it out. They ran a threat hunt for probably a month, they worked on it, and they were able to have some actual findings. They had a report, did a readout, and shared it with the appropriate teams. And that was kind of our segue into being able to convince leadership. We proved it out. We had metrics. We showed that we were able to be successful. And we went from there.

Was proving it out as simple as showing that, like, hey, if we're proactive about this, instead of being in the reactive incident response mode, we can get ahead of these threats sooner, better, more effectively? Like, what does that look like?

Yeah, it does look like that. Really, a big part of it is not only were we able to find suspicious activity that was later confirmed malicious, it was also that we found issues with process, or issues with the data that we had. Those are huge findings, and things that people don't immediately think about when they think threat hunting. But there are so many other outputs that you can get. And just the knowledge from understanding that threat in your environment, and sharing it out with your teams, like your incident response team, is super valuable.

Interesting, interesting. And so, okay, so you go there, you go on to Splunk, you get deeper into it. I'm curious, a company like Splunk I think of in really two camps in cybersecurity, and candidly, threat hunting is not one of those top two use cases that I think of for Splunk. I think of Splunk for my archive and my audit log, for regulatory and compliance purposes. And then I think of Splunk for more like detection engineering. How do you see the overlap between detection engineering and threat hunting? Because there's probably a lot of the same queries, and it's obviously the same data sources. How do you see this quote unquote synergy, to use a buzzword there?

There's so much synergy between the two, and there is a lot of overlap. At the end of the day, we are both looking at data and trying to find bad. I see intel funneling into threat hunt, which then funnels into detection engineering or incident response. So intel is brought in, threat hunting does intel-driven threat hunting, and they build detections or find things and ship them off to those teams. So I think they work together hand in hand. And I think it's really funny when you bring up Splunk and say it's not a threat hunting platform. Yeah. But what is a threat hunting platform, if you think about it? That's a great question. There's not really a top one. No. And if you're threat hunting, you're just looking at data all day. Yeah. So you need a tool where you can look at data really well. Yeah, I think Splunk is underutilized as a threat hunting platform, to be honest. Just the core Splunk. Yeah. But I'm definitely biased.

Yeah, yeah. Fair enough. Something you said there, I just want to latch on to and unravel a little bit more, because you raise a really good point, which is that there isn't a quote unquote leading threat hunting platform. But it led me to think, as you were talking through that, okay, well, how do you even start a threat hunt? Because on the one hand, I've got all this data coming in, telemetry data from whatever source. And I could start from the perspective of just thinking, all right, let me look at the architecture of that application, let me do a threat model around that application and ask myself, if I were an attacker, how would I abuse it? And then look for signals that correspond to one of those things. Or, you said, you start off with threat intel, which I assume is like, okay, we know there is this malware, or we've heard about X campaign, or we've heard about a particular vulnerability, Log4Shell, whatever the case may be. And now we need to go start investigating indicators around that thing we just heard about. Are those the two right sources, or what's the right way to think about how you even start a threat hunt, and what directs you to look for something?

Yeah. When I think of starting threat hunts, I think of intel-driven, like you mentioned. That is the biggest one. It should be the driver for your organization. Sometimes the threat hunters are doing it themselves, sometimes you have intel people; it just depends on your organization, so you gotta work with what you have. Also incident-driven. So if there's an incident on something new, then maybe we can go ahead and hunt it.

So, like, that new incident is actually just the tip of the iceberg. We might have had X number of these same kind, right? Got it.

Yeah, exactly. And red team or penetration testing driven, vulnerability driven. There are so many different avenues you can take; it depends on how mature your organization is. Also executive-driven. Sometimes leadership is going to come to you and be like, can you dig into this? Can you hunt this? You get to flesh out the idea a little bit with them, but it could be the start of a threat hunt.

Interesting. Awesome, awesome. So, okay, so you pick up from intel, you pick up from the threat model, the executive, whatever the source is, incident source, whatever the case may be. Talk to me about what that process really looks like, because I think a lot of people hear threat hunting and might have a misconception about what it is. In my mind, what I know of it, and again, I have limited exposure to this domain, I think about a lot of data searches and queries, and then analysis of log files. And then maybe one of these queries finds me one suspicious entry, I tug on that thread, and I start looking for correlated queries or the next action by a particular user, and kind of digging down from there. Is that the right way to think about it?

It is. That is the main part of threat hunting. That is the fun part. Let's call it the execution part. But there's so much more to the process of threat hunting. I look at it as you're becoming a mini SME, a subject matter expert, in whatever area you are threat hunting. So what do we do to become an SME? We've got to do research. We've got to understand whatever topic it is. So if you have a hypothesis, or say you're doing more of a baseline and digging into a data source, you need to do your due diligence and understand what that threat looks like or what that data source is, or even both. You want to be able to tell bad from good, so that when you get to that execution standpoint and you're looking at those events, you can understand whether something is suspicious or not. So doing that research, doing the planning part of your threat hunt, is going to take even longer than the execution most of the time. It's as important, if not more important, than just doing the querying, which is the fun part.

Yeah. Awesome. So I want to ask a final question on this topic before we change gears a little bit. We're like eleven minutes in and we haven't talked about AI yet, so we're probably behind schedule in that sense. But I'm just curious, from your perspective, as somebody who's been doing this a while, what are the things that have surprised you the most in the work that you've done? Whether it's super creative things that you've seen, or whether it's, I never in a million years would have expected things to develop this way, or changes in threat hunting over the years that you've been doing it.

I think what has surprised me the most is the maturity of threat hunting and the ways you can threat hunt. When you think of threat hunting, you think, let's see, we're running in Splunk, we're just filtering, doing basic statistics on things, looking at a table. It's very basic, let's just say. There are so many more advanced methods that you can use to threat hunt, and that's where the statistics and data science come in. I never thought I would be looking at means, medians, standard deviations, z-scores again, right? And here I find myself using them to understand environments and understand new data sources. Yeah. Let alone actual data science, which is not something I am super knowledgeable on, but I have brushed the surface of it to see how I can use it for threat hunting. So those have been things that have been really eye-opening for me, and just a whole new space to learn about and challenge me as I've grown with threat hunting.
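
The basic statistics Sydney mentions can be made concrete. As a rough illustration, here is a hypothetical z-score baseline over per-host event counts; the hostnames and counts are invented for the example, and a real hunt would pull these from your SIEM:

```python
import statistics

# Hypothetical daily process-execution counts per host (illustrative data only).
counts = {
    "host-01": 120, "host-02": 131, "host-03": 118, "host-04": 125,
    "host-05": 122, "host-06": 129, "host-07": 117, "host-08": 540,
}

mean = statistics.mean(counts.values())
stdev = statistics.stdev(counts.values())

# Flag hosts whose activity sits unusually far from the baseline (|z| > 2 here).
outliers = {host: round((count - mean) / stdev, 2)
            for host, count in counts.items()
            if abs(count - mean) / stdev > 2}
print(outliers)  # only the spiking host remains
```

In SPL terms this is the same idea as computing an average and standard deviation by host and then evaluating a z-score per event count; the Python version just makes the arithmetic explicit.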

Yeah. Awesome. Awesome. Thanks for that. So let's change gears and talk about AI, and talk about the Agentic Threat Hunting Framework. Is that the right phrase? Did I get the title right? That is right, yes. Awesome, awesome. Well, first of all, you wrote it. So tell us what it is, tell us what inspired you to write it, and then we'll have follow-up questions, I'm sure.

Sure. So writing the framework was an idea that I kicked around with the founder of my current company, Nebulock. Like I mentioned, we're doing AI-driven threat hunting and we're in that space. But we would go talk to customers, or even when I was at Splunk we'd talk to customers, and people would be like, how do I bring AI into my threat hunting workflow? Initially it's just copy-pasting into ChatGPT or whatever is approved in your organization, but that's not scalable at all. So I came up with this framework that gets you from zero to an agentic workflow. I included a maturity model. There is also a pattern, called the LOCK pattern, which can be used to drive your threat hunts and templatize them. It's something you can use; it's also something AI can use. Okay. This is what I've been working on the past few months. I've been able to talk with a few people about it, and it seems to be working out well so far.

Okay. And you said there's a LOCK pattern as part of it.

Yes, the LOCK pattern. It's the standard pattern that you would threat hunt with. It stands for Learn, Observe, Check, and Keep. So just like you would do a threat hunt, these are the steps you would take, and underneath them there are other, smaller steps.

Okay, okay. And then I assume that aligns with what we were talking about earlier, where you learn about a particular behavior or a particular threat, you observe it by running some queries and finding some data, you check that that data actually matches an indicator of some kind, and then you keep the relevant data for future purposes, for indexing, for archival, for audit, etc. You got it.
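
For readers who want something concrete, the Learn, Observe, Check, and Keep steps could be templatized along these lines. This is a hypothetical sketch, not the framework's actual schema; the field names and sample values are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class HuntRecord:
    """One threat hunt, organized by the LOCK steps: Learn, Observe, Check, Keep."""
    hypothesis: str                                     # Learn: research and framing
    queries: list[str] = field(default_factory=list)    # Observe: what was actually run
    findings: list[dict] = field(default_factory=list)  # Check: events verified as suspicious
    artifacts: list[str] = field(default_factory=list)  # Keep: reports and data retained afterward

# A hypothetical hunt, filled in as each step completes.
hunt = HuntRecord(hypothesis="Unusual LSASS access on workstations indicates credential dumping")
hunt.queries.append("index=endpoint TargetImage=*lsass.exe | stats count by Image, host")
hunt.findings.append({"host": "ws-42", "image": "procdump.exe", "verdict": "suspicious"})
hunt.artifacts.append("hunts/2026-01-lsass/report.md")
```

Because every hunt lands in the same shape, both humans and an AI agent can read past hunts back out of a repo without guessing at their structure.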

You don't even need me! You know, I'm sure the audience is going to benefit from hearing it talked out explicitly. So, awesome. Okay. A couple of things in there I'm really curious to hear about. You talked about a maturity model in there. What is a maturity model in threat hunting? What does that mean, and how do you measure it? And from the least mature to the most mature, what are things to look out for?

So if we think back to, like, Sqrrl's Hunting Maturity Model, we have all types of automation and ways you can look at your data. I mean, that came out, I think, over a decade ago now, but it's still very relevant. And this is similar; it builds on something like that. It starts with your very manual threat hunting, which is what we're probably doing now: you are taking notes in Jira tickets, Slack threads, Google Docs. It slowly grows to where you document in a repo, like a GitHub or GitLab repo, which makes it easily searchable, easily accessible by AI. And then slowly you can add things like MCP servers or agents to it, where you can access all of your data and run threat hunts. There is really a lot of capability, and it's all in the GitHub repo I have. I even have it so the Splunk MCP server is connected, so if you have a Splunk stack you can connect up to it and run queries right from it, based on the framework. It also has a CLI feature, which is where it gets its power, I think. It's a command with a bunch of subcommands underneath it. You can use that to create your hunts and work through them, or the AI can use it. And that's what makes it more efficient and so much faster, from my experience and what I've been able to share with people.
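
A hunt CLI with subcommands, as Sydney describes, generally has a shape like the sketch below. To be clear, the `hunt` program name and the subcommands here are invented for illustration and are not the framework's actual CLI:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Skeleton of a hypothetical hunt CLI: one top-level command, task-specific subcommands.
    parser = argparse.ArgumentParser(prog="hunt", description="manage threat hunts in a repo")
    sub = parser.add_subparsers(dest="command", required=True)

    create = sub.add_parser("create", help="scaffold a new hunt from a template")
    create.add_argument("name", help="short slug for the hunt, e.g. lsass-access")

    run = sub.add_parser("run", help="execute the queries attached to a hunt")
    run.add_argument("name")
    run.add_argument("--dry-run", action="store_true", help="print queries without executing")

    return parser

# Example invocation, parsed programmatically instead of from sys.argv.
args = build_parser().parse_args(["create", "lsass-access"])
print(args.command, args.name)  # create lsass-access
```

Because the interface is plain text with predictable subcommands, an AI agent can drive it the same way a human does, which is part of where the speed-up comes from.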

Yeah, because I think that is one of the big draws around involving AI in this process. Threat hunting, and historically a lot of early detection engineering, is super human-intensive and labor-intensive, and it takes a lot of time for people to write the queries, run the queries, check the data. That LOCK process we just talked about is a pretty labor-intensive process without AI in the loop, right?

Yeah. Previously, when I was manually hunting and running queries, doing iterations, doing analysis, creating outputs, doing the research before all that, it could take me anywhere from one to four weeks to run a hunt. And now, I ran one last week and it took, I think, thirty to forty-five minutes.

I mean, that's a huge gain. That is insane.

Yes. It's so cool to see how we've been able to bring technology in. And I've said this before, but I think threat hunting, like everything else in the technology world, is going to change immensely with the introduction of AI. And it is, right now. Last year we were looking at threat hunting one way, and this year it's going to be totally different. It's changing the game, which is super exciting to me. I think it's a great thing. We can just do more, and do more advanced things.

Yeah. I'm curious about one thing with this. When you say agentic threat hunting framework, do you think about this for AI-powered applications, or just anything? Is this just general-purpose threat hunting that can be used anywhere, but way better, way more efficient?

Yep, just general-purpose threat hunting. But you can add agents on top; you can build your own agent. I know agentic AI is huge right now, but building an agent that can use LLMs is one option, and that would help you with your workflows. So there are many different ways to go about it, but I think the goal is to be more agentic and use AI more agentically, and that's what this framework pushes us towards. And you can use it how you want. You don't have to bring AI in if that's not something your team's comfortable with yet; just use the repository and interact with it. It's a really great way to keep your threat hunting organized and be able to search through it. And then, when you get to the point where you can use AI, maybe you're waiting for legal to approve it, you can start searching on that data and making more sense of it, giving everything a memory.

Yeah, yeah. Interesting. When you think about the process we've described already, the LOCK process, or the work that you've done, where do you think is the biggest AI boost right now? Is it on the processing of the sheer volume of logs? Is it on the analysis of the log files once it finds something, to go contextualize it and figure out what a log file actually represents or means? Or is it really end to end?

That's a great question. I think my answer to this has changed in the last few months, but it is, overall, end to end. Initially, I would have said the research and the documentation, because no one likes that. Most people don't like to document. I don't like to document. It's just a painful process, taking everything I learned, putting it out, and making sure it makes sense. That's the hard part, and getting AI to do that is the easy part. Now, something I've been exploring more is the actual execution of queries, using something like the framework, and I've found that to be very powerful. I think there's a lot of trust-but-verify in this, which we have to be careful about. You know, when we're having it run queries, or pull back data and do analysis, we should be double-checking those things, at least right now. AI is still hallucinating. Yep. But I think it gets us so much further, and it's only going to continue to get better. So it's very, very exciting to see. I love it.

Yeah. Yeah. That is super exciting. I think it's one of these things where, like you said, we talk about agentic a lot right now, and of course there's a ton of potential, but I still feel like we're in the early days of actually making it production quality and, to your point, trusted at the level where we could say, yeah, okay, this thing can run unsupervised, I don't need to verify the output of this system. For instance, if I think about incident summarization or contextualization, I still think hallucinations could actually be more detrimental than productive in those environments, because they'd likely throw you down the wrong path. So I'm curious: you see a ton of potential, you see a lot of immediate productivity gains in some of these areas. What do you think are some of the current shortcomings, aside from hallucination?

Ah, that's a great question. I think some of the shortcomings are things that I've tried to address in the framework, which are memory and context. Right now, context windows are all the rage, and they are what we are trying to figure out and fight against. So that is definitely a shortcoming of LLMs, especially right now. One thing with the framework that is really important to mention is that something like a repo, with the hunt history as well as other types of knowledge in there, say domain knowledge or knowledge about your company, is one way that you can give your AI memory and context. So when it's in that little context window, it can call to these things really quickly and pull them in and grab them, instead of trying to hallucinate, or going out on the web and pulling something in.
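
One way to picture the repo-as-memory idea Sydney describes: before prompting the model, pull prior hunt write-ups off disk and hand them over as context, rather than letting the model guess. A minimal sketch, where the directory layout and filenames are made up for the example:

```python
from pathlib import Path

def load_hunt_context(repo: Path, topic: str, limit: int = 3) -> str:
    """Collect past hunt write-ups mentioning a topic, to prepend to an LLM prompt."""
    matches = []
    for note in sorted(repo.glob("hunts/**/*.md")):
        text = note.read_text(encoding="utf-8")
        if topic.lower() in text.lower():
            matches.append(f"## {note.name}\n{text}")
    # Cap the number of notes so the context window isn't blown out.
    return "\n\n".join(matches[:limit])

# Hypothetical repo layout with one prior hunt written up.
repo = Path("threat-hunts")
(repo / "hunts" / "2026-01").mkdir(parents=True, exist_ok=True)
(repo / "hunts" / "2026-01" / "lsass.md").write_text(
    "Hypothesis: LSASS access on workstations. Verdict: benign admin tooling.",
    encoding="utf-8",
)
print(load_hunt_context(repo, "lsass"))
```

The same repo then doubles as an audit trail: what the Keep step retains is exactly what gets retrieved as context next time.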

Yeah. So, it's interesting. I've noticed this in my own work, and, you know, I don't work on active cyber investigations or even on internal cybersecurity anymore. By the way, my early career was very similar to yours; I did all the crap in IT that you had to do. And for anybody who's curious, go back and look for the Modern Cyber breach series, where I share one of my worst-ever breaches that I experienced as a practitioner. If you're ever curious, you're more than welcome to check that episode out. But one of the things I was going to say is that, in my own work, which is really around staying up to date on what the latest threats are, in particular around AI adoption inside organizations, there's too much to get through, right? There's too much material, there's too much research, there's just too much. So I've come up with a number of little personas of myself and profiles for analyzing different types of documents. And one of the things I've noticed speaks to your point around having memory and having an awareness of what type of analysis needs to get done for what type of data.

I've actually started creating a bunch of little prompts and storing them separately, and I've really noticed a huge difference in quality. If I give it the prompt that says, hey, I'm a researcher and I'm curious to understand this, versus, hey, I'm a product leader and I'm mapping out a compliance framework for AI adoption, I get very different results from the LLMs with those different prompts. I'm not sophisticated enough to really use GitHub too much anymore myself; I haven't coded in twenty-plus years. But I have Google Docs with stored prompts for all my different personas, and whenever I undertake a new analysis, the first thing I do is warm it up and say, okay, this is the type of analysis that I want for this type of document. I'm curious: do you think we're getting close to a point where you could tell an LLM, hey, you're a threat hunter, here's the threat model of the application, here's the intelligence I've received, go? Or not quite there yet?

I don't think we're quite there yet. I think it needs a little more instruction. Okay. I think we're close. We're very close. I use, like, Claude skills, like you mentioned, where you have prompts for certain personas, or something similar. And while those are great, I don't think we're at autonomous quite yet. I think it needs a little more push forward, maybe in the next year.

Fair enough. Fair enough. Next year? Sounds good. What are the gaps, like, when you're starting today? Where do you still have to give it that little bit of human nudge or context?

I think it sometimes gets a little excited and wants to complete, let's say, the entire LOCK pattern in one go. And I'm like, no, you need to stop and do some iteration on this, like an iteration on a query, or an iteration on some research it was doing. It's just not quite there; it needs a little nudge. Or maybe it's analyzing your data and it's still not quite sure about some certain schema, and it keeps searching the wrong thing. It's almost there. I look at it as my helper, my threat hunter that works with me. It's like a human. Yep. I've even seen it make assumptions about a user's son based on file names, which is fascinating. But if you think about it, a human's going to do the same thing. If they see things, they're going to make assumptions, hallucinations, whatever we want to call it. It's human nature to do that too, to have hallucinations. So machines are not perfect, and we're not perfect.

Yeah. Our VP of product always likes to think about it this way. He talks about it like, remember that it has the execution level of a PhD student, but the creativity and the starting understanding level of a toddler. So you really have to give it all the context and explain things very clearly to get the best results out of it, and then it can execute on some really complex stuff. That's his take; I'm sure you've got your own opinion. I want to change gears for a second and talk a little bit about some of your other work. You are the co-author of the PEAK Threat Hunting Framework. Talk to us a little bit about that, what it is, and where that lives today.

Yes. So I worked on that when I was at Splunk with my former colleagues David Bianco and Dr. Ryan Fetterman, both very knowledgeable and skilled in threat hunting. We built that framework, and we won a SANS award for it. We've pushed out a lot of material. Ryan Fetterman and I actually put out a book called The Threat Hunter's Cookbook last year that was built off of the PEAK framework. So the framework is just a structured way to threat hunt. A lot of people are doing ad hoc threat hunting: you're just running queries, you're going down rabbit holes. Really, you should bring some sort of framework into your threat hunting process and follow it each time you threat hunt. It's a completely open-source framework. It was published under Splunk, but, similar to the Agentic framework, which is also open source, you can use any vendor, any tooling with it. Awesome, awesome. But yeah, that's been out for a few years. And the Agentic framework is not meant to replace that; it's meant to build off of that, or whatever you're already using. You're using TaHiTI, you're using Sqrrl's model, it's meant to build off of that and layer with it.

Awesome, awesome. And we'll have links, by the way, for anybody who's listening, to both the PEAK Threat Hunting Framework and the Agentic Threat Hunting Framework. And something else that we mentioned in your bio at the beginning is that you're the co-founder of the THOR Collective. Tell us what the collective is, and what are you guys up to over there?

So THOR Collective is me and two former colleagues, now friends. They were actually the team at CenturyLink that built our threat hunting program. So it's a little bit of a reunion that we are doing. We run a Substack, we also run a podcast, and we run a few threat hunting projects on GitHub, like HEARTH, which is a threat hunting repository of ideas. But our goal is to provide for not only the threat hunting community, but for the greater infosec community. So we post to our Substack once or twice a week; we, the co-founders, post, and we have guest writers post all the time. We have a podcast every month where we sometimes have guests. We just want to give back, and we have a lot of fun working together, and the community has responded really well. So we've kind of latched on and just continued to go about it.

That's awesome. And it looks like you guys just introduced an MCP server a little while ago. What's that all about?

People keep finding this. I never actually announced it, but there is an MCP server under our GitHub that you can check out. I was working on it the past couple of months, on and off, and it has a bunch of resources connected to it. So if you're into MCP servers and you're using them, it could be a really cool resource for you.

Awesome, awesome. That's fantastic. Well, we've only got a couple minutes left, and I want to take these last couple of minutes to get your opinions, as somebody who is a very experienced threat hunter. As we've talked about a couple of times, we're right in this early era of agentic AI, where real agentic solutions and applications are being built. How do you think about the risks from the threat hunting perspective? We work on this, by the way, in our background here at FireTail: we help organizations understand all the AI usage that's happening, and we've got a number of capabilities for discovery, visibility, observability, some risk profiling, and things like that. But when I look at that, I'm like, oh crap. First of all, there's just the you-don't-know-what-you-don't-know. So many organizations lack visibility and observability, so they don't even have any data to start a threat hunt, much less know where to open firewalls, or whatever the modern cloud equivalent might be for their architecture. But we also see threats around data access, around identity, around permissions being given. So what's the guidance that you give to people like yourself in organizations where agentic solutions are being built, when you talk to them about being ready to threat hunt? Looking for what? What are the things that are on your mind and keeping the community occupied?

Yeah, that's a really good question, because a lot of times, even with the framework, when I present this to people, it depends on what's approved in your organization, and whether you can even do something like that. You might only have access to a certain tool. I think understanding what's approved in your environment, and understanding how people access it, could be a way that you learn the risks and potentially threat hunt on it. Yep. But very much trust but verify anything that you're doing. I say, like, we're doing threat hunting, I ran it in thirty to forty five minutes. That was a very quick one-off. I didn't verify everything. I could probably go back and verify. Might take me another thirty minutes, an hour? But I think there's still enough power and enough new things that you can do with AI, and you can prove them out, that it's worth going down that path, understanding the risks, and seeing how you can apply it in your environment. Sometimes the benefits just outweigh the risks. The risks are still important, and we want to understand them, but there are so many benefits still. We just need to explore it.

That is exactly the tension and the pressure that I hear from organizations. It's like, if you're not using it, you're going to be left behind, and your competitors are using it. There's the FOMO of, you know, we're going to lose our leadership position, our competitive edge, whatever it is. Right. It's this real fear of being left behind by not adopting. But I do think, to your point, it's been proven enough that the benefits are real. So it's not just the FOMO argument anymore. There is a legit, data-driven business case, hard facts that we can look at right now around this. And so I get that.

Something you said that I will also one hundred percent concur with is the question of what is allowed and not allowed. This is just my personal opinion, but people ask me a lot right now about AI governance. And whenever people ask me, they're like, well, what does AI governance mean? Because I do think there's actually a lack of real understanding about the risks that AI and AI-powered applications pose to organizations. So the way that I describe AI governance is actually super boring and generic. It's basically just: do you know that your organization is using AI the way that it intends to? Whatever the rules are for your organization, do you know that that is the way it is being used? If you know that, then you've got a good governance program in place. And if you don't know that, then you've got governance gaps, effectively. That's the short version of it, but that's the way I think about it from that basic perspective. To that end, any ungoverned access or usage of a system opens up a ton of risk. And we've seen that play out again and again, whether it was ungoverned cloud access and the open S3 bucket, or ungoverned SaaS applications and the oops, we just uploaded all of our HR data into a platform that got breached, or an unapproved platform where we caused a GDPR violation. All of those are ungoverned use cases.

When you talk to organizations right now, what are those fears like? Because I've got to believe that I'm not the only one talking to people who, if you ask them, what are you scared about, they scratch their heads and say, I don't know, but I'm scared.

Yeah, I think with AI come a lot of skeptics too, because they're like, well, why? You can accidentally do all these different things. But I put this in the repo when I talk about integrations like MCP servers: there's always risk, so make sure you're understanding the risk. When I say connect up, you know, a Splunk MCP server, I don't mean go into Claude Code, just connect to Splunk, and start running customer data through there. That's not okay in almost every, if not every single, org in the world. Use something that is approved, like Bedrock or whatever you have in your organization. Just be smart, be very thoughtful. The bigger your organization, the more governance it is probably going to have, and that's not a bad thing. It means you have rules and you have something to abide by. Even a big organization with a bunch of rules is going to allow certain ways of working. So just learn what's approved, what makes sense for your use case, and be smart.
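The "learn what's approved" advice above can be made concrete with a small policy gate. This is a hypothetical sketch, not a real MCP client API: the server names, data classifications, and the `can_connect` helper are all illustrative assumptions about how an org might deny-by-default any integration that isn't on its approved list or that would carry data above the approved sensitivity ceiling.

```python
# Hypothetical sketch: gate agent/MCP integrations against an org-approved allowlist.
# Server names, data classes, and policy shape are illustrative, not a real MCP API.

APPROVED_SERVERS = {
    "splunk-mcp": {"max_data_class": "internal"},       # approved, internal telemetry only
    "bedrock-gateway": {"max_data_class": "confidential"},  # org-approved LLM endpoint
}

# Higher rank = more sensitive data.
DATA_CLASS_RANK = {"public": 0, "internal": 1, "confidential": 2, "restricted": 3}

def can_connect(server: str, data_class: str) -> bool:
    """True only if the server is approved AND the data we plan to send
    does not exceed that server's approved sensitivity ceiling."""
    policy = APPROVED_SERVERS.get(server)
    if policy is None:
        return False  # unapproved integration: deny by default
    return DATA_CLASS_RANK[data_class] <= DATA_CLASS_RANK[policy["max_data_class"]]

# The Splunk MCP server is approved for internal telemetry...
print(can_connect("splunk-mcp", "internal"))        # True
# ...but not for customer (restricted) data, and unknown servers are always denied.
print(can_connect("splunk-mcp", "restricted"))      # False
print(can_connect("random-public-llm", "public"))   # False
```

The deny-by-default shape matters more than the details: an agent should have to prove an integration is approved, rather than a reviewer having to prove it isn't.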

Yeah, I think that is a great note to wrap up today's conversation. Sydney Marrone, thank you so much for taking the time to join us on Modern Cyber today. To our audience, we're going to have links to the THOR Collective, links to some of the THOR Collective GitHub repos, the MCP server, and also to Nebulock. It sounds like Nebulock is probably the best place for people to find you, if not one of your GitHub repos, right?

Yep. Awesome. Links to Nebulock, links to the PEAK Threat Hunting Framework. Lots of links in this episode, because there's a ton of work that Sydney has put out into the world. Sydney, for all your work and for taking the time to join us on Modern Cyber today, thank you so much. Thank you, Jeremy. Appreciate you. Awesome.

And to our audience, we will talk to you next time. Depending on when you're hearing this, we'll talk to you later this week for the This Week in AI Security episode. If not, stay tuned for the next episode of Modern Cyber. Bye for now.
