In this episode of Modern Cyber, Jeremy is joined by Ben Wilcox, the unique combination of CTO and CISO at ProArch, to discuss navigating the critical intersection of speed, risk, and security in the era of AI. Ben shares his perspective as a long-time practitioner in the Microsoft ecosystem, emphasizing that the security stack must evolve with each major technology shift—from on-prem to cloud to AI.
The conversation focuses on how to help customers achieve "data readiness" for AI adoption, particularly stressing that organizational discipline (like good compliance) is the fastest path to realizing AI's ROI. Ben reveals that the biggest concern he hears from enterprise customers is not LLM hallucinations or bias, but the risk of a major data breach via new AI services. He explains how ProArch leverages the comprehensive Microsoft security platform to provide centralized security and identity control across data, devices, and AI agents, ensuring that user access and data governance (Purview) trickle down through the entire stack.
Finally, Ben discusses the inherent friction of his dual CISO/CTO role, explaining his philosophy of balancing rapid feature deployment with risk management by defining a secure "MVP" baseline and incrementally layering on controls as product maturity and risk increase.
About Ben Wilcox
Ben Wilcox is the Chief Technology Officer and Chief Information Security Officer at ProArch, where he leads global strategy for cloud modernization, cybersecurity, and AI enablement. With over two decades of experience architecting secure digital transformations, Ben helps enterprises innovate responsibly while maintaining compliance and resilience. He’s recently guided Fortune 500 clients through AI adoption and zero-trust initiatives, ensuring that security evolves in step with rapid technological change.
Episode Links
All right. Welcome back to another episode of Modern Cyber. I am your host, Jeremy, as usual, and I am excited as always to be coming to you with another interesting interview. And I love getting perspectives from people with different experiences than mine, people who have worked on different tech stacks, different ecosystems. We've had people on the podcast talking to enterprises, MSPs, mid-size companies, and everything in between. And today I'm delighted to be joined by somebody coming from the Microsoft ecosystem, maybe not directly from Microsoft, but a long-time practitioner in that world. And I myself haven't used much Microsoft technology since 2007, so I'm really curious to learn what's changed on that side.

I am joined today by Ben Wilcox. Ben is the chief technology officer and chief information security officer at ProArch, where he leads global strategy for cloud modernization, cybersecurity, and AI enablement. With more than two decades of experience architecting secure digital transformations, Ben helps enterprises innovate responsibly while maintaining compliance and resilience. He's recently guided Fortune 500 clients through AI adoption and zero-trust initiatives, ensuring that security evolves in step with rapid technological change. And that is definitely something that we'll be talking about on today's episode. Ben has a ton of certifications: CISSP, CCSP, Certified Azure Solutions Architect Expert. I didn't know that that was a level or designation on the Azure side, but he is also certified on AWS. Ben, thank you so much for taking the time to join us today on Modern Cyber.
Hey Jeremy, thank you so much for having me. It's my pleasure to join you today. Awesome, awesome.
Well, let's start with something that I know you have talked about a lot, and you work with customers a lot, which is kind of the modern security stack. And I like to think about this, and I'd be curious what your perspective on it is. I've given talks recently about how I've gone through three phases of technology transformation in my career, from kind of like desktop computers and small office lands, to data centers and co-location and internet services, to the cloud, to mobile apps, to now AI. And the argument that I've raised with people is that as you make those transitions and the technology that you use, your security stack has to evolve along with it, because mostly your security tooling doesn't provide coverage for the new technology platform that you're in the process of adopting. So I'd be curious. One, does that resonate with you? But two, how do you talk about that transformation or that transition process with your customers?
Sure. I think that when it comes to helping customers understand what they need to protect today, it's important to think about it from maybe two angles, right? What do you have at this moment that still needs to be protected, and can you do it better? And then secondly, where is your business going, based on your business objectives? I come from a consulting background. I spent a lot of years mid-career in consulting, trying to understand what the business value is, how that translates to technology, and what we should be doing in that space. So if you can understand the business needs and understand what the next evolutionary period of technology is for the business, you can help prepare them for alignment into that stack.

And that's why, if I think about where Microsoft falls into play on this (and this is where we've aligned quite closely with them), especially when you're looking at AI, so many businesses are looking at this next frontier-firm vision, right? How do you do more with less, and so forth? There's always that need to stretch yourself there. Well, so many businesses haven't completed all the things in the foundation area, and they don't necessarily have, for example, a good strategy on their data security. They may have bits and pieces. They have their structured data, and maybe they feel like they've secured it, but maybe they haven't done the same with their unstructured data. Now they've also bled into the cloud over the last ten years and started adding cloud services, but they might still have some on-prem stuff. There are so many different pieces. And when we talk to customers, the challenge that I keep hearing is: we have data in all these different locations. It could be on-prem, it could be in some of the Microsoft 365 services, or Salesforce, or whatever those pieces are. But we want to be able to use all that information in a safe, secure manner that matches the governance and the security policies that we want to adhere to. And that's really hard to do when your data is all over the place. It's also very hard to do if you have multiple security vendors with disjoint solutions around that.

So we've been seeing more and more interest over the last year and a half in moving to providers that have more centralized and robust capabilities across all of those stacks. And that's why we're relying on the Microsoft side when it comes to security: it's got very broad capabilities across cloud, on-prem, Microsoft 365, and the business tools, and at this point they're even adding lots of capabilities for third-party apps. So we try to help people get the most out of it and create that big story that ties it all together. Because at the end of the day, they want AI that works, AI that's fast and rapid to deploy, and AI that secures and protects their information.
But to that point, there are three things from what you said that I want to dive into. One is when you talk about helping customers check their readiness to take that next step. So whether it's "we're about to embrace AI in a serious way and we need to get our data in shape," or you need to start leveraging that data and then you realize that you don't have good data security. Or maybe it's data governance. Maybe you've got a lot of duplicate data. Maybe it's data hygiene. You find that there's this backlog of tech debt inside the organization. How do you work with them to make sure that that doesn't derail these processes? Because I've seen that be a real concern for organizations.

Sure. And I think it's really going to depend on the organization and how much risk they're willing to accept in those first use cases. So maybe there are some first use cases that we can help enable from a technology perspective that aren't going to expose them to a lot of risk. And some of those might be enough to get the wins that help secure the security budget for working off that tech debt, because half of what you have to do is convince someone that it's a worthwhile investment to remove that technical barrier. So if you can find use cases that don't carry a lot of risk but can bring some good ROI, something that brings value directly to the business, you can make that case with the business in turn.

Gotcha. I mean, it's a little bit like the organizational side of things is actually most important to line up in order to unlock the technological side of things.

Absolutely. The business buy-in.
The second thing that I wanted to dive into: people who know me know that I've been out of the Microsoft ecosystem for a long time. I did start my career as a Windows NT 4 admin. And in fact, when I joined a company back in 1998, we still had one NT 3.5 server online, and that was actually the source of my worst-ever breach inside an organization. It was because of that Windows NT 3.5 server. You can find that episode here on the Modern Cyber feed, by the way; I tell that story about how we got breached and why. But in any event, through whatever set of circumstances, I ended up joining other companies later in my career where Microsoft was not really a favored vendor, whether for desktop operating systems or for email platforms or for cloud or whatever the case may be. PowerPoint seems to be the one constant that nobody in the world can escape from, for whatever reason, but we'll just leave that as it is. But on that point, I'm curious: you said that you guys have put a lot of eggs in the Microsoft basket, so to speak, in dealing with your customers, because they bring everything together. Just give us a high level of what that means in the transformation-to-AI phase. What are the capabilities you're getting from the Microsoft platform that help customers move forward?

Yeah, that's a great question: what do you get from the Microsoft platform that helps advance AI more rapidly, to paraphrase your description there. I would say it brings that story. It's a place that you can store your data, right? You've got all the cloud resources, from Fabric to individual pieces like Snowflake or Databricks that you can run for your data platform. You have advances now where Copilot is becoming a de facto standard on the desktop for many mid-size organizations, and Copilot is also able to bring the agentic side into place, so you can bring your own custom use cases, custom data sources, custom implementations that are focused on your outcomes and what's acceptable for you. So it's not just an end-user tool. But then you also have all of the security sides: you have the security and the device management, and you have the security on the user identities. The piece that I think people are really interested in is, how do I make sure that the person who's using the Copilot tool today only gets access to the information that's designed for them? And that starts with your identity, right? And that identity piece can trickle through your MCP server and go out to your data sources.

So the context that is returned is only applicable for that user's role. That agent doesn't have to figure out what is applicable; it's getting more of the context from the user itself and returning what's appropriate for that user context. Information like the user's role can live in the MCP server, but because you're also tied into identity, it knows who you report to, what department you're in, all these other pieces. And that entire story is applicable throughout your devices, your identity, your data, and also your security tools, surfacing the context that's relevant and protecting this information as well.

Gotcha.
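The identity trickle-down Ben describes can be sketched in a few lines: the caller's identity arrives with the request, and the retrieval layer filters context before anything reaches the model. This is a minimal illustration, not Microsoft's actual implementation; the names (`UserIdentity`, `retrieve_context`, the `DOCUMENTS` store) are all hypothetical:

```python
from dataclasses import dataclass


@dataclass
class UserIdentity:
    user_id: str
    department: str
    roles: set  # e.g. {"sales", "manager"}


# Hypothetical document store: each record carries the roles allowed to see it.
DOCUMENTS = [
    {"text": "Q3 sales pipeline", "allowed_roles": {"sales"}},
    {"text": "HR salary bands", "allowed_roles": {"hr"}},
    {"text": "Company holiday calendar", "allowed_roles": {"sales", "hr", "engineering"}},
]


def retrieve_context(identity: UserIdentity, query: str) -> list:
    """Return only the documents the caller's roles permit.

    The identity is applied *before* anything reaches the model, so the
    agent never has to decide for itself what the user may see.
    """
    return [
        doc["text"]
        for doc in DOCUMENTS
        if identity.roles & doc["allowed_roles"]  # any overlapping role grants access
    ]
```

The point of the design is that authorization lives in the retrieval layer, tied to the validated identity, rather than being left to the model's judgment.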
And so, okay, you get all of that from an overarching-capabilities perspective. On a more practical basis, what are some of the areas where you're hearing the most concerns? Is it "hey, I want Defender on the desktop because I need to be able to monitor what employees are doing"? Or is it more "I need the Entra stuff because I'm going to be provisioning non-human identities for my agents"? Or is it "I've got Azure AI Foundry that I'm going to be using for my experimentation"? Or is it all of the above? What are the things that people are most immediately latching on to?

Purview data security. And it's data security across your unstructured data, so all your Word docs, Excel docs, etc. People want to be able to leverage that, but the business is afraid of what that means from a risk-exposure perspective, because they don't have confidence that they've implemented great security controls, for example RBAC. Have I secured all my SharePoint folders? In the past we've just been able to kind of hide in obscurity, and Copilot, because of Graph, surfaces all those things that people haven't been able to see before. So that's one side. And the same is true on the data-source side too. I have all these business apps, let's say Salesforce. I want to be able to leverage that data in Salesforce, but is the context of this data appropriate for the user that's accessing it?

So I want to bring that information into a data platform. I want to be able to apply the same labeling principles and the same governance that I have with my unstructured data; I want to apply that to a structured data source. So they like that one piece: you define one strategy across your unstructured and structured documents, and that trickles all the way through everything else. You have your Fabric side of it, and you go to show it in a Power BI report; those same sensitivity labels apply there. That user can only see what they're supposed to be seeing; otherwise it's obscured. Or maybe they go to export it; well, you can't export it and save it to a document, because it's meant to be encrypted. Having those types of capabilities allows for much faster and more confident implementations in AI, because there are no questions anymore, right?

Yeah, yeah. And it kind of ties back to what we were talking about earlier, which is that as organizations go to make that first step into AI, they can quickly realize that they don't have good governance. So I get what you're saying. Providing something like this, you get a quick overview, and then you can drill down into use cases as needed.
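The "one label strategy trickling through everything" idea, gating both what a user can see in a report and whether they can export it, might look roughly like this. A toy sketch, not Purview's API; the label names and function signatures are invented for illustration:

```python
# Hypothetical sensitivity ranking: higher rank means more restricted.
SENSITIVITY_RANK = {"public": 0, "internal": 1, "confidential": 2}


def visible_rows(rows, user_clearance):
    """Filter structured rows by sensitivity label, mirroring how a
    document label would gate access to unstructured content."""
    limit = SENSITIVITY_RANK[user_clearance]
    return [r for r in rows if SENSITIVITY_RANK[r["label"]] <= limit]


def export_row(row):
    """Refuse to export anything labeled confidential; in the real
    platform the content would stay encrypted instead."""
    if row["label"] == "confidential":
        raise PermissionError("export blocked by sensitivity label")
    return dict(row)
```

The same rank table governs both the viewing path and the export path, which is the "define once, enforce everywhere" property being described.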
So I'm curious, from the customers that you work with: we talked in your intro about how you've recently helped a number of Fortune 500 customers go through the development of transformation strategies, AI strategies, and things like that. Are there any lessons learned through some of these conversations, aside from what we've covered already around the data-readiness side of things?

Yeah. Some things that I've really observed: there's much greater speed for organizations that have good compliance already. They've been able to skip that step, and we get right into figuring out where the ROI is on their strategies. The other piece that I think has maybe been understated, and where there's more opportunity for organizations to leverage us going forward in helping them get through the innovation piece, is that there still is a bit of resistance, even with the security pieces in place, to using that data, ideating on that data, and seeing what that looks like. So we've been doing a lot more with synthetic data. Tell us what that interesting output would ideally look like, what your data set represents, and then, instead of using sensitive or confidential information, we'll create a custom data set, which doesn't take that long. What we can do then is help them get through that ROI piece of understanding what the return looks like, and we can do it much faster, rather than having to deal with a compliance hurdle or even getting access to the data. Half of what people want to figure out is: how do I work quicker? How do I see a return here? And yet the business processes take some time. So if we don't have to have access to the data, we can help you along without even getting access into your environment.

Yeah, it makes a ton of sense.
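A synthetic data set of the kind Ben mentions can be as simple as generating records that match the shape and value ranges of the real data without containing a single real record. A hypothetical sketch (the field names and ranges are made up for illustration):

```python
import random


def synthetic_customers(n, seed=0):
    """Generate a fake customer table that mirrors the *shape* of the
    real data (fields, types, ranges) so ROI experiments can run
    without touching sensitive records."""
    rng = random.Random(seed)  # seeded so the data set is reproducible
    regions = ["north", "south", "east", "west"]
    return [
        {
            "customer_id": f"C{i:05d}",
            "region": rng.choice(regions),
            "annual_spend": round(rng.uniform(1_000, 250_000), 2),
            "churned": rng.random() < 0.15,  # roughly 15% churn rate
        }
        for i in range(n)
    ]
```

Because the generator is seeded, the same "data set" can be handed to multiple teams and every run is reproducible, which helps when comparing candidate AI use cases against each other.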
It's interesting. There was a meme that I saw recently. A lot of security companies like to say that they enable the business to move forward, and that security should be seen as an enabler, not a blocker. And I actually really like that messaging and that positioning; I think maybe the marketing teams have gotten hold of it a little too much recently. But the meme that I saw that I really liked was from the David Beckham documentary. I'm not sure if you're familiar with it, so I'll set the context. Victoria Beckham is going through this interview, and she talks about how she had a middle-class upbringing. And David Beckham, her husband, very much questions her on that, because it turns out that she was actually driven to school in a Rolls-Royce sometimes, so he calls her on the middle-class upbringing, let's say. And like a lot of memes, they take that set of screenshots and superimpose some text. This one was: "Oh, security enables the business." "Come on, be honest. How does security actually enable the business?" And somebody says, "By providing visibility into risks." "Well, how does it really?" And when push comes to shove, it's actually compliance that enables the business more than anything. And there's some truth to that, right?

When we at FireTail go into a customer relationship, we don't tend to get asked, "What's your data-handling process?" We might get asked that by certain teams that like to dig deeper, but nine times out of ten we get asked for our SOC 2 documentation. So it's really that compliance step. But I think what you just called out is something really interesting that I hadn't thought about too much: maybe compliance enables that business relationship to move forward, but if it also forces discipline on the organization such that I'm more ready to go into AI adoption because I've got good internal procedures and processes in place, that's a strong win. And that's actually not a value prop that I hear any of the compliance vendors touting as they come out and talk to us. So it's really interesting to hear that you see that observation in the real world with the customers you're dealing with.
I love your point. I completely concur, and I love the analogy to the meme. One of the other things I think is interesting in this space is that everyone's really seeking speed; we're at a pace of just being so impatient. Whether it's compliance, I want compliance fast. And they want to know how you're handling your data. A good question to ask people, especially when you're working with AI, is: describe your data flow. How does your data flow through your organization? Very few organizations can actually handle that. You end up with some that do have a good data-handling process; most of the financial firms have a decent idea there. But you start getting into any other industry, honestly it doesn't really matter which, even the medical side, and they have a hard time understanding where all the contexts are and how all their information flows. Which makes it very hard to understand where your risks are in AI. If you can't map that out, you're not going to see them. Because we're in a state where some of the stuff we're doing in AI is very simple: you have one source that you're querying. Well, what if all of a sudden you have fifteen data sources, and it's blending all these things together and pulling in the dynamic side of it? So you've got to start understanding where your data exists and how this data might flow back through, because then you'll start seeing where your risks are, I think, in the future.

Well, I'm curious on that note. We talked about a couple of risks around AI adoption, the data one being the one that I think we spent the most time discussing, and we had a little comment about Entra IDs and non-human identities being used there. But outside of those, what are some of the other big risks coming up in these conversations? When you talk to customers about, hey, let's go down an AI transformation journey, let's figure out what AI use cases make sense for your business, and then, as we examine those use cases, let's figure out what the associated risks are. Are there any other risks that come up regularly, or is it more one-offs here and there?
Honestly, I'm a bit surprised that there's still so much conversation around the output of the AI, whether it's bias or hallucinations. I feel like that's kind of a given at this point; that's not really the risk that you need to focus on. And I'll get pushback from my coworkers on this, because I'm less concerned with the output being incorrect. I'm more concerned with the big data breach. That's a business takedown. The other ones, sure, they're a problem, but...

Yeah, that's a really great point, because, to your point, if I think about the impact of either of those scenarios: the impact of the output being wrong is nine times out of ten a one-off. And this is a little bit due to the nature of the non-deterministic algorithms that these LLMs follow. So Ben or Jeremy has a customer-support interaction via some chatbot, and I get a response that's off. Okay, maybe it's terribly off, and it's offended me and ethnically slurred me or insulted my grandparents or whatever the case may be. All of those in one, right? I could be the most outraged consumer in the world, and to be sure, I'm going to screenshot it, I'm going to blog about it, I'm going to post on LinkedIn about it, I'm going to tag the company. I am absolutely going to clickbait, rage-bait, whatever you want to call it, about that interaction. And you know what? The company is probably going to point their finger at the LLM and say that they're tightening up, and LLMs are experimental, and it's a new AI thing, and maybe they'll announce that they're going to slow down that initiative. But to your point, it's not going to derail that organization.

But if they have a major data breach of a customer data set that they're using for training purposes or for RAG or whatever the case may be, and that gets out there, that's potentially a company-ending kind of issue.
I agree. And we're also interacting with all these third parties now that are AI-based apps, and with the input and output of those. No one's talking about that either. How do you deal with the potential access that those systems have going out? So you have all these little areas that can cause risk, data breaches, that aren't necessarily being addressed, and you as a business might not have much visibility into that, if you're feeding systems in and you're at the application level where it's processing data input and output and it's able to take actions. Yeah, a whole other set of risks sitting on that side.

And to that point, security in my mind is always a risk-management kind of thing. One of our advisors, Sunil, has a saying that I like to borrow from time to time: the most secure computer in the world is the one that's unplugged, switched off, encased in cement, and at the bottom of the ocean. And even then, I can't guarantee it. So everything comes with some level of inherent risk. And right now, to the point of the conversation we just had, people worry about the inputs and the outputs. And that is the nature of the inherent risk of the technology that we're using. It is non-deterministic. We all know that going in, and yet we go in eyes open, knowing that if I use this LLM and I put the same thing in ten times, I might get one answer eight times and then two different answers the other two times, as opposed to software code, where I put the same thing in ten times and I'm going to get the same output ten times. And we still do it because we think there is a gain to be had and a value to unlock for the organization. So that is the risk that you cannot control, in my mind. You cannot control what that LLM does. You can do some training, you can put some guardrails, etc., on the LLM itself. But what you can and should control, arguably, is all the things around it.

So if you think about what kicks off an LLM-powered process: in our experience, every architecture that we've seen basically starts with some application that then talks to the LLM. Well, how does that application process get started? Typically with an inbound API call. And so what can you control about that inbound API call? Well, I can make sure that all my endpoints are properly authenticated and that there are authorization checks for every function call. And I can sanitize and validate all of the inputs going in, and then, and only then, hand it off to the LLM. So I know that what's going in is as clean and as authorized and as safe and secure as it can be. I've got full control over that, and arguably that is a point where more investment should be made. I'm curious, does that come up in any of the conversations you have with people?
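The gate Jeremy describes (authenticate, run an authorization check per function call, sanitize the input, and only then hand off to the model) could be sketched like this. All the names here (`ALLOWED_SCOPES`, `handle_request`, the scope strings) are illustrative assumptions, not a real framework:

```python
import re

# Hypothetical mapping of application actions to the OAuth-style scope required.
ALLOWED_SCOPES = {"summarize": "docs:read", "draft_email": "mail:write"}


def authorize(token_scopes, action):
    """Authorization check for every function call, not just at login."""
    required = ALLOWED_SCOPES.get(action)
    if required is None or required not in token_scopes:
        raise PermissionError(f"not authorized for {action!r}")


def sanitize(text, max_len=4000):
    """Validate and clean the input before the model ever sees it."""
    if len(text) > max_len:
        raise ValueError("input too long")
    # Strip non-printing control characters, keep tabs/newlines.
    cleaned = re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text)
    return cleaned.strip()


def handle_request(token_scopes, action, payload, call_llm):
    authorize(token_scopes, action)  # fail closed before any model work
    clean = sanitize(payload)
    return call_llm(action, clean)   # only clean, authorized input reaches the LLM
```

The design choice being argued for: the checks are deterministic code you fully control, sitting in front of the non-deterministic model you cannot.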
Yeah, it does. People are interested in making sure that they have clean inputs, and I think where the struggle is, and maybe where we have a lack of technology at this point, is ensuring that the outputs, and the things that are happening in between, have the right visibility and the right telemetry. For example, you can have all the guardrails that you want in the LLM tools and the AI tools like Foundry from Microsoft. You have guardrails. Well, that's not going to stop a threat actor from being really sneaky and finding ways to get around those keywords or whatever your implementation has put in place, because that's very static too. That's not a dynamic control. It's designed kind of like a firewall: you have these static rules. It's no different than a WAF or a firewall. Sure, it blocks things, but you know what? I can get creative. I can get around that.

Now you get into the inside, and there are some technology advancements. There are a number of vendors out there that are looking at the prompts and trying to figure out, is someone doing something sneaky in here or not? But there you're still only playing catch-up. You're still always getting to the point where the threat has to be known, kind of like MITRE ATT&CK: you've got to know which way they're trying to go and what they're trying to get out of it. So we still have a long way to go on that front. That's an area of risk: seeing what happens in between, between the user, the application, and the model itself. There are all sorts of different translations and manipulations, and you can get things to happen your way if you're a threat actor in there. So I think if we get more and more visibility in there, we're going to be able to do more and learn more about what people are doing. Over the next two years, that's where I think a lot of the AI advancements are going to be: trying to see what's happening inside that box.
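Ben's WAF analogy is easy to demonstrate: a static keyword guardrail only matches the phrasings it already knows, so a trivially obfuscated prompt slips straight through. A deliberately naive sketch (the blocklist phrases are made up):

```python
# Static blocklist, analogous to a WAF rule set.
BLOCKLIST = {"ignore previous instructions", "reveal the system prompt"}


def static_guardrail(prompt):
    """Return True if the prompt passes the keyword blocklist.

    Like a firewall rule, this only catches the exact phrasings it was
    given; it is not a dynamic control.
    """
    lowered = prompt.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)
```

A prompt using the known phrasing is blocked, but swapping a single character ("1gnore" for "ignore") defeats the rule, which is exactly the catch-up problem Ben goes on to describe.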
Yeah. Shameless plug for those who are listening. If you're interested in getting that visibility, please contact us at firetail. That is a big part of what we do and stuff that we're working on. So I appreciate that call out on there. I want to change gears with kind of the last five to ten minutes that we've got here today. When I was doing your intro, there was something that jumped out at me, which is I think this is the first time that I've ever had a guest who is both the CTO and the CISO of an organization. First of all, I'm curious how that came about.
Yeah. So that that's been, um, I am this guy who likes shiny objects, and I have always gravitated towards doing new things. So okay, very tech oriented. I had my own business when I started off doing this, um, back in the nineties through the early two thousand. I got into software development for a while. I went back to a system integrator, and I just never said no when an opportunity came up. And so I kind of went from the infrastructure side into the the more focused security side of things, um, around twenty ten. And that led me to being just really interested in how do we do things securely? How do we do things where we solve the business problem? And how do we do things also at faster? Because, you know, I, I am someone who also likes to solution things. So when I, when I build something right, I build for the seventy percent right. Can we get to the seventy percent and then we can customize from that. So being able to bring solutions that can be deployed quickly is one of my goals.
And what has happened is that in the world of AI, and I think we're going to see more and more of this, it's not just one single team that's involved. You've got to be able to put together a subject matter expert team that covers data, that covers AI, that covers security. So having dual roles works for me. I have my guiding light on security, but I also have my guiding light on the business, which is what we need to help implement. I enjoy both roles. It's a balancing act: you've got to adhere to the security side of it, but if you're not enabling the business and helping the business get to where it needs to be, well, that's where the money is made. That's why we all have jobs on this side.
Yeah. How do you find that works in practice, on a regular basis? There's a little bit of inherent, not contradiction, but let's say a little bit of inherent friction there. If I think about a CTO's role, it is to identify the right technologies for the organization to move forward with, and then push the organization toward them, typically as fast as possible. And then I think about a CISO's role, which is to manage risk every step of the way. So if I think about speed versus risk management in practice, what happens on a regular basis? Maybe talk us through a time you've had to think about finding the right balance there.
Oh, that's good. I like that question, Jeremy. So how do I balance speed and security? I'll take it from an application perspective: building a new application. The approach I take is, what do we do from a foundational perspective that would meet the minimum security requirements for how we are using the application today? That's my baseline. It may not be where I want it to be, but that is what we are going to bake into the MVP. That baseline is going to cover the fundamentals of where I feel the biggest risks are. Then, as we iterate and add more features, when we light up a new feature that carries the next risk level, say something that can take more action in the agentic world, we might layer on some new security controls. So we incrementally build on our security too. But there are foundational things that are going to be in the very first build: we're going to make sure we have as much telemetry as possible. I can't make decisions about the future of security if we don't have the visibility immediately. We're going to have all of those normal guardrails in, and all the tools lit up, so we can get as much as we can on that front.
But as we go through, we're going to add more and more, iterating and adding security in layers, so that when we get to a more mature product, all the security controls are there. We're just not going to front-load it, and we're not going to slow down the innovation side. Because if you're building something, like most product builders today, whether you're a product owner or a developer, your job is to ship features. And it helps to have someone guiding you, saying, hey, this is what I need you to do initially, and this is the long-term roadmap. Try to give a roadmap out for a year: this is the product, this is where we need to be. That may change, and we may have to update things on a sprint-by-sprint basis, but at least it's already been planned, and then we can adjust from there. So: try to balance speed and innovation, but ensuring there is some security in place at the very beginning is key.
Do you ever find yourself at that uncomfortable moment when, to your point, every developer's job is to ship features, ship code, ship product, where you've had to let something out the door that the CISO in you didn't want to let go, but you knew it needed to be released for whatever business reason?

That's a great question. Yes, and there have been quite a few. At that point, the follow-up is: that's the next priority, right? We have guarantees that we're going to get to that next step very quickly.

Okay. So how about the other side?

You've got to understand where the stakes are and how much of a risk it is. If it was a big risk, no, we're going to draw the line.

Well, I was just about to ask: have you had the other side of it, too, where you've had to hit stop on a release because the security risk was too high?

Yeah, yeah. If we don't feel like it's going to go, and there's something at risk on that front, it doesn't go. People don't like it. And people don't like the other way either, right? You're making someone mad on one side or the other. But oftentimes I find there is at least a balance somewhere, where you can always find a happy medium.

Yeah. Yep, awesome.

Or maybe there's a compensating control that's not going to be long term. That's the other thing, right? It doesn't always have to be black and white. There could be a compensating control that we can put in place somewhere downstream that solves the problem near term. Maybe it's not one that can scale, but there are opportunities there.

Well, and that's what risk management is all about, right? It's finding the right balance. And if there's a business benefit that outweighs the security risk, or a business benefit coupled with a mitigation, even a short-term one, that gets you on solid footing, you can move things forward.

Yes.
Awesome. Well, Ben, thank you so much for taking the time to join us. Any last thoughts? Any last questions that I should have asked you that I didn't? And also, where can our audience find out about more about you or about your organization if they want to do that?
Well, Jeremy, it's been a pleasure coming on here. I love talking about AI. I think for all of us, we can learn something new from every single interaction we have, whether it's on a podcast like this or working with customers. We're all learning as we go. It's just too new; too many things are happening too quickly, and new risks are coming out every day. The pace is insane, and this is by far the fastest I've ever experienced something. It feels like you're in a catapult, rocketing through space, and hey, there's a meteor we might run into over here, but let's try to get the ship turned around. But I think for everyone: just keep your eyes on the horizon. That's what I always like to keep people thinking about, the future. The signs are going to be there. The risks are going to be there. Just be ready to adapt, keep your eyes up, and be willing to learn and try new things. So, awesome.
And then, for people... go ahead.

No, I was just going to say: and where can people find you?
Sure. I will actually be at Microsoft Ignite in a week, so if you'd like to connect, find me on LinkedIn: Ben Wilcox. You can also find my blog posts and other content over on our website, proarch.com. Please reach out.

Awesome, awesome. We will have Ben's LinkedIn as well as proarch.com, which is spelled P-R-O-A-R-C-H, so don't forget that last H. We'll have both of those linked from the show notes. Ben Wilcox, thank you so much for taking the time to join us on Modern Cyber today.
Thank you, Jeremy, for having me on Modern Cyber.

Awesome. We'll talk to you next time. Bye bye.