Modern Cyber with Jeremy Snyder - Episode 62

Kyler Middleton & Sai Gunaranjan of Veradigm

In this special episode recorded at fwd:cloudsec 2025 in Denver, Jeremy sits down with two cloud leaders from Veradigm: Kyler Middleton and Sai Gunaranjan. The duo shares insights from their joint talk on securing AI usage in enterprise platforms, highlighting real-world challenges around governance, model usage, data sovereignty, and developer enablement.


Podcast Transcript

00:00:00 [Speaker 1]
Alright.
00:00:00 [Speaker 1]
Coming to you again from the sidelines of fwd:cloudsec like we did last year.
00:00:04 [Speaker 1]
I am the host of Modern Cyber as always, Jeremy, and I am really excited to be coming to you today with two guests, not one, but two.
00:00:10 [Speaker 1]
You're getting two for the price of one on today's episode, which, by the way, the price is always free, so you shouldn't be complaining about that.
00:00:16 [Speaker 1]
But we are delighted we get an opportunity to sit down with some of the speakers from this year's event, and we're gonna talk to them about their research, about their lessons learned, about what's in their talks, and what are the messages they wanna share with the audience.

00:00:27 [Speaker 1]
You're getting a chance to get some of this from us even if you weren't able to attend the conference or view the livestream.
00:00:33 [Speaker 1]
By the way, all the talks from fwd:cloudsec are recorded, so you should go check them out.
00:00:37 [Speaker 1]
Today, I'm delighted to be joined by Kyler and Sai, and I will let them introduce themselves.
00:00:42 [Speaker 1]
Say a little bit about yourself, Kyler, and then Sai, and then we'll take the conversation from there.

00:00:46 [Speaker 2]
Absolutely.
00:00:47 [Speaker 2]
Thank you so much for having us, and hello, world.
00:00:49 [Speaker 2]
I'm Kyler Middleton.
00:00:50 [Speaker 2]
I'm our one and only, so far, principal internal AI developer.
00:00:55 [Speaker 2]
We're both employed at a company called Veradigm in the United States that does health care focused stuff, and I am very excited to talk about it.

00:01:04 [Speaker 2]
Awesome.

00:01:05 [Speaker 3]
Hey.
00:01:06 [Speaker 3]
I'm Sai.
00:01:07 [Speaker 3]
I'm a lead architect with the Veradigm cloud platform team focusing on Azure and Azure based technologies.

00:01:12 [Speaker 1]
Azure and Azure based technologies.
00:01:14 [Speaker 1]
Awesome.
00:01:14 [Speaker 1]
So for the audience, tell us a little bit about what your talk is about because I I know you two are co presenting, and it sounds like you've co presented a number of times in the past as colleagues.
00:01:24 [Speaker 1]
So what's your talk this year, here at fwd:cloudsec?
00:01:27 [Speaker 1]
What's it about?

00:01:28 [Speaker 1]
What and what also kind of inspired it?

00:01:30 [Speaker 2]
Sure.
00:01:31 [Speaker 2]
So we're focusing on the platform engineering precepts that are going to be very useful as AI starts to creep its way in, as I imagine it has for, like, almost every platform out there.
00:01:42 [Speaker 2]
Yeah.
00:01:43 [Speaker 2]
We have started to see folks implement AI without a lot of good underpinnings and rules and governance.
00:01:50 [Speaker 2]
And so it's really hit or miss: this one was very well done because the engineer cared a lot.

00:01:55 [Speaker 2]
Right.
00:01:56 [Speaker 2]
This one sends a lot of our private data to a public AI endpoint, and they implemented a ChatGPT API token.
00:02:01 [Speaker 2]
And we have to say, please turn that off.
00:02:03 [Speaker 2]
Yeah.
00:02:04 [Speaker 2]
Please don't do that.

00:02:05 [Speaker 2]
So it's really all across the board, and it's coming for all of our applications.
00:02:10 [Speaker 2]
So

00:02:11 [Speaker 1]
So, I mean, it's really inspired by real world kind of pain points that you guys are experiencing day to day within the organization.

00:02:17 [Speaker 2]
Yeah.
00:02:18 [Speaker 2]
Sai is the lead of the platform team, so he's sort of providing the governance.
00:02:23 [Speaker 2]
And I'm coming at it from the opposite perspective, where I'm the troublemaker.

00:02:27 [Speaker 1]
Okay.

00:02:27 [Speaker 2]
I'm the AI developer.
00:02:28 [Speaker 2]
So I'm doing things, sometimes wrong, sometimes just to get them working and not as secure.
00:02:33 [Speaker 2]
And then Sai comes in and secures

00:02:36 [Speaker 3]
the

00:02:36 [Speaker 2]
things that I make.
00:02:37 [Speaker 2]
So it's it's a good partnership.
00:02:38 [Speaker 2]
We work well together in

00:02:40 [Speaker 1]
that way.
00:02:40 [Speaker 1]
Well, it's funny because what you just described is very much what I think happens a lot of times in the real world and is especially something that I'm seeing around AI today, which, by the way, for me, this is also reminiscent of, like, ten years ago in the cloud where, you know, ten years ago, it was, oh, we've got these developers doing these things in the cloud, and the security team doesn't know about it, and the governance team doesn't know about it.
00:03:03 [Speaker 1]
And it sounds like we're really playing out that same story again when it comes to AI.
00:03:07 [Speaker 1]
I'm seeing a lot of nodding.
00:03:09 [Speaker 1]
And and it's kind of this, like, constant, like, you know, security and governance running to keep up with the developers, and developers are, like, wanting to play with the shiny, new, cool tools.

00:03:20 [Speaker 1]
So I'm curious, maybe, Sai, a question for you is, like, as you go chasing these developers, so to speak, not literally, but kind of figuratively, like, what are the number one or two kind of mistakes that you're seeing from them?

00:03:36 [Speaker 3]
So, I think from a platform point of view, the two major issues I've seen are, you know, one is data access patterns. Right.
00:03:43 [Speaker 3]
Especially when you're accessing super sensitive, highly protected compliance data.

00:03:48 [Speaker 1]
I mean, that's all your data.
00:03:49 [Speaker 1]
Right?
00:03:49 [Speaker 1]
As a health care organization.
00:03:50 [Speaker 1]
Yeah.

00:03:51 [Speaker 3]
And also the second thing is, like Kyler mentioned, trying to use some unvetted models on the Internet

00:03:57 [Speaker 1]
Oh, okay.

00:03:58 [Speaker 3]
From Hugging Face or just downloading Yeah.
00:04:00 [Speaker 3]
Maybe DeepSeek or something like that, which we haven't fully validated to see if it works for the organization or not.
00:04:06 [Speaker 3]
Yeah.
00:04:06 [Speaker 3]
That's also the big kind of a major concern.
00:04:09 [Speaker 3]
And a lot of the tools don't have built-in controls to block stuff.

00:04:14 [Speaker 1]
It's

00:04:14 [Speaker 3]
actually built on top of the tool that you're using.
00:04:17 [Speaker 3]
Yeah.
00:04:18 [Speaker 3]
And the last thing, at least from an Azure implementation, I've seen a lot is if you don't select the correct values, your data could be shipped off to a different geography for processing because of cheaper compute available there at that point in time.

00:04:30 [Speaker 1]
Yeah.

00:04:31 [Speaker 3]
So those things are something that really worries me when someone's like, hey.
00:04:35 [Speaker 3]
I'm just gonna do some AI stuff on Azure or Yeah.
00:04:37 [Speaker 3]
This.
00:04:38 [Speaker 3]
And it's like

00:04:38 [Speaker 1]
Yeah.

00:04:39 [Speaker 3]
Yeah.
00:04:39 [Speaker 3]
You can, but, you know, you have to take care of all of these things.
00:04:41 [Speaker 3]
And a lot of the times, developers, it's difficult for them to keep track because they are trying to achieve a certain goal.
00:04:47 [Speaker 3]
Yep.
00:04:47 [Speaker 3]
I think as platform architects, it's left to us to build those guidelines into the platform so developers don't end up using unsecure features.

00:04:56 [Speaker 3]
Yeah.
00:04:56 [Speaker 3]
Very similar to, like, ten years ago with PaaS services, like, hey.
00:05:00 [Speaker 3]
This is a super cool SQL PaaS database, and, you know, you have to secure it.
00:05:04 [Speaker 3]
Like Yeah.
00:05:04 [Speaker 3]
The same same exact like you mentioned.

00:05:06 [Speaker 1]
Yeah.
00:05:07 [Speaker 1]
And and, ultimately, I think you you can't stop them.
00:05:11 [Speaker 1]
Yeah.
00:05:11 [Speaker 1]
You have to find the right way to kind of work with them and tell them, like, hey.
00:05:15 [Speaker 1]
Look.

00:05:16 [Speaker 1]
From a governance policy, compliance risk, whatever perspective, these are the things that we have tested, we have validated, we know are okay for the organization, whether it is, for instance, like compliance, let's say, for you guys as a health care organization.
00:05:29 [Speaker 1]
I'm sure there are rules around, you know, data cannot leave the U.S.
00:05:32 [Speaker 1]
Right?
00:05:33 [Speaker 1]
So that's got to be a real concern for you.
00:05:36 [Speaker 1]
Right?

00:05:36 [Speaker 1]
But once you get those rules out there, I'm kind of curious from a, from the perspective of, like, Okay, now we've put these rules in place.
00:05:46 [Speaker 1]
What's next?
00:05:46 [Speaker 1]
Like, do you just communicate those rules, or do you then put in technical controls to make sure that people can't color outside the lines?

00:05:53 [Speaker 3]
From a platform, yeah, I think we take it to the

00:05:55 [Speaker 1]
and so

00:05:56 [Speaker 3]
and one is Kyler.
00:05:57 [Speaker 3]
She actually runs, like, a brown bag where we kind of do trainings.
00:06:00 [Speaker 3]
Okay.
00:06:00 [Speaker 3]
We tell everyone, like, hey.
00:06:01 [Speaker 3]
These are the do's and don'ts.

00:06:02 [Speaker 3]
This is what they're supposed to like.
00:06:04 [Speaker 3]
This is what you could do and stuff stuff like that.
00:06:05 [Speaker 3]
Right?
00:06:06 [Speaker 3]
Yep.
00:06:06 [Speaker 3]
But then not everyone remembers all of them.

00:06:08 [Speaker 3]
Yeah.
00:06:09 [Speaker 3]
And that's when the technical controls come into the picture, and that's, I think, where either AWS SCPs or Azure policies, those are, like, guardrails that we can push in and just outright deny stuff or, you know, give them a warning, like, hey.
00:06:21 [Speaker 3]
This is not secure.
00:06:23 [Speaker 3]
If you still want to do this, maybe talk to the security team, and we can work our way around it.
00:06:27 [Speaker 3]
But then Right.

00:06:28 [Speaker 3]
Something that's just not gonna happen.

00:06:29 [Speaker 1]
Right.

00:06:29 [Speaker 3]
Things like that.
00:06:30 [Speaker 3]
Yeah.
00:06:30 [Speaker 3]
Right.
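
(To make that point concrete: the kind of organization-wide deny Sai describes can be expressed as an AWS Service Control Policy. This is a minimal sketch, not Veradigm's actual setup; the allow-listed model ARNs and the OU ID are placeholders you would replace with your own vetted list.)

```python
import json
import boto3

# Hypothetical allow list -- substitute the foundation models your org has vetted.
ALLOWED_MODEL_ARNS = [
    "arn:aws:bedrock:*::foundation-model/anthropic.claude-3-5-sonnet-20240620-v1:0",
    "arn:aws:bedrock:*::foundation-model/amazon.titan-text-express-v1",
]

# Deny Bedrock invocation for any foundation model NOT on the allow list.
scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyUnapprovedBedrockModels",
            "Effect": "Deny",
            "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
            "NotResource": ALLOWED_MODEL_ARNS,
        }
    ],
}

org = boto3.client("organizations")
policy = org.create_policy(
    Name="deny-unapproved-bedrock-models",
    Description="Only allow vetted foundation models",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp_document),
)
# Attach to an OU (placeholder ID) so it applies to every account underneath it.
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="ou-xxxx-xxxxxxxx",
)
```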

00:06:31 [Speaker 2]
Yeah.
00:06:32 [Speaker 2]
Just generally, setting technical rules or even setting organizational rules that say, don't do this.
00:06:38 [Speaker 2]
It's not allowed.
00:06:39 [Speaker 2]
They just don't work.

00:06:40 [Speaker 1]
Yeah.

00:06:40 [Speaker 2]
As much as we want them to work

00:06:42 [Speaker 1]
and on

00:06:42 [Speaker 2]
paper, they look great.

00:06:43 [Speaker 1]
Yeah.

00:06:43 [Speaker 2]
People will still they have goals to achieve.

00:06:47 [Speaker 3]
They

00:06:47 [Speaker 2]
have things they need

00:06:48 [Speaker 1]
to do.

00:06:48 [Speaker 2]
Yeah.
00:06:49 [Speaker 2]
So as much as we can, we're setting the rules, the right rules.

00:06:52 [Speaker 3]
Yep.

00:06:52 [Speaker 2]
But we're also creating paved paths for the right way to do it.
00:06:57 [Speaker 2]
Yeah.
00:06:57 [Speaker 2]
That's a lot of technical work and a lot of training, and they have to go hand in hand, or people won't use the things that you have built.

00:07:03 [Speaker 1]
Yeah.
00:07:03 [Speaker 1]
That phrase that you just used there, paved path, is one that I hear come up, you know, here and there when it comes in particular to kind of guiding developers towards, like, what is the, quote, unquote, right way to do something.
00:07:17 [Speaker 1]
What are some of the things that you've learned over the years as far as, like, how to work with developers, engage developers, communicate with them about what is the paved path and what is acceptable?

00:07:26 [Speaker 2]
Totally.
00:07:27 [Speaker 2]
The bottom line is you have to make it easier to follow the rules than to break the rules.
00:07:32 [Speaker 2]
Oh, okay.
00:07:32 [Speaker 2]
You can make it any way that you want, but just having a bigger stick that says we're gonna punish you later just does not work.
00:07:38 [Speaker 2]
You have to say this is easier,

00:07:39 [Speaker 1]
maybe

00:07:39 [Speaker 2]
in the short run, maybe in the long run, that if you do it our way, Yeah.
00:07:45 [Speaker 2]
Then we'll implement it.
00:07:45 [Speaker 2]
We won't question you as much.
00:07:47 [Speaker 2]
We won't have to change as much of your architecture later.

00:07:49 [Speaker 1]
Okay.

00:07:49 [Speaker 2]
One of our our fun examples was, we have a rule in the company that you're not allowed to use ChatGPT Right.
00:07:55 [Speaker 2]
Public AI.
00:07:56 [Speaker 2]
You're the product if

00:07:57 [Speaker 1]
it's free.
00:07:58 [Speaker 1]
Yep.

00:07:58 [Speaker 2]
And, absolutely no one was listening.

00:08:01 [Speaker 1]
Okay.
00:08:01 [Speaker 1]
Yeah.

00:08:01 [Speaker 2]
So what we decided to do was build an internal AI bot that they could use, and we put it right in Slack and right in Teams.
00:08:08 [Speaker 2]
We made it easier and also more secure.
00:08:10 [Speaker 2]
And in that way, that was just a really great success.
00:08:13 [Speaker 2]
That project worked really well

00:08:16 [Speaker 1]
Yeah.

00:08:16 [Speaker 2]
Because it was easier, and it did more for them.

00:08:19 [Speaker 1]
So, like, as much

00:08:20 [Speaker 2]
as you can build something better, that's also secure.

00:08:23 [Speaker 3]
But I

00:08:23 [Speaker 1]
think a lot of security teams would hear that and say, like, okay.
00:08:26 [Speaker 1]
So I I need to make it easier to do the right thing than to do the wrong thing.
00:08:30 [Speaker 1]
In so many organizations, I would say that reinforcement mechanism is much more oriented around stopping the wrong thing than it is around doing the right thing.
00:08:40 [Speaker 1]
And so, you know, the developer who's like, hey, I'm just gonna go build this thing, they're just gonna get, like, deny, deny, deny policies kicking in.
00:08:47 [Speaker 1]
And then, you know, until they get the accept, then you can sit there and say, like, well, yeah.

00:08:51 [Speaker 1]
Like, you got the accept, so just do it that way.
00:08:53 [Speaker 1]
That's the right thing.
00:08:54 [Speaker 1]
Right?
00:08:55 [Speaker 1]
But, like, where where does that break, or why does that not work as an approach?

00:09:01 [Speaker 2]
That's a good question.

00:09:03 [Speaker 3]
I don't know.
00:09:04 [Speaker 3]
I think my thought process would be that, you know, seeing a lot of denies actually starts to feel like, okay.
00:09:10 [Speaker 3]
You're closing all the doors for me for it to work.
00:09:13 [Speaker 3]
Yeah.
00:09:14 [Speaker 3]
And the only option that you give maybe doesn't really, kind of, serve the use case.

00:09:20 [Speaker 1]
Right.
00:09:20 [Speaker 1]
Okay.

00:09:21 [Speaker 3]
And that's something that we've seen happen at times, where we block a certain action from being performed.
00:09:27 [Speaker 3]
It's like, but I just need this so that I can do this, and then I don't have to do 10 more steps to get there?
00:09:33 [Speaker 3]
But then Yeah.
00:09:33 [Speaker 3]
Like, maybe for the POC, I'll let you bend the rules and just see if it works for you.
00:09:38 [Speaker 3]
But then to productionize this use case, we have to go through these hops and do the secure way.

00:09:43 [Speaker 3]
Yeah.
00:09:43 [Speaker 3]
Another thing I've actually felt more effective is that if you if you kind of communicate the impact of what is happening, like, you know, okay.
00:09:51 [Speaker 3]
We can allow this to happen, but just understand the impact of allowing this is going to be so much more risk, you know, kind of a security breach or something like that.
00:10:00 [Speaker 3]
Right?
00:10:00 [Speaker 3]
Yeah.

00:10:01 [Speaker 3]
That kind of sometimes sinks in very deep and then, like, okay.
00:10:03 [Speaker 3]
So there is a reason why we tell people not to do some stuff Yeah.
00:10:08 [Speaker 3]
When governance teams come to us and say, hey.
00:10:10 [Speaker 3]
Can you please block this, or something like that?
00:10:11 [Speaker 3]
There's a reason why that happens.

00:10:13 [Speaker 3]
And many times the context behind why that's happening is missed.
00:10:16 [Speaker 3]
And when when we communicate that, we see a lot more success patterns for secure development practices.

00:10:22 [Speaker 1]
Yeah.
00:10:23 [Speaker 1]
Yeah.

00:10:24 [Speaker 3]
That's the end of the year.

00:10:25 [Speaker 2]
Yeah.
00:10:26 [Speaker 2]
I think part of this is how you structure your information security team internally.
00:10:32 [Speaker 2]
Are you the bad guys that say no all the time?
00:10:34 [Speaker 2]
Yeah.

00:10:34 [Speaker 3]
Or are

00:10:35 [Speaker 2]
you the good guys that are able to create things that are better?

00:10:38 [Speaker 1]
Yeah.

00:10:38 [Speaker 2]
And, you have to be both, unfortunately, because you have to say no sometimes.
00:10:42 [Speaker 2]
But as much as we can, I try to help steer it towards the, like, we build the thing?
00:10:46 [Speaker 2]
It's better.
00:10:48 [Speaker 2]
I'm your friend.

00:10:49 [Speaker 1]
Yeah.
00:10:50 [Speaker 1]
Yeah.
00:10:51 [Speaker 1]
Sai, earlier you mentioned that there were kind of, like, three things that you see, popping up.
00:10:56 [Speaker 1]
Right?
00:10:56 [Speaker 1]
So one is, not using the approved models.

00:11:00 [Speaker 1]
Two is maybe not using the right regions for those models, so a data sovereignty kind of issue.
00:11:07 [Speaker 1]
And then three is the data that's being used and whether that data is allowed to be used.
00:11:11 [Speaker 1]
Right?

00:11:11 [Speaker 3]
Also access, yeah.
00:11:12 [Speaker 3]
Yeah.

00:11:13 [Speaker 1]
Yeah.
00:11:13 [Speaker 1]
Okay.
00:11:14 [Speaker 1]
And so, like, along those lines, I'm curious, because I think a lot of organizations are going through this pain point right now. And, like, not a shameless plug for FireTail, but this is something that we help organizations discover, like, what's going on.
00:11:27 [Speaker 1]
But I'm really curious from your perspective.
00:11:29 [Speaker 1]
Like, how did you figure out that this was what you guys were facing?

00:11:37 [Speaker 3]
I think after the DeepSeek thing happened, we were like, okay.
00:11:41 [Speaker 3]
We need to stop this, you know. I don't think until then there were really good ways to control it.

00:11:46 [Speaker 1]
Okay.

00:11:47 [Speaker 3]
And then we got a request saying that, okay.
00:11:50 [Speaker 3]
We have to now prevent developers from using DeepSeek for sensitive data and basically for anything else within the organization.

00:11:56 [Speaker 1]
And that was primarily when you say the DeepSeek thing happened, you just mean, like, the the release of DeepSeek and then all the controversy around, like, hey.
00:12:03 [Speaker 1]
It's sending your data to China and all this.
00:12:05 [Speaker 1]
Okay.
00:12:05 [Speaker 1]
Okay.

00:12:06 [Speaker 3]
And then kind of sort of our conversation was like, so today we have DeepSeek.
00:12:12 [Speaker 3]
Tomorrow, we'll have something else.
00:12:14 [Speaker 3]
We can have so

00:12:14 [Speaker 1]
many more.
00:12:15 [Speaker 1]
Right.

00:12:15 [Speaker 3]
And how do we start to secure?
00:12:18 [Speaker 3]
And maybe have, like, an allow list of all the models

00:12:20 [Speaker 1]
that we have

00:12:21 [Speaker 3]
accessed in our company and things like that.
00:12:23 [Speaker 3]
So, yeah.
00:12:24 [Speaker 3]
That's where the conversation started off from.

00:12:27 [Speaker 1]
Okay.

00:12:27 [Speaker 3]
And then that's when we started to see that, at least on Azure, there are policies that we could actually implement to Yeah.
00:12:32 [Speaker 3]
Block certain Yeah.
00:12:34 [Speaker 3]
Marketplace providers or also registries we use.
00:12:37 [Speaker 3]
Yeah.
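
(For the Azure side, one lightweight complement to those deny policies is an audit pass over every Azure AI / Azure OpenAI account in a subscription, checking deployed models against an allow list. The sketch below uses the azure-mgmt-cognitiveservices SDK under stated assumptions; the subscription ID and allowed model names are placeholders, not Veradigm's actual list.)

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
ALLOWED_MODELS = {"gpt-4o", "gpt-4o-mini", "text-embedding-3-large"}  # hypothetical allow list

client = CognitiveServicesManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Walk every Cognitive Services / Azure OpenAI account in the subscription and
# flag any deployed model that is not on the vetted allow list.
for account in client.accounts.list():
    resource_group = account.id.split("/")[4]  # resource group segment of the ARM ID
    for deployment in client.deployments.list(resource_group, account.name):
        model = deployment.properties.model
        if model and model.name not in ALLOWED_MODELS:
            print(f"UNAPPROVED: {account.name}/{deployment.name} -> {model.name} {model.version}")
```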

00:12:37 [Speaker 1]
Yeah.
00:12:38 [Speaker 1]
Well, I wanted to ask about that because, of the three cloud providers, Azure is the one that I know the worst or the least about.
00:12:45 [Speaker 1]
Right?
00:12:45 [Speaker 1]
I'm more familiar with AWS and and Google.
00:12:48 [Speaker 1]
On that, my understanding of Azure is that you kind of have, like, two main AI services within Azure.

00:12:53 [Speaker 1]
You have the Azure AI service, and then you have the Azure OpenAI service Yeah.
00:12:58 [Speaker 1]
Which is kind of like a licensing of ChatGPT and related technologies and models coming from OpenAI.
00:13:04 [Speaker 1]
Right?
00:13:05 [Speaker 1]
Within the Azure AI service, is it also kind of a marketplace of different AI providers?
00:13:10 [Speaker 1]
Okay.

00:13:10 [Speaker 3]
So in the Azure AI service, you also could use, like, the ChatGPT and the OpenAI models.
00:13:17 [Speaker 3]
And you also can get any other models from Meta or Nvidia and things like that as well.

00:13:23 [Speaker 1]
Cohere, AI21, and all the other providers.

00:13:25 [Speaker 3]
All the other.
00:13:25 [Speaker 3]
Okay.
00:13:25 [Speaker 3]
Okay.
00:13:25 [Speaker 3]
Understood that.
00:13:26 [Speaker 3]
And we also could use Azure ML workspace to Okay.

00:13:29 [Speaker 3]
Build our own AIs and fine-tune and stuff like that.
00:13:32 [Speaker 3]
Okay.
00:13:32 [Speaker 3]
Models, I mean.
00:13:33 [Speaker 3]
And then, yeah.
00:13:36 [Speaker 3]
And then you have the OpenAI service, which is also the OpenAI stuff that's available there.

00:13:41 [Speaker 3]
And all of these have policies that apply.
00:13:43 [Speaker 3]
We can actually apply policies on all of these saying you only can get a certain version

00:13:46 [Speaker 1]
or a certain Right.
00:13:48 [Speaker 1]
This provider, this model, this region, this version of the model, etcetera.
00:13:52 [Speaker 1]
Okay.
00:13:52 [Speaker 1]
Have you found any and I think, like, basically, all the cloud providers work that way, and you can put in place policies around that for the organization in any of the cloud providers.
00:14:03 [Speaker 1]
But right now, it's a little bit the Wild West, I think, in most organizations as far as, like, one, they don't have a known policy for, like, oh, this is the model, provider, version, region that we want to be using.

00:14:15 [Speaker 1]
And two, I just think there's, like, so much experimentation going on right now, right, that, like, so many organizations are just moving really, really quickly.
00:14:23 [Speaker 1]
And, you know, today, it's I've got this chatbot use case that I want to build, and tomorrow, it's a recommendation engine.
00:14:28 [Speaker 1]
And next week it's a, I don't know, a business intelligence or data analysis.
00:14:34 [Speaker 1]
So how have you thought about that in terms of, let's say, like, allowing a use case of experimentation where it might be a lot of different models and a lot of different requests.
00:14:46 [Speaker 1]
Did you think about, like, data anonymization, or did you think about, let's say, like, limiting access to accounts in terms of making sure that there's no external access in certain experimentation use cases, or how have you approached that?

00:14:59 [Speaker 3]
So for the platform that I manage, it's a combination of both.
00:15:02 [Speaker 3]
Okay.
00:15:02 [Speaker 3]
We have accounts that dev teams have access to, like, dummy data.
00:15:07 [Speaker 3]
Okay.
00:15:07 [Speaker 3]
They could, you know, use against models and test it out.

00:15:11 [Speaker 3]
Okay.
00:15:11 [Speaker 3]
But, again, by default, everything is blocked.
00:15:13 [Speaker 3]
If they have something, then they'll reach out to us.
00:15:16 [Speaker 3]
We can vet what they're trying to do, get security involved, get the compliance teams involved, get everyone involved, and let's be on the same page as what's happening and make it a time bound access.

00:15:24 [Speaker 1]
Okay.

00:15:25 [Speaker 3]
And if it really works for the use case, then we can start to see what mitigation controls we can implement to make it maybe productionized or something like that.

00:15:33 [Speaker 1]
Yeah.

00:15:33 [Speaker 3]
But if it doesn't work out, then we just scrap it like this.

00:15:36 [Speaker 1]
Okay.
00:15:37 [Speaker 1]
Okay.

00:15:37 [Speaker 2]
Yeah.
00:15:38 [Speaker 2]
We we have pretty strong controls around our data as you would imagine.
00:15:40 [Speaker 2]
Yeah.

00:15:41 [Speaker 1]
I can imagine.

00:15:42 [Speaker 2]
So what we're trying to do is enable folks to have, like, sort of sandbox y access to AI.
00:15:47 [Speaker 2]
Do whatever you feel because it doesn't have access to data.
00:15:50 [Speaker 2]
But when you wanna cross that threshold and read the data into the vector database or copy it or clone it or modify it or or provide it, then you have to interface with the platform team and with information security and things like that.
00:16:01 [Speaker 2]
So really, it's data engineering in a funny hat.
00:16:05 [Speaker 2]
That's literally one of our slides from our talk.

00:16:08 [Speaker 1]
Data engineering in a funny hat.
00:16:09 [Speaker 1]
I love it.
00:16:10 [Speaker 1]
And and so, like, along those lines of experimentation, because I imagine this is a lot more in your camp with the platform engineering teams that are building new products and whatnot.
00:16:19 [Speaker 1]
Is that is what I described kind of an accurate picture where you're going through, like, different use cases, maybe different models, different providers, different testing?

00:16:27 [Speaker 3]
Mhmm.

00:16:27 [Speaker 2]
Absolutely.
00:16:28 [Speaker 2]
Okay.
00:16:29 [Speaker 2]
I am trying to pick which things to do next, and I have, I think, 15 people offering me 25 different projects.
00:16:36 [Speaker 2]
So I have no idea what to work on next.
00:16:38 [Speaker 2]
And we're trying to figure out a system of how do we choose what to do.

00:16:42 [Speaker 2]
Yeah.
00:16:42 [Speaker 2]
There's this huge FOMO.
00:16:44 [Speaker 2]
I think it's basically FOMO at companies that if we pick the wrong things, we will be outcompeted.
00:16:49 [Speaker 2]
Yeah.
00:16:50 [Speaker 2]
Yeah.

00:16:50 [Speaker 2]
We need to choose and we need to implement and we need to secure today.
00:16:54 [Speaker 2]
Yeah.
00:16:54 [Speaker 2]
And that's that's really scary.
00:16:56 [Speaker 2]
People can make some very poor choices in that kind of high pressure environment.
00:16:59 [Speaker 2]
So we

00:16:59 [Speaker 1]
have to

00:17:01 [Speaker 2]
systematize it and put our heads together and choose which things can make the greatest impact with the least amount of risk.
00:17:07 [Speaker 2]
Yeah.
00:17:08 [Speaker 2]
AI is very good at summarization.
00:17:09 [Speaker 2]
It's very bad at novel thinking.
00:17:11 [Speaker 2]
Yeah.

00:17:11 [Speaker 2]
We have to be very careful that it doesn't actually make health care choices, but maybe it just supplements our providers.
00:17:16 [Speaker 2]
So there's a lot there to to be concerned about no matter your industry.

00:17:20 [Speaker 1]
Yeah.
00:17:20 [Speaker 1]
The this FOMO point that you raised, I really think is super relevant to think about right now, and it really is this kind of, like, not not so much your team, but maybe, like, more your company at the executive level.
00:17:33 [Speaker 1]
There's this fear that, like, okay, if we don't get there, our competitors are going to get there.
00:17:37 [Speaker 1]
And if they get there better, faster, cheaper, you know, they can undercut us on price, or they can outcompete us on service offering, or whatever that thing might be.
00:17:47 [Speaker 1]
And I just see this across, like, so many industries right now where, you know, like, you have to use AI.

00:17:53 [Speaker 1]
Right?

00:17:54 [Speaker 2]
Yeah.

00:17:55 [Speaker 1]
And it just is, it's, again, a little bit like cloud was ten years ago where a lot of organizations were like, we're gonna get ahead because we're gonna be more scalable or we can experiment more quickly and so on.
00:18:08 [Speaker 1]
So I'm curious.
00:18:09 [Speaker 1]
You know, you guys have this partnership between platform engineering and governance, Obviously, it's been working for a number of years now, which is, by the way, awesome.
00:18:16 [Speaker 1]
And I would say in my observation, not super common, like not super common that you do have a good partnership between the two organizations.
00:18:26 [Speaker 1]
One of the things I can't remember which of you said it earlier but, like, this department of no, I think, is much more the common thread in many organizations, where there's a lot of friction between those teams.

00:18:36 [Speaker 1]
I'd love if you could share from your perspective some of the things that you've learned in working with each other.
00:18:40 [Speaker 1]
Obviously, it seems like you two have a good, you know, professional working relationship at one on one, which is great.
00:18:45 [Speaker 1]
But, organizationally, how did you guys get to this point, with your teams and with kinda within the broader organization?

00:18:52 [Speaker 2]
Totally.
00:18:53 [Speaker 2]
I think our dynamic personally is really fun and has been very enriching for me, where Sai's perspective of platform engineering is to integrate with all these teams

00:19:03 [Speaker 3]
Right.

00:19:04 [Speaker 1]
And

00:19:04 [Speaker 2]
see how they're doing it and kind of bring home the collective knowledge.

00:19:07 [Speaker 1]
Yep.

00:19:08 [Speaker 2]
I I find that fascinating.
00:19:09 [Speaker 2]
But he'll generally kind of collect these problem sets that we don't have solved.
00:19:13 [Speaker 2]
No one's done a good job solving this yet.
00:19:15 [Speaker 2]
Yep.
00:19:16 [Speaker 2]
And I try to pluck some of those out and go and solve them and then bring them back.

00:19:20 [Speaker 2]
And so platform engineering, in my head, it's the hub of the wheel.
00:19:24 [Speaker 2]
They should you should have a finger in every pie and be integrating with all these teams.
00:19:28 [Speaker 2]
And then you do need those problem solvers that are, you know, solving those intractable problems.

00:19:34 [Speaker 3]
Yep.

00:19:34 [Speaker 2]
Yeah.
00:19:35 [Speaker 2]
That's that's my perspective.
00:19:36 [Speaker 2]
Is that

00:19:36 [Speaker 3]
No.
00:19:37 [Speaker 3]
Look.
00:19:37 [Speaker 3]
Yeah.
00:19:38 [Speaker 3]
And I think one of the organization objectives that we have other than, you know, design principles and stuff is to ensure that developers don't have any velocity breakers.
00:19:46 [Speaker 3]
Yeah.

00:19:46 [Speaker 3]
So saying no to them up front might be a good thing from security.
00:19:50 [Speaker 3]
I totally understand that.
00:19:51 [Speaker 3]
But but also it'll be a kind of velocity killer for them.

00:19:54 [Speaker 1]
For sure.

00:19:55 [Speaker 3]
So understanding their use case, trying to work with them to see how we can mitigate any potential problems Yeah.
00:20:01 [Speaker 3]
I think has actually worked really well for us.
00:20:03 [Speaker 3]
Yeah.
00:20:04 [Speaker 3]
It's very easy to just say no.
00:20:05 [Speaker 3]
Right?

00:20:05 [Speaker 3]
And then you say stop, and then they'll just okay.
00:20:07 [Speaker 3]
I can't do this.
00:20:08 [Speaker 3]
I'll go a different way and and start to incur a lot of tech debt and stuff like that.
00:20:11 [Speaker 3]
So Yeah.
00:20:12 [Speaker 3]
I think asking those questions and then understanding the use cases and then helping them with the whole process has really been helpful for us as a team as well.

00:20:22 [Speaker 3]
Yeah.

00:20:23 [Speaker 1]
That's fantastic.
00:20:24 [Speaker 1]
Well, in the last couple of minutes that we have here, I'm just curious.
00:20:27 [Speaker 1]
Is there anything else from your talk that you would think is, like, you know, high-value, high-importance points that we haven't hit on in today's conversation that you'd like to share with the audience?
00:20:38 [Speaker 1]
Otherwise, we'll definitely get a a link.
00:20:40 [Speaker 1]
I think this will probably go up before the talk is live on YouTube, so we'll definitely link back to that talk.

00:20:46 [Speaker 1]
But anything else that you'd wanna share with the audience just kind of in wrapping up today's conversation?

00:20:50 [Speaker 2]
Perfect.
00:20:51 [Speaker 2]
One of the things that I love about Azure, that I wish AWS had caught up with, and we're gonna talk about this more in the talk Okay.
00:20:58 [Speaker 2]
Is when you deploy an AI model into Azure, you pick a guardrail and it's just intrinsically packaged.

00:21:04 [Speaker 3]
Okay.
00:21:04 [Speaker 3]
If you

00:21:04 [Speaker 2]
wanna use it, you have to use a deployment that has a guardrail.
00:21:07 [Speaker 2]
With AWS Bedrock serverless models, that's not true.

00:21:11 [Speaker 1]
It's part

00:21:12 [Speaker 2]
of the API call.

00:21:13 [Speaker 1]
Yeah.
00:21:13 [Speaker 1]
The guardrail is kind of decoupled from the model itself.
00:21:15 [Speaker 1]
Yeah.

00:21:16 [Speaker 3]
If you

00:21:16 [Speaker 2]
wanna turn off security, you can.

00:21:18 [Speaker 1]
That

00:21:18 [Speaker 2]
shouldn't be something users are able to do.

00:21:20 [Speaker 1]
Interesting.

00:21:20 [Speaker 2]
There's no config rule or resource policy you can set that'll control that to my knowledge because it's not a resource.
00:21:27 [Speaker 2]
So there are no resource policies.
00:21:29 [Speaker 2]
Yeah.
00:21:29 [Speaker 2]
That's pretty concerning.
00:21:30 [Speaker 2]
So it's just something to be very cautious about with Bedrock.

00:21:33 [Speaker 2]
I'm a big fan.
00:21:34 [Speaker 2]
I've built a lot of things on it, but there are some weak points there in terms of auditability and securability, to watch out for.
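
(The gap Kyler describes is that on Bedrock the guardrail travels with the runtime request rather than with the model deployment. A minimal sketch with boto3's Converse API is below; the guardrail identifier and version are placeholders, and nothing forces a caller to pass guardrailConfig at all, which is exactly the concern.)

```python
import boto3

runtime = boto3.client("bedrock-runtime")

# On Bedrock, the guardrail is an argument to the runtime call, not a property
# of the model deployment -- a caller can simply omit guardrailConfig entirely.
response = runtime.converse(
    modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
    messages=[{"role": "user", "content": [{"text": "Summarize this patient note..."}]}],
    guardrailConfig={
        "guardrailIdentifier": "gr-placeholder-id",  # hypothetical guardrail ID
        "guardrailVersion": "1",
    },
)
print(response["output"]["message"]["content"][0]["text"])
```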

00:21:40 [Speaker 1]
Interesting.
00:21:41 [Speaker 1]
I did not know that about Azure.
00:21:42 [Speaker 1]
Thank you for sharing.

00:21:44 [Speaker 3]
No.
00:21:44 [Speaker 3]
I didn't know that either.
00:21:46 [Speaker 3]
I already know who's saying that stuff.
00:21:47 [Speaker 3]
No.
00:21:47 [Speaker 3]
I I I spoke on yeah.

00:21:48 [Speaker 3]
I think the three things which I spoke about earlier are the three key takeaways that we'll talk about.
00:21:53 [Speaker 3]
Yeah.
00:21:53 [Speaker 3]
It'll sound very simple and easy, and it's, like, very obvious, like, how it should be done.
00:21:59 [Speaker 3]
But, again, a lot of the platform defaults don't enforce them.
00:22:03 [Speaker 3]
Yeah.

00:22:03 [Speaker 3]
They really tend to keep cost low.
00:22:04 [Speaker 3]
Because all the other options that you could select make it very expensive to run models or do anything else.
00:22:11 [Speaker 3]
Yeah.
00:22:11 [Speaker 3]
So I think that's the big call that I always want to make is, like, they're available, but they're not your defaults.
00:22:16 [Speaker 3]
Yeah.

00:22:16 [Speaker 3]
And you have to really pay attention to, like, ensure that you select the correct things Yeah.
00:22:20 [Speaker 3]
And then have the right access setup.
00:22:22 [Speaker 3]
Yeah.

00:22:22 [Speaker 1]
This is also such a valid point.
00:22:24 [Speaker 1]
I think, like, right now, you know, we talked about, hey.
00:22:26 [Speaker 1]
Like, there's this FOMO of the of companies, and, you know, if you don't adopt AI, your competitor is going to and they're gonna outcompete you.
00:22:33 [Speaker 1]
But the same is true on the cloud provider side.
00:22:36 [Speaker 1]
Right?

00:22:36 [Speaker 1]
You know, they're they're in this race, this, you know, massive arms race, like billions of dollars being spent on AI services.
00:22:43 [Speaker 1]
And to that point, I would say that the priorities for the cloud providers, at least my opinion and based on what I've observed, is making it easy for you to use AI, not necessarily making it easy for you to secure AI adoption.
00:22:56 [Speaker 1]
Right?
00:22:56 [Speaker 1]
And so secure defaults aren't there, and, you know, let's say, putting good rules around data access and data access policies.
00:23:04 [Speaker 1]
Like, they're not there by default just as you were saying.

00:23:07 [Speaker 3]
Yeah.
00:23:07 [Speaker 3]
It's not by default, and you have to actually dig through a lot of documentation to understand what exactly that option means.
00:23:13 [Speaker 3]
Yeah.
00:23:13 [Speaker 3]
Like, the SKU "Global" doesn't really tell you that, hey.
00:23:17 [Speaker 3]
This is going to go anywhere.

00:23:18 [Speaker 3]
It's Yeah.
00:23:19 [Speaker 3]
It just feels like the option feels so different from what actually is going to happen.

00:23:24 [Speaker 1]
Yeah.

00:23:25 [Speaker 3]
And that you only understand once you start to dig through documentation and reread, and then went, whoa.
00:23:30 [Speaker 3]
This is not the right option.
00:23:31 [Speaker 3]
And maybe they should do Standard or something else.
00:23:33 [Speaker 3]
You know?
00:23:34 [Speaker 3]
Yeah.

00:23:34 [Speaker 3]
That's the risk which I'm, at times, very worried about.
00:23:38 [Speaker 3]
So Yeah.
00:23:39 [Speaker 3]
On Azure, at least, they have policies to just block it, which we did.
00:23:42 [Speaker 3]
But, yeah, on AWS.
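
(Extending the earlier audit sketch: the same deployment listing can flag SKUs that route requests globally rather than to a fixed geography, which is the deployment-type trap being described. The set of "regional" SKU names below is an assumption to verify against current Azure OpenAI deployment-type documentation, not an authoritative list, and `client` is the management client defined in the earlier sketch.)

```python
# SKU names assumed to keep processing pinned to a geography -- verify these
# against the Azure OpenAI docs before relying on them.
REGIONAL_SKUS = {"Standard", "ProvisionedManaged", "DataZoneStandard"}

for account in client.accounts.list():
    resource_group = account.id.split("/")[4]
    for deployment in client.deployments.list(resource_group, account.name):
        sku = deployment.sku.name if deployment.sku else "unknown"
        if sku not in REGIONAL_SKUS:
            print(f"REVIEW: {account.name}/{deployment.name} uses SKU '{sku}' "
                  f"-- data may be processed outside {account.location}")
```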

00:23:46 [Speaker 2]
Yeah.
00:23:46 [Speaker 2]
It's just more difficult to block specific models when they don't have a resource.

00:23:50 [Speaker 1]
Yeah.

00:23:51 [Speaker 2]
It's just difficult to secure it.
00:23:52 [Speaker 2]
And I hope AWS catches up there.
00:23:54 [Speaker 2]
I agree they've prioritized making it easy to move quickly.
00:23:57 [Speaker 2]
Yep.
00:23:57 [Speaker 2]
But, unfortunately, you can sometimes break stuff.

00:24:00 [Speaker 1]
Absolutely.
00:24:01 [Speaker 1]
Absolutely.
00:24:02 [Speaker 1]
Well, Kyler and Sai, thank you so much for taking the time to join us here on the sidelines.
00:24:05 [Speaker 1]
It's been great to, well, a, step away for a minute here and catch our breath, but, b, just get a chance to talk about this.
00:24:12 [Speaker 1]
This is such a relevant problem for organizations right now as everybody is in this kind of FOMO race to adopt AI and and make the most of it.

00:24:20 [Speaker 1]
So your lessons learned, I think, are are really valuable lessons learned for the audience.
00:24:25 [Speaker 1]
We will have links to the talk from fwd:cloudsec.
00:24:28 [Speaker 1]
We'll also have links to Kyler and Sai.
00:24:30 [Speaker 1]
Anything that they wanna share, we'll get those into the show notes.
00:24:33 [Speaker 1]
Talk to you next time.

00:24:34 [Speaker 1]
Bye bye.

Protect your AI Innovation

See how FireTail can help you to discover AI & shadow AI use, analyze what data is being sent out and check for data leaks & compliance. Request a demo today.