Modern Cyber with Jeremy Snyder - Episode 52

Breach Series 2 - Mike McCabe of Cloud Security Partners

The second episode in Modern Cyber’s Breach Series features guest Mike McCabe, CEO of Cloud Security Partners, sharing a real-world security incident that unfolded due to a compromised G Suite admin account.


Podcast Transcript

Alright. Welcome back to another episode of Modern Cyber. We are coming to you today with another episode from our Breach series. And just as a reminder for people, this is a series of real world breaches typically recorded years after the event, years after it would have happened, typically anonymized in terms of the organization breached. But the goal of this series is to really educate and share experiences and lessons learned, you know, from the trenches, so to speak.

You know, this is real world stories of things that have happened. I'm delighted to be joined today by somebody who has a long history in cybersecurity, who's gonna be sharing about one of his breaches. I don't know if it's the only one. We probably won't get into any others, but we've got the one today that I really wanna talk about. I'm joined today by Michael McCabe.

Michael is the founder of Cloud Security Partners, having started the company in 2017 with the goal of creating and implementing security solutions for a select number of clients around, I presume, cloud adoption. I think that's probably a fair assumption to make here. Yeah. But since then, Cloud Security Partners has grown to become a recognized leader in cloud and application security.

Michael's focus on cloud native software security and application security enables him to help his customers navigate security challenges with unique and client-tailored solutions. Michael, thank you so much for taking the time to join us today on Modern Cyber. Yeah. Thanks for having me. That was quite the intro.

I, thank you for writing all that up for me. You know? Exactly. Happy to do so. Happy to do so.

Well well, let's go back to, you know, kind of the event in question. Set the scene for us. What type of organization? What's the general time frame? We don't need specifics on the year or month date or anything like that.

But help us understand kind of the context in which this breach happened. Yeah. I can kinda tell you the story. It was very early on in my security career, I was just kind of a security analyst. We were at a start up, pretty well funded.

Not, you know... okay, it was no longer 20 folks in a garage. It was pretty well funded, but still very much in the start up mindset, both from a growth and a security perspective. So kind of the classic move fast and break things mindset?

Exactly. Yeah. So I was doing a lot of application security, some cloud security. It was early on in the cloud space, so we only had a small footprint. But I remember the actual day when we found out what was going on.

It was kinda funny, because a dev pinged me on Campfire. This was back before Slack existed. Yeah. Yeah. And it was like, hey, we saw a couple database spikes.

Do you mind taking a look at this? And they sent me over the graphs from the observability tool they were using. Okay. And it was like two days after these had popped up. So it wasn't like happening right that minute.

Right? Okay. Wait, who was doing a select everything from the users table on the local machine? And, basically, from there, we spun up a whole incident response and got lawyers involved, outside counsel, all those kinds of things.

But I remember that very clearly, just that direct ping of, hey, do you mind taking a look at this? Yeah. And on that database spike,

I'm curious, like, was it the fact that you had a bunch of select star statements, or was it the volume of data returned by the queries, or was it, like, I don't know, a CPU spike on the database side from these very, you know, kind of data-intense queries? Yeah. Yeah. The attacker was not very smart, to just select star

on the user table that had millions of users. And, basically, they saw a CPU spike, which slowed down responses on the database. Okay. Which slowed down web requests. They saw that. They tracked it back down to the SQL query.

And then, yeah, a couple days later, they let us know about that happening. So that was the first indicator we saw that anything was wrong. Okay.
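As an aside, a very simple version of the kind of detection that would flag this sort of activity might look like the sketch below: it scans query-log entries for unbounded select-star reads against sensitive tables. The table names, log format, and sample queries are assumptions for illustration, not details from this incident.

```python
# Minimal sketch: flag greedy, unbounded reads against sensitive tables in a
# (hypothetical) query log. Table names and sample queries are illustrative.
import re

SENSITIVE_TABLES = {"users", "payments"}

def is_greedy_read(query: str) -> bool:
    """Flag SELECT * against a sensitive table with no WHERE or LIMIT clause."""
    q = query.strip().lower()
    match = re.match(r"select\s+\*\s+from\s+([a-z_\.]+)", q)
    if not match:
        return False
    table = match.group(1).split(".")[-1]
    unbounded = " where " not in q and " limit " not in q
    return table in SENSITIVE_TABLES and unbounded

if __name__ == "__main__":
    sample_log = [
        "SELECT id, email FROM users WHERE id = 42",
        "SELECT * FROM users",                      # the kind of query that spikes CPU
        "SELECT * FROM orders LIMIT 100",
    ]
    for query in sample_log:
        if is_greedy_read(query):
            print("ALERT: greedy read detected ->", query)
```

Something this small would obviously not catch a careful attacker, but it shows how a detection written on top of existing query or observability data can turn a "weird spike" into an alert within minutes rather than days.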

And so, you know, you've got a database server that is compromised. But was the database server directly exposed, or did they have to get there somehow? Yeah. That was the fun thing. They actually chained a few different things together to get access. Thankfully, the production database was not on the Internet.

Okay. So, yeah, after many months of IR log review, trying to figure out kinda what happened, it got tracked back to a phishing attack to start, of course. Yeah. Yeah.

A phishing attack on a G Suite admin, which is probably the worst person you can phish, or the best if you're an attacker. No MFA, and that person basically added themselves to a bunch of groups in G Suite, including one that had access to Jenkins. Okay. From Jenkins, they then got RCE on the underlying server that Jenkins was running on, and then pulled a bunch of credentials from that, including the database credential. So, basically, they went from G Suite, sorry,

they had to get on the VPN. So G Suite to VPN to Jenkins to database server. Yeah. Did you figure out how they figured out your VPN access point? That's a good question.

I think it was in emails. Okay. So, I think it was just found through emails, and they set up a user. So a total failure of not having MFA on everything. But this was, like, fifteen years ago.

So, yeah. Then we saw that they tried a lot of different things to kind of poke and prod around, but they found that Jenkins server, and Jenkins is such a good target. That was on the Internet, and it's such a good target for just getting tons of credentials, getting remote code execution on the actual server.

I mean... So once they found that, source code? Yeah. So source code with credentials in it. Once they got that, they basically had the keys to the kingdom. They were pretty stealthy up until that point, and then they just decided to do a select star on users.

Yeah. There was a whole public breach notification, and, yeah, it was a well-known case. But we don't actually think they got all the data, because they did such a greedy select. We think they actually dumped, like, 20 million records of, I think, a 200 or 300 million record database.

So they weren't the smartest in the end, but they got enough to cause a huge headache. Yeah. Were they issuing the queries from the Jenkins server to the database server? So, like, they were sitting kind of, you know, logically on the Jenkins server at the time that they were issuing the queries? Okay.

And kind of writing the output to some local files on disk on the Jenkins server, then shipping it off of that? Yeah. Okay. And then from the Jenkins server, because the Jenkins server was on the Internet, they were either able to, I don't know, like, SCP it up to a server of theirs, or, you know, put it into Dropbox or something, or who knows what they did with it.

Right? Yeah. I think they were SCPing it. That part was fuzzy. But, yeah, I think they were just SCPing it off out to... Okay.

Their network. But, yep, it was a few different failures that led to that. But it was an interesting one, in that everything up until the database query was much harder to see. Yeah.

You know, none of it was that suspect until they started just running huge database queries. Yeah. Well, there's a couple things you said I wanna come back to, because I wanna dig into two aspects of it. First is just going into, kind of, let's say, the forensics phase of the whole incident and the incident response. You said earlier, you know, months later, you figured this out.

Did it really take months from that, you know, kind of database spike until you actually pieced together, oh, okay, this is how they got in, these are the systems they got into, etcetera? It took a couple months to get the full insight into everything that they had done. Okay.

We were able to track back the activity on the server that accessed the database pretty quickly, and then go back through to, like, VPN connections and then G Suite. But to really see everything they did during the time that they had access, which was a decent amount of time, it wasn't, like, one or two days. It was a decent amount of time, one or two weeks.

Yeah. Yeah. So it took a little time to get the full picture of that and understand what other data they were trying to access. So, yeah, understanding the initial compromise probably only took a week or two.

But, okay, yeah, the external IR team took months to kind of comb through everything to figure out all the details of what they accessed. Yeah. You know, given the time frame that you've kind of hinted at in this, you mentioned that there were a number of mistakes made.

But I'm hearing you tell the story, and I'm like, actually, some things were going well for you. Because, for instance, observability, I would say, you know, at that time frame, if we're thinking, let's say, fifteen years back, around the twenty-ten-ish time frame, I'm not sure that observability was, you know, kind of a widely adopted solution at that point in time. So being able to even pick up on a database spike like that, I think a lot of organizations wouldn't have been in a good position to do that. So, you know, I'd say, like, kudos in the sense that you were able to actually do it, and, by the way, also actually looked at what was happening. I think it's also true that a lot of organizations have observability tools and collect the data.

But how many are, let's say, actively writing or running detections on top of that, or reviewing the logs and things like that? But in your opinion, you know, what were some of those mistakes made, and what did you kinda learn from them? Yeah. No. I mean, fair point.

I will say this. The tech team we worked with, the developers, were very forward thinking, very on, like, the cutting edge for a lot of things. But, typical start up, security is always, you know, secondary to kinda growing and moving fast. Yep. I mean, I think the big things were, MFA wasn't nearly as prominent back then.

That would have saved a huge amount of headache. But I think the other piece was having, like, internal services like Jenkins on the Internet. Not having that on the Internet would have saved a huge amount of pain. Yep. Even if they had gotten, you know, onto the VPN or onto G Suite, they could probably still do quite a bit, but it would have made it a little harder for them to get access to things so quickly.

You know, something like ephemeral credentials, or not giving Jenkins admin access to things, would be, yeah, another big control that we should have had in place. But, again, it was a decent amount of time ago, and we've improved quite a bit in the security space. So those concepts were not nearly as popular back then, yeah,

as they are now. Look. I think, at best, at that time frame, if you're talking about, you know, system A needs to be able to access system B for whatever system process purposes, you were probably talking about storing credentials in, I don't know, at best, like, a hidden file in a hidden folder, where you'd go chmod it and try to strip the permissions so that only, you know, daemon processes can access the file to take out the credentials. We didn't have things like AWS Secrets Manager at that time frame, where you could have really externalized these and put them in, you know, kind of hard-to-access environment variables or whatnot.
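For anyone mapping that onto today's tooling, a minimal sketch of the externalized approach might look like the following, assuming an AWS environment with boto3 configured; the secret name, region, and JSON shape are illustrative, not details from the incident.

```python
# Minimal sketch: fetch a database credential at runtime from AWS Secrets Manager
# instead of reading it from a chmod-restricted file on the build server.
# The secret name, region, and JSON layout below are illustrative assumptions.
import json
import boto3

def get_db_credentials(secret_name: str = "prod/app/db", region: str = "us-east-1") -> dict:
    client = boto3.client("secretsmanager", region_name=region)
    response = client.get_secret_value(SecretId=secret_name)
    # SecretString holds a JSON document such as {"username": "...", "password": "..."}
    return json.loads(response["SecretString"])

if __name__ == "__main__":
    creds = get_db_credentials()
    print("Fetched credential for user:", creds.get("username"))
```

The point is less the specific service and more the property: the credential lives outside the build host, access to it is logged, and it can be rotated without touching source code or disk.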

Yeah. So it's really interesting. Do you also think about kind of the network segmentation aspect? Obviously, you know, not having Jenkins on the Internet is probably a good practice. But, like, do you also think about, hey, you know, there's no reason that a development build server should be able to access a production database or should be in the same subnet?

Right? Yeah. No. For sure. Yeah.

It was a pretty flat network, obviously. So, yeah, that would have been a very good control, to keep production, dev, you know, QA and deployment tools kind of separated. That would definitely have been a good thing. But, yeah, that was not even on the road map, getting that kind of segmentation in place back then. Yeah.
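As a rough illustration of that segmentation idea (not how this environment was actually configured), here is a hedged boto3 sketch that locks a production database's ingress down to the application tier's security group only, so a build server on a flat network could no longer reach it; the security group IDs, port, and region are hypothetical placeholders.

```python
# Hedged sketch: restrict a production database so only the app tier's security
# group can reach it, rather than anything on a flat network (e.g. a Jenkins box).
# All IDs, the port, and the region below are hypothetical placeholders.
import boto3

def lock_db_to_app_tier(db_sg_id: str, app_sg_id: str, port: int = 5432, region: str = "us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    ec2.authorize_security_group_ingress(
        GroupId=db_sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            # Allow the app tier's security group only; build/dev hosts get no rule.
            "UserIdGroupPairs": [{"GroupId": app_sg_id}],
        }],
    )

if __name__ == "__main__":
    lock_db_to_app_tier(db_sg_id="sg-0000dbexample", app_sg_id="sg-0000appexample")
```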

And I guess, just for you, as somebody who has obviously gone on to, you know, fifteen-plus years of experience in cybersecurity, as you think back on this, what are, like, the core guiding lessons learned for you from this incident? Yeah. I mean, I think there were quite a few. The big one, on kind of the defensive side, is, like, build layers of controls. Some layers are always going to fail.

Someone's gonna be able to get around certain layers. But as we kinda talked about, you need multiple layers of controls. One thing probably wouldn't stop, you know, a very determined attacker. Right. I think the other piece, and we see this a lot when we do breach investigations, is not having the log insight into what's going on in your environment.

Being able to go from, it was a G Suite user, to VPN login, Jenkins login, then, yeah, database login. Like, yeah, I always tell clients, be able to trace through how a request goes through your environment. Like, you can say, okay, it's this, and it's that, and it's out, and I know the timestamp. That's a big piece as well.
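As a toy illustration of that "trace one identity across systems" idea, the sketch below merges events from several hypothetical log sources into a single timeline keyed on account and timestamp; the sources, field names, and sample entries are invented for illustration.

```python
# Toy sketch: merge events from several (hypothetical) log sources into one
# timeline for a single account, ordered by timestamp.
from datetime import datetime

def build_timeline(account: str, *sources):
    """Each source is a list of dicts like {"ts": ISO timestamp, "account": ..., "event": ...}."""
    events = [e for src in sources for e in src if e["account"] == account]
    return sorted(events, key=lambda e: datetime.fromisoformat(e["ts"]))

if __name__ == "__main__":
    # Invented sample data, loosely mirroring the chain described in the episode.
    gsuite = [{"ts": "2010-06-01T09:02:00", "account": "admin@corp", "event": "G Suite group change"}]
    vpn = [{"ts": "2010-06-01T09:40:00", "account": "admin@corp", "event": "VPN login"}]
    jenkins = [{"ts": "2010-06-01T10:05:00", "account": "admin@corp", "event": "Jenkins login"}]
    db = [{"ts": "2010-06-03T02:17:00", "account": "admin@corp", "event": "SELECT * FROM users"}]
    for e in build_timeline("admin@corp", gsuite, vpn, jenkins, db):
        print(e["ts"], e["event"])
```

In practice this correlation lives in a SIEM or log pipeline rather than a script, but the prerequisite is the same one Mike describes: every system in the chain has to be emitting logs with a consistent identity and timestamp in the first place.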

And I think the last thing was, when you're a technical security person, you're just focused on, like, you're focused on hacking, you're focused on vulnerabilities, you're focused on that kind of stuff. But when something like this happens, you're talking to a million different teams. It's not just the security team that is coordinating a breach notification and the reaction.

We had state attorneys, you know, calling all the time to get the lowdown on what happened and the notification. So knowing all those people, your legal team, is really important, your client teams, even your marketing team, who's gonna do, you know, a PR notice to say what happened and kind of shape the language around it, not to lie, but to say what happened in the most, you know, positive way, or least negative way, I should say. Yeah. So knowing all those folks beforehand, so when you get thrown into a room and you're trying to figure out how to tackle this, you kinda know who does what and who is responsible for what and who you can kinda put your opinion to, is really helpful as well. Yeah.

Yeah. Quite a bit there from that. Yeah. Yeah. Totally.

On that last point in particular, I would say, like, it's a lesson that, the more I talk to senior leaders who run operational teams, whether it's a SOC, whether it's, you know, security for a larger enterprise organization or whatnot, the importance of that point in particular gets brought up again and again. And not only internally. Right? Like, obviously, internally, you hear about the, hey, you know, you need to test your plans, you need to tabletop them.

You need to run through them. You need to make contingencies for, when is Mike out of office and this thing happens, you know, Mike's on vacation, who's the other person? Having multiple people kind of trained up on those things. But then externally, I hear about this too, and to your point about, let's say, talking to counsel, you know, I've talked to a number of people at the FBI and FBI cyber divisions, and they always say, like, hey, reach out to us, build a relationship. And once you're a bit larger organization, kind of like you mentioned, an ecommerce company that's a little bit past that phase and is into the hundreds of millions of customers kind of range, you become a very lucrative target for, you know, criminal gangs, right, who might want all that data.

And so, you know, it might make sense at that point of the company's life cycle to start building some of those relationships. Yeah. Thank you so much for sharing that with the audience. I guess, you know, for people who wanna learn more about you and your work, what's the best place for them to go check out? You can check us out on LinkedIn, Cloud Security Partners, and at cloudsecuritypartners.com as well.

Pretty easy. Cloudsecuritypartners.com. Well, Mike McCabe, thank you so much for joining us on this episode of the Modern Cyber Breach series. We look forward to hearing from more guests on their breaches and lessons learned. We'll talk to you next time.

Bye bye.
