In this Breach Series episode of Modern Cyber, Jeremy speaks with Adam Burns, CEO of BlackVeil, about a real-world ransomware incident that struck one of Adam's MSP clients. Adam shares how the breach was initially detected, the role phishing played in the attack, and the team's recovery efforts—including narrowly saving the client's backups. The conversation also touches on CrowdStrike's 2024 outage and key lessons learned from both incidents, including the importance of fundamentals like email security, air-gapped backups, and response readiness.
About Adam Burns
Adam Burns is the CEO of BlackVeil, a New Zealand-based company focused on simplifying email security for businesses of all sizes. With a career rooted in the MSP space, Adam has worked across support, engineering, and project delivery roles. Drawing on years of experience responding to cybersecurity incidents, he now helps organizations protect against common vulnerabilities—especially phishing and email-based threats.
BlackVeil Website: https://www.blackveil.co.nz
All right. Welcome back to another episode of Modern Cyber. We are again back into our breach series where we go into conversations with people who have been there, done that, so to speak, and who have actually lived through some of the pain, some of the suffering that it is to go through a data breach or a data compromise event, incident, however you wanna phrase it. But, you know, we're talking to people, and again, a reminder, the purpose of this series is not to kind of name and shame. No.
No. In fact, quite the contrary. The purpose is to shed light on how these things actually play out in the real world and to share real world learnings and experiences. We've got an interesting conversation teed up today with somebody who comes from the MSP space, which is again a space that I don't know a ton about myself, but I know a lot of our audience is either in that space or adjacent to that space and has learned a lot. So I am thrilled to be joined today by Adam Burns, the CEO at BlackVeil.
Adam has spent his entire career in the MSP space starting out as a service coordinator, help desk, modern workplace, project delivery, all of the kind of stuff that has to get done to make these services work. It's really a way to cut your teeth, learn, work your way up through everything. Working with MSPs, sorry, SMBs all the way up through the enterprise. Adam, great background. I've done some of that stuff myself.
Thank you so much for taking the time to join us today on Modern Cyber. Thanks for having me, Jeremy. And I'm really looking forward to chatting and hopefully shedding some light on how these things play out in the real world. Yeah.
Yeah. So let's get into the conversation. You know, on those lines, like I tell everybody, the first question we always ask is: give us a little bit of context. What type of organization were you working at at the time? What was the time frame for the incident that we're talking about?
And what were some of your first indicators? Okay. So, yeah, at the time, I was working for a managed service provider. We had quite a number of customers. This particular customer was quite large.
They basically got hit with a phishing attack. Okay. And we worked out that they'd been in the environment for at least a few weeks before they actually chose to detonate. Okay. And the way that we actually noticed it was all their servers and desktops just started going offline.
We were getting all sorts of alerts into our ticketing system. Okay. Then obviously, they phoned the help desk and said, hey, what's going on? All our machines are just turning off around us.
And it was at that point that we realized, you know, something's really wrong. And it just so happened that one of our other engineers was sitting on the backup server at the time. And, luckily, he saw what was happening and switched it off before that got hit. Yeah. Well, let's talk about that for a second because there's a couple of things that you said I wanna just make sure I understand correctly.
So you mentioned that you start getting these tickets coming in, and these are automated tickets coming off of the systems before the client has phoned into the help desk. Right? Yeah. It was a whole heap of offline alerts for the servers. Yeah.
Okay. And so did you have something from, let's say, from your side as the MSP monitoring their environments, doing, like, ping checks or some kind of system health checks? Yeah. It was just an RMM tool. So, obviously, it sends us an alert when something goes offline and has been offline for more than five minutes.
So we started getting, yeah, they had about sixty, seventy servers. So, yeah, we just started getting those firing through. And, obviously, that's not normal. It's either a power outage or something really bad's happened.
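To make that alerting mechanism concrete, here is a minimal sketch, in Python, of the kind of offline-threshold check an RMM-style monitor performs. The five-minute window matches what Adam describes; the data structure and function names are hypothetical and not taken from any particular RMM product.

```python
import time

OFFLINE_THRESHOLD_SECONDS = 5 * 60  # alert only once a host has been unreachable for five-plus minutes

def find_offline_hosts(last_seen: dict[str, float], now: float | None = None) -> list[str]:
    """Return hostnames whose most recent check-in is older than the threshold.

    last_seen maps hostname -> Unix timestamp of the last successful heartbeat/ping.
    """
    now = time.time() if now is None else now
    return [host for host, seen in last_seen.items()
            if now - seen > OFFLINE_THRESHOLD_SECONDS]

# Example: sixty or seventy servers all crossing the threshold within a few
# minutes of each other produces the flood of tickets described above, which
# points to either a site-wide outage or something much worse.
```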
Right. Right. But along those lines, I'm curious. Right? So RMM to me implies that you're remote.
Right? Remote management software. So Yeah. When you get this initial alert, you see these systems going offline. Okay.
There's an issue. How do you figure out from that point that they're you know, how do you go from offline to determining that there's been a phishing attack? Like, what were the steps to kind of trace through there and understand? Because I imagine getting access to those systems and then getting the server logs, that doesn't just magically happen. No.
We basically knew there was an issue when everything started going offline, obviously. We weren't too sure what at the time. Yeah. But just from some of the descriptions from the staff on-site at this customer of how it started and how everything just kinda cascaded, we assumed that it was a breach. And then we confirmed, yeah, after a little bit of digging, that there was a phishing email that had been clicked on.
And, yeah, literally everything on the network got hit. Okay. And, yeah, when you say everything on the network got hit, you know, did it sound very much like it was cascading from, you know, system A to system B to system C, that type of pattern?
Yeah. It went through the servers first, and then it did the desktops after that. Okay. Okay.
And so it doesn't sound to me like there was a lot of, let's say, like, network segmentation that was disconnecting desktops from servers. No. No. Nothing like that. No.
It was a flat network as well. So yeah. And was that, like, was that a conscious design decision, or was that just kind of, like, hey. We're a small company. We're not really that big a target.
Why would we need this complicated network topology? Yeah. I mean, this particular customer had been a customer of ours for some time, and we'd obviously alerted them to some issues with their setup. But they decided, for whatever reason, that they weren't gonna invest in technology at the time. Yep.
And, yeah, unfortunately, they learned quite an expensive lesson. Always, yeah, keep your tech up to date and make sure it's secure. Well, along those lines, you know, to try to think about keeping the tech up to date, you said something earlier that you realized they had kind of been on the systems for a little while. How did you figure that out?
We found, basically, one of their entry points. They'd left some evidence behind, some files that they'd been manipulating while they were connected. And we just saw the created date of that file. Yep. And that was, yeah, a couple of weeks prior to the detonation.
So they'd been in there digging about and looking for, well, whatever it was that they wanted to find, I guess. And so once they'd done their reconnaissance, they, yeah, hit detonate. And detonate in this case means triggering malware of some kind? Yes. Sorry.
Triggering the attack essentially, which, yeah, kicked off the malware and encrypted everything. Oh, so this is, like, you know, kind of ransomware-type malware. Yes? Yes. Yep.
Yep. Oh, okay. Okay. Sorry. I didn't understand that sooner when you were kind of going through it.
And so but, like, along those lines, was this one of the kind of more sophisticated packages that, you know, some of the EDR tools weren't detecting well at the time? Yes. This was not picked up at all. I can't remember the type of antivirus we were using at the time, or that customer was using, but it relied on definition files. It wasn't, you know, one of the smarter ones with machine learning, looking at patterns and whatnot.
So, yeah, even though their AV was up to date, it just didn't pick up on this particular kind of attack. Interesting. Yeah. Yeah. Yeah.
Yeah. Well, talk us through the response. You've done, you know, you've done your reconnaissance at this point to kind of understand, okay, you know, it was a phishing attack that gave the attacker system access. It happened a few weeks back.
You found some files. You kind of have an understanding about how the breach has happened up to this point. What were your steps from that point forward? So at that point, we rallied the troops and took basically a stocktake, right, of what actually was hit and what we actually had left to recover from, if anything. I mentioned earlier, luckily, one of my colleagues was on their backup server while this was going down.
And I kid you not, he saw files being encrypted as he was on the server, and he turned it off basically in the nick of time before it got to the backups. So we turned that back on with no network access, so it couldn't do anything. Okay. And basically restored everything pre-attack, and then we had to basically rebuild the servers. So we rebuilt the entire network from the ground up.
We kept the domain, but every server and every desktop was completely wiped and reset. And we built a brand new infrastructure for them, really. Brand new Citrix environment, new Exchange server, new domain controllers, the whole nine yards. Yeah. And then there was a team of about four of us working on it for a month.
This was a nationwide business as well. So, yeah, we were sending USB build keys to the offices around the country and getting the staff to rebuild everything for us as well because, obviously, we couldn't get right around the country ourselves. Yeah. So USB build keys, for people like me who haven't done this in a little while, this is effectively the equivalent of, like, machine images. Right?
Like, you know, you plug something like this in and reboot. Yeah. Okay. Yeah. Yeah.
So we basically sent each site a bunch of USB sticks with PC images on them and a sheet to follow. Wow. Yeah. And that sped the whole recovery process up because they have a lot of staff. They're all around the country.
So yeah. Very helpful. I mean, I was about to ask, like, what you're describing, this sounds like... well, actually, before we get into the human impact, I mean, real quick. Was there a ransom message? Did the client choose to pay the ransom?
How was that handled? Yeah. So about two weeks after the recovery was complete, the owner of the company received a ransom email, and it was a screenshot of their files. And they basically left the demand. I can't remember the exact figure, but they said if you don't pay the fee, we're gonna release your data onto the dark web.
Okay. They consulted us at that point. We, yeah, we just told them not to pay, as you should really. That's your, you know, last resort, paying them, because that just encourages them to keep doing it. And also, you never know what's in it. You know, once they release the data back to you, they could leave another payload in there and do it again.
So, yeah, we told them not to pay, and, yeah, it's basically just time lost. They recovered everything. But on that topic, like, the time lost sounds pretty substantial because, I mean, yeah, given that this is a nationwide customer that you're working with, they've got, I don't know, tens of locations and, you know, hundreds of employees that we're talking about. Like, you know, when you're sending out these USB keys, you've gotta prep them, you've gotta prep the instruction sheet, get everything out to everybody.
You know, what are we talking about? Like, two weeks of lost company time multiplied by x employees? Yeah. Roughly about two weeks they were down in some capacity, which really, it's not that bad if you consider, you know, the scale of the attack. Literally everything got hit.
So Yeah. Two weeks is probably not too bad in the grand scheme of things. It's funny you say that because, you know, I've talked to other people where, you know, two days is viewed as massive just because of, let's say, like, lost revenue or, you know, larger organization. Obviously, the amount of downtime is multiplied by the number of employees, locations, you know, services, etcetera. And we can easily be into the tens or hundreds of millions of dollars with larger organizations.
So, you know, but I totally get your perspective that two weeks, you know, as opposed to (a) going out of business or (b), let's say, like, terrible reputational and brand damage and things like that and loss of customer trust. Yeah. I get your perspective on that. Yeah. It was, I mean, yeah, two weeks does sound like a long time.
I guess the thing I didn't say was that a lot of their staff weren't computer users, so they could operate in some form or another. So it was essentially like management and head office that got hit. The rest of the team could do their work in some capacity, although quite limited. Yeah.
Because obviously, the type of industry they're in is quite heavily reliant on computers, but the staff themselves aren't. If you know what I mean? Got it. Yep. Yep.
Understood. Understood. Got it. Yeah. That provides a lot more context as to why, in the grand scheme of how you might look at it,
this is, like, yes, expensive on the one hand, but on the other hand, we're able to remain in business. We're able to keep operating. We're losing some productivity, but the general output of the organization isn't too severely impacted. Makes sense. Makes sense.
Yeah. I mean, it wasn't, you know, it wasn't a fun time for them, but they could still work. So yeah. Yeah. I mean, you know, as you think back on it, first of all, like, one of the first things that comes to my mind is, like, how lucky were you that you had a colleague sitting on a backup server at that point in time who noticed it and took the exact right action to take the system offline?
I mean, yeah. Have you ever heard of anyone having that, like, before? I haven't. That was, yeah, super, super lucky, man. Yep.
No. Yeah. And when you think about, let's say, like, lessons learned for the organization, what do you come away thinking, like, you know, that really taught me X? Basically, to never overlook the basics. I mean, you can spend a lot on fancy firewalls and EDR software and whatnot.
But if you haven't got the basic security protocols in place and tightened up, like email authentication, then you leave yourself open to phishing attacks and spoofing and things like that. So, yeah, the big takeaway for me from this was to make sure you've at least got the basics right. And when you talk about the basics on the email side, I mean, I hear a lot of things about email, and I'm no expert on email security. I hear about, like, DKIM and DMARC and SPF and all these different things on the DNS side and, of course, like, multi-factor authentication as far as access to the mailboxes and, you know, and then, yep, employee security awareness around, like, keep an eye out for phishing emails, what to look for, that kind of thing.
But, like, what else should we be thinking about with regards to email security? So yeah. Like you mentioned with SPF, DKIM, DMARC. A good example actually is Yahoo and Google are actually enforcing DMARC records now. So you gotta make sure you actually have one.
Otherwise, they'll just block your email straight out. But SPF, DKIM, DMARC all work in harmony together, and basically, it's like a list of servers you've deemed able to send emails on behalf of your domain. Right? So you configure those records with those entries and set them to a strict policy. That way, if anything sends mail on behalf of your domain that's not specifically listed in those records, the recipient should block that message. And that is how you avoid spoofing and phishing attacks, because the hacker then has to hijack one of those servers. Otherwise, they can't do it.
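As a rough illustration of what Adam describes, the sketch below (Python, using the dnspython library) looks up a domain's SPF and DMARC records and flags whether a strict policy is published. The example record values in the comments are hypothetical, and DKIM is left out because verifying it requires knowing the sender's selector, since the public key is published at <selector>._domainkey.<domain>.

```python
# pip install dnspython
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT strings published at a DNS name (empty list if none)."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

def check_email_auth(domain: str) -> None:
    # SPF: a TXT record on the domain listing the servers allowed to send for it,
    # e.g. "v=spf1 include:spf.protection.outlook.com -all" ("-all" means hard fail for anyone else).
    spf = [r for r in txt_records(domain) if r.lower().startswith("v=spf1")]
    # DMARC: a TXT record at _dmarc.<domain> telling receivers what to do on failure,
    # e.g. "v=DMARC1; p=reject; rua=mailto:dmarc@example.com" ("p=reject" is the strict policy).
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.lower().startswith("v=dmarc1")]

    print(domain)
    print("  SPF:  ", spf[0] if spf else "MISSING")
    print("  DMARC:", dmarc[0] if dmarc else "MISSING")
    if dmarc and "p=none" in dmarc[0].replace(" ", "").lower():
        print("  note: DMARC policy is p=none (monitor only), not a strict policy")

check_email_auth("example.com")
```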
Right. I mean, it does sound like we're moving kind of necessarily towards an Internet where, you know, the changes that the major providers, and when I say major providers, I mean major providers on the email side, which is like Google, Yahoo, Microsoft. Right?
Primarily. The changes that they make will have a trickle effect across the ecosystem. Right? Because if they change the rules and they say strict policy, strict enforcement, more or less, everybody's going to have to comply over some time period. Pretty much.
Yeah. You'll have to conform. Right? Because, you know, it's Google. You're gonna need to email Google at some point.
Right? So yeah. Yeah. And plus, you know, it's definitely in everybody's best interest to get these records configured properly. Otherwise, yeah.
You might experience something like this customer did. Yeah. Yeah. Yeah. Yeah.
And not to kinda shamelessly plug what you guys are working on over at BlackVeil, but I take it this exact set of circumstances around checking domains and checking email conformity with all of these policies is something that you guys are working on. Right? Correct. Yeah. We've seen an issue.
Like, I've been in the industry for a long time, and I've noticed that these issues are still apparent and very common. So, yeah, I've taken it upon myself to build a system to help people sort that out, because those three records, even though, you know, there's only three of them, they're quite confusing. They can be very confusing to even a senior engineer if you haven't spent a lot of time in the email space. Yeah.
But the power of having them configured correctly, it's, yeah, it's unmatched. Yeah. Awesome. Any other kind of lessons learned with regards to, let's say, like, you know, making the right level of investment in security products, not overlooking security in favor of, you know, cost-cutting measures, or anything around, let's say, the backups or whatnot that you'd wanna share with the audience?
Yeah. I guess the main learning for me from this particular issue was always air gap your backups. Don't have them on the domain. Don't have them on the same network as everything else. Yeah.
Yeah. Because if, you know, if you get hit, at least your backups are safe. So Yeah. Yeah. Sure, it's off-premise, but if it's off-premise, can people still get to it?
You gotta make sure that it's air gapped and only, you know, a particular couple of machines can get to that server or whatever it is. Yeah. Yeah. In the MSP world, do you follow the three-two-one backup rule? You know, three backup copies, two different media types, one off-site?
Well, we try to. But, yeah, you know, living in New Zealand, that's quite a small country, right? So people's investment in tech might be a little bit less than what you might be used to seeing.
Yeah. Yeah. Oh, yeah. Yeah. So, like, for example, a Microsoft best practice, you know, New Zealand is not gonna be conforming to that.
Most of the time, we kinda do our own thing. Yeah. You know what I mean? Yeah. Yeah.
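For readers unfamiliar with the three-two-one rule Jeremy mentions above, here is a minimal sketch that checks a backup inventory against it. The inventory, class, and field names are made up for the example and are not from any product discussed in the episode.

```python
from dataclasses import dataclass

@dataclass
class BackupCopy:
    location: str    # e.g. "on-prem NAS", "tape vault", "cloud object storage"
    media_type: str  # e.g. "disk", "tape", "cloud"
    offsite: bool    # stored away from the primary site?

def meets_3_2_1(copies: list[BackupCopy]) -> bool:
    """Three copies of the data, on at least two media types, with at least one off-site."""
    return (
        len(copies) >= 3
        and len({c.media_type for c in copies}) >= 2
        and any(c.offsite for c in copies)
    )

copies = [
    BackupCopy("production file server", "disk", offsite=False),  # the live data counts as copy one
    BackupCopy("air-gapped backup server", "disk", offsite=False),
    BackupCopy("cloud object storage", "cloud", offsite=True),
]
print(meets_3_2_1(copies))  # True
```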
Totally understood. Well, Adam, this has been really helpful. I've really enjoyed it. We've got, like, just one or two minutes left, and I'd love to hear, because I know you've also been through kind of the madness that was CrowdStrike last year and what went on there. I wonder if you could just, like, briefly share your experience there and, you know, some of the actions that you took on that.
Oh, man. Yeah. I'd actually finished for the day when it all started kicking off, and I was at a shop. A customer gave me a call and said, are you doing any work, Adam? All of our servers have gone down.
I was like, no. No. I'm actually not even at my computer. He's like, okay. Well, everything's going down.
I said, okay. Well, I'll head home and see what's going on. So I headed home, jumped on my machine, and I could see in our internal chat at work, a few other customers' machines were falling over. They were all kind of, like, in the same area, so we thought initially it was a power outage.
Yep. We quickly figured out, no, it was CrowdStrike. And, yeah, basically every customer of ours got hit by it. Luckily, I had the quick thinking to jump on our RMM system before it got hit by that CrowdStrike update and switched it off.
Uninstalled CrowdStrike and switched it off before that update rolled out to it. So we were able to get back online and start assisting people pretty quickly. We were down for maybe an hour or two. Interesting. And do you have any idea, like, what percentage of, let's say, the total fleet across the customers was hit versus what percentage you were able to kind of, you know, proactively remove CrowdStrike from?
So we only proactively removed it from our own systems. By that time, everything else had been hit. Or even, all of our servers got hit except those RMM tools. So yeah. Yeah.
It was basically 60% of our customers and 95% of our own systems. Yeah. Did you have to go through some, like, kind of recovery techniques, you know, mailing out keys and whatnot to get people back online? It wasn't as bad. We did roll out some instructions and keys as well, but, yeah, it wasn't as bad as that cyberattack.
Gotcha. Gotcha. Obviously, the machines didn't need a full rebuild. So yeah. Yeah.
Yeah. And it's so funny. I mean, if you look at kind of the post-mortem on that event itself, right, I think a lot of people have classified it as a cyberattack, and I don't really view that as an accurate description. You know? It was basically a bad patch.
Right? It's like a bad software update. It was a botched update. Yeah. So I guess, yeah, the lesson learned from that is to make sure that you thoroughly test updates and then test the rollback as well.
Yeah. Before you actually go. Totally. And it comes back to something you said towards the beginning of the episode. It's like always the fundamentals and the basics.
And, you know, we've done a few episodes here on this breach series, and I think almost every guest that we've had on has reinforced that same set of principles. Don't overlook the basics. Remember the fundamentals. Backups have come into play, I think, in every single episode that we've done in the breach series to date. So the importance of backups, I think, can't be overstated.
Well, Adam Burns, thank you so much, for taking the time, for sharing your experience. I know this was a little while ago, but I love learning about this. And I think all the things that you said are still relevant in the world today. For people who wanna learn more about you and about the work you're doing on the email security side, it's just www.BlackVeil.co.nz and that's n-zee or n-zed as you would say. Correct?
Yeah. That's correct. If you go to the site now, you'll see it's under maintenance because we're working on a couple of things. We'll be back up in April. But, yeah, feel free to reach out and say hi.
We're on LinkedIn. There's a company page on there if you wanna pop over and say hi. Awesome. Awesome. Well, thank you one more time, Adam.
And to our audience, remember the call is still out. We've got a number of these episodes recorded. They'll be releasing, but we're also always looking for guests who can come on, share their knowledge, share their experience of what they've been through in a real world cyber breach scenario. We will talk to you next time on Modern Cyber. Bye bye.