In this special episode of the Modern Cyber Breach Series, we welcome back Jeff Lyon, CEO and Founder of TheCyberWild. Jeff shares a firsthand account of responding to a major ransomware attack on a hospital—one that unfolded just before the COVID-19 pandemic.

From the initial emergency call to the recovery process, Jeff walks us through the critical decisions, the challenges of an unprepared IT team, and the difficulties of restoring operations while ensuring patient care. He also sheds light on lessons learned, including the importance of proactive security measures, incident response planning, and continuous monitoring.

Join us as we dive deep into the realities of cybersecurity in healthcare, the risks of ransomware, and why no organization—regardless of size—can afford to be unprepared.
About Jeff Lyon
Jeff Lyon is a Business Information Security Advisor who empowers organizations to take control of their cybersecurity frontier. As the CEO and Founder of TheCyberWild, Jeff guides organizations through the intricacies of cyberspace. With a wealth of knowledge accumulated through years of hands-on experience and strategic leadership, he provides security awareness training, security assessments, 24x7x365 security monitoring, governance, risk, and compliance strategies, and other cybersecurity solutions tailored to the unique needs of each client.
TheCyberWild Website - https://thecyberwild.com/
All right. Welcome back to another episode of Modern Cyber. We are in our Breach series again today, and we're actually delighted to be joined by somebody who has been on the show before. For those unfamiliar, our Breach series is a series where we go deep at times or we go topically at times, really depends on the situation, but we go into a real world breach example with somebody who was there, who's familiar with the incident, and our goal is to walk you through what were the steps taken to understand what happened, and really importantly to take away lessons learned from the whole situation. Breaches are unfortunately a terrible thing to go through.
I've been through them myself. You might remember that my episode was actually the first one in our breach series, but they are things that we can really build on, share experiences, and learn from in a respectful, anonymized way. So as with all other episodes, we won't be talking about named organizations. We will share what we can to the extent that we can and again focus on lessons learned. With all of that said, I'm delighted to be joined today by Jeff Lyon.
Jeff is a business information security advisor who empowers organizations to take control of their cybersecurity frontier as the CEO and founder of TheCyberWild. As I mentioned, Jeff's been on the show previously. If you want more details about Jeff and his background, I really do encourage you to go back to that previous episode with Jeff. We talked about health care, we talked about challenges around cybersecurity in health care. Health care is one of the most targeted industries by threat actors.
With all of that said, Jeff, thank you so much for taking the time to join us again. Really looking forward to hearing and learning from you today on today's episode. Thank you, Jeremy. I'm glad to be here, and good afternoon, everyone. Awesome.
Awesome. Well, let's start by getting into it, and I kinda wanna start the conversation like we do with a lot of these conversations, which is how did you first learn that something was going on? Well, I was on my way home from a long weekend, a wonderful trip, and I got a phone call on Sunday afternoon. And it was one of my clients, the hospital. Mhmm. And they were panicking.
They didn't know what to do. They had got a mysterious email saying that their systems were being encrypted, and, yes, they were. Stuff was shutting down. Yeah. And it was a ransomware attack.
Yep. And they did not have a clue what to do. So the next morning, I was on a plane to the hospital to help spearhead the recovery efforts and to help determine what caused it and get them back up and running. I mean, it was a tough situation. Once I got there, I realized just how unprepared they were for such an attack. Yeah.
And, you know, this is a, you know, this is a hospital environment. I tend to think of hospitals, and I grew up, by the way, as a son of a doctor, so I spent a lot of time in hospitals. They're busy places. They're places with a lot going on. They're not places that I think of as having a ton of extra space for somebody to just kind of, you know, plop down, set up, and get working on a computer.
What was that like? It was tough. I mean, their IT staff was strained. Okay. They did not really have anyone there that was a cybersecurity person, and no one really claimed that role.
It wasn't assigned to them either. Yeah. I mean, they were fighting fires all the time. So when this happened, no one was prepared to take control. One thing we saw, the CIO was actually out of the country, you know, on a cruise.
And so the system admin was left in charge, didn't know what to do. The incident response plan, nobody knew where it was or if it even existed. Yeah. Yeah. So that means if it even existed, it was probably pretty dusty.
Yeah. Yeah. Well, these things happen. People get busy, and they just don't have the luxury that they need to sit and plan things like this if they're always in firefighting mode, which Yeah. Unfortunately, with a lot of the smaller institutions, this happens.
Yeah. No. I mean, it's one of the downsides of running a quote unquote lean organization where you don't have a lot of buffer and you don't have a lot of, let's say, extra manpower or capacity. And you know, like along those lines, one of the things that I've heard and we've talked about previously on the show about incident response plans is that even if you have one, great. Maybe it's dusty to your point, maybe it's a little bit outdated.
It does need regular review whether that's on an annual or let's say like twice a year kind of basis because systems change, processes change, tools change, technologies change, all of that. But another big point that has come up regularly around this is that what if somebody is on a cruise and out of the country and they're not available? Is there somebody else who's also been trained on the plan that you can work with? So, okay, sounds like you didn't really have much of a plan to go off of. So how did you deal with that?
Did you kind of, like, make up a plan on the fly, or you just went with kind of, like, okay. I've been through these incidents before. I roughly know the process unfolds like this. Yeah. Well, we had to kinda ad lib because there wasn't a plan in place.
But so what we did, first of all, I mean, from my team, I was also the first one there, I commandeered a room to set up as what we call a war room, where Yeah. This is where all the action and all the decisions are going to be made. We would have whiteboards where things could be written down and Yeah. Just kinda get some control over what was going on. We had a lot of staff from the firm come in from overseas to help out with the recovery efforts.
And Yeah. They would take over a lot of the work of pulling the systems offline and bringing them in and reformatting them, reimaging them if at all possible. Yeah. Once they were infected, once they were determined infected. And I actually took command of their endpoint protection to help out and Yeah.
Would help push that out and keep track of where we were with it. And Yeah. It just really opened my eyes to even more things. I mean, there were some of their systems that had not been patched with their endpoint protection for two years. Yeah.
Yeah. Yeah. I think that's... Two years is just scary. Yeah. Yeah.
But it's an unfortunate reality. And, you know, like, I've been in cybersecurity for a long time, and I remember the first time that I saw the stat about how long a vulnerability lives in a production environment, you know, it's something like 180-some days, you know, more than six months. From the time that, and I think this stat particularly is for critical and high severity vulnerabilities, I don't know what CVSS score and above, but, you know, it's been 180 or more for, like, twenty years. And, like, that stat has never improved in the life cycle that I've known it.
So you're dealing with this situation. You've got stuff that hasn't been necessarily patched in two years. Did you kind of, like, say, okay, great, but if those systems aren't infected right now, that's not high priority. Highest priority is dealing with infected systems and recovery. How did you manage, like, we know we're now uncovering this other stuff, but we've got this, like, in-our-face emergency to deal with.
How did you balance that? Well, first of all, how can we sustain operations in a disaster mode where it's not patient impacting? The most important thing is your patient. Yeah. Yeah.
We would make sure that the staff is still able to do their job, the patients are still being taken care of. So pen and paper for that is about all we can do. Yeah. Backups were... so systems were being restored from backups. Unfortunately, some of the backups also had, I would call it, the script for the... For the ransomware.
Yeah. Yeah. It was actually TrickBot, if you've ever heard of that. I've heard the name. I'm not familiar, but I've heard the name for sure, and most of our audience will have as well, I'm sure.
Yeah. And, but, you know, we had to find ways to work around it. We had to be able to clean it off because you couldn't really rely on the backups for some of that. But eventually, you know, the most critical systems that the hospital needed to come back online were restored first. Okay.
Makes sense. Less critical systems, I mean, it could have been two weeks down the road before everything else was back online. Okay. In that intervening time, I'm curious about one thing because I personally haven't been through a ransomware attack, knock on wood here.
But one of the concerns that I've heard around ransomware attacks is the high probability of reinfection. So as you deal with that, it's like, okay, you've prioritized the critical, like, the mission critical systems, makes perfect sense, of course. You've got these other systems. What do you do with them in the meantime? Do you take them all off the network to prevent the risk of reinfection, or how do you handle that?
Yes. You take them... well, we took them off the network, just to be cautious. And yeah. Okay. Makes sense.
And did you find... okay, so you found that some of the backups were already kind of infected, so to speak, so you couldn't really rely on those. Hopefully, other backups were kind of okay, so you were able to work through some systems that way. Were you able to end up tracing back to figure out how the initial infection got into the environment? Yes.
We actually were. It was a long, complicated process, and, of course, we did this much later. Okay. After the recovery was complete. But Okay. Using the endpoint protection logs from the system that they had at the time, we were able to go back and follow this particular pattern.
Okay. Well, first of all, the event happened in March, right before COVID hit. Jeez. Yeah. Terrible.
Yep. Which means the hospital is getting even busier. But once we isolated that it was the TrickBot thing, we were able to trace it back using these logs and realize that this thing had been lurking in the network passively since the previous November. Wow. So that's about four or five months at that point.
Yep. Yeah. So something was there, something triggered it. Yeah. Yeah.
But that almost brings to mind my question then that, like, how confident did you feel in the overall network at that point? Because if I'm thinking about this again from the perspective of the risk of reinfection, there's part of me that says, I don't know if I want to bring anything back online into this network, but what I might want to do, and I don't know if this is even possible, is provision a new network, and as I clean systems and get them reinstalled, migrate them over to the new fresh network. Was that an option or not really? Because, to your point, you know, this is, like, right in the run up to COVID. There's a lot going on at a health care facility at that time.
A lot going on, and they would not have had the resources. I mean, that's what I'm saying. A very resource-strapped organization. Yeah. Yeah.
Okay. Okay. So you kinda had to deal with it. You knew that it was on there since November. Were you able to find, like, a cleaner script or something, so that as you did bring systems back online, you did get some level of confidence that, like, hey.
We've got all traces of it off of these clean hosts, or what did you do there? Yes. Using the same tool, we were able to create scripts that make sure it was removed and continuously monitored these devices. Okay. And to make sure that they were clean before allowing them back onto the main network.
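To make that concrete, here is a rough sketch of the kind of pre-admission check Jeff is describing, in the spirit of "prove the host is clean before it rejoins the network." It is illustrative only; the file paths and hashes are placeholders, not real TrickBot indicators and not the actual scripts used in this incident.

```python
# Minimal sketch: verify a host shows no known indicators of compromise (IOCs)
# before re-admitting it to the network. The IOC lists below are placeholders,
# not real TrickBot signatures; a real run would use indicators supplied by the
# endpoint protection vendor and threat intelligence feeds.
import hashlib
from pathlib import Path

SUSPECT_PATHS = [  # hypothetical file locations to check
    Path("C:/Users/Public/example_dropper.exe"),
]
SUSPECT_HASHES = {  # hypothetical SHA-256 hashes of known-bad binaries
    "0" * 64,
}

def sha256(path: Path) -> str:
    """Hash a file in chunks so large binaries don't exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def host_is_clean(scan_root: Path) -> bool:
    """Return True only if no placeholder IOC is found under scan_root."""
    if any(p.exists() for p in SUSPECT_PATHS):
        return False
    for exe in scan_root.rglob("*.exe"):
        try:
            if sha256(exe) in SUSPECT_HASHES:
                return False
        except OSError:
            continue  # unreadable file; skip it rather than abort the scan
    return True

if __name__ == "__main__":
    print("clean" if host_is_clean(Path("C:/Users")) else "re-isolate host")
```

In practice this kind of check would run repeatedly, with results logged centrally, so a host that regresses gets pulled back into isolation rather than quietly reinfecting the network.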
I mean, we did have them isolated. Yeah. Yeah. How long did this whole thing take you? I mean, from, you know, from that initial flight up on the Monday morning or whatever it was, and, you know, then you've got a couple weeks, and then you've got probably the hospital going into some kind of restricted access mode.
I've got to imagine, like, well, first, how did you deal with that? And second, how long did the whole process take? I was there, two weeks. And after that, everybody went virtual because of COVID. Yeah.
And I believe the team from overseas stayed there a bit longer to continue with the restoration. Okay. Okay. And honestly, I don't remember all the details of it since, you know, it's been five years. Well, it's... But I do recall that I was still working on some of the things months afterwards.
Yeah. Yeah. Three or four months afterwards. Yeah. I mean, five years ago, a high stress situation to start off with, with ransomware, couple that with everything else going on relative to the pandemic.
And, yeah, I'm sure it's easy to forget some of the details. We've talked about a couple of the, kind of, let's say, lessons learned along the way about, let's say, you know, the importance of your backups and the importance of having an incident recovery plan or an incident response plan. But I'm curious from your perspective, what did you take away most and what did you learn most from the experience? Well, what I learned most is that health care organizations, regardless of their size or the amount of assets, need to do everything they can, everything possible, to avoid any type of ransomware attack, any type of cyber attack. You know, the next one may be patient impacting.
The next one may not be... well, this one wasn't easy to recover from by any means, but Yeah. The next one may not be recoverable. Yeah. Yeah. I mean, it really does put it in perspective.
We often talk about the cost of a cyber breach, and you'll often hear the statistics around, you know, what is the value of one breached data record, and I think the number is usually in the, like, $160 range. But when you talk about patient impact, you're really taking that question to a different level of consideration that is, you know, impossible to calculate. Right? You can't really put a price on it. So that's really a strong, strong point around, you know, healthcare organizations really having to do everything that they can.
From a process perspective, or let's say, like, a hands-on practical perspective, any other things that really pop to mind as far as just, like, the day to day of it, or something that you ended up having to do that you wish you had been more prepared for or thought about more prior to the event, or with the team coming in? I'm curious, like, because you kinda quarterbacked the situation. Right? So you would have seen the whole project kinda end to end. Yeah.
Actually, I mean, this organization was, like I said, one of my clients, and we were actually working on a plan to boost their cybersecurity efforts. Not just this hospital, but several hospitals. Yeah. And what we were doing, first of all, is a security assessment based on a combination of the NIST CSF as well as the CIS benchmarks. Yeah.
So we did that and created a score for each of the hospitals. And based on that, we did a gap analysis and created what we call a get-well plan to help them determine what they can do immediately to increase their resilience, what can be done with just a little money, and what can be planned for, you know, the next phase. Yeah. So just simple things like password management and MFA and monitoring. Monitoring and monitoring.
And knowing how to monitor. Knowing how to use a SIEM. Yeah. Would be so important. Yeah. Yeah.
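For readers who want a feel for what that kind of scoring and get-well planning can look like, here is a small, illustrative sketch. The control names, maturity scores, targets, and cost buckets are invented for the example; they are not the actual NIST CSF or CIS items, weights, or findings from this engagement.

```python
# Minimal sketch of a benchmark-style gap analysis: each control gets a 0-5
# maturity score, gaps are ranked, and remediation is bucketed by rough cost.
# Everything below (controls, scores, thresholds) is illustrative only.
from dataclasses import dataclass

@dataclass
class Control:
    name: str
    score: int     # assessed maturity, 0 (absent) to 5 (optimized)
    target: int    # desired maturity for this hospital
    est_cost: str  # "low", "medium", or "high" rough remediation cost

controls = [
    Control("MFA for remote access", 1, 4, "low"),
    Control("Endpoint protection kept patched", 0, 4, "low"),
    Control("Centralized log monitoring (SIEM)", 1, 3, "medium"),
    Control("Tested incident response plan", 0, 3, "low"),
    Control("Offline / verified backups", 2, 4, "high"),
]

# Overall maturity as a percentage of the maximum possible score.
overall = sum(c.score for c in controls) / (5 * len(controls)) * 100
print(f"Overall maturity: {overall:.0f}%")

# "Get-well plan": biggest gaps first; cheap fixes go in the do-it-now tier,
# costlier items are deferred to a later phase.
for c in sorted(controls, key=lambda c: c.target - c.score, reverse=True):
    tier = "now" if c.est_cost == "low" else "next phase"
    print(f"{c.name}: gap {c.target - c.score}, cost {c.est_cost} -> {tier}")
```

The same idea scales across several hospitals: score each one against the same control set, compare, and sequence the fixes by impact and cost.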
Developing the incident response plan, and not just developing it and putting it on the shelf to collect dust, but doing those tabletops. Yeah. If you even wanna call them tabletops anymore. I learned yesterday that that's an old-fashioned phrase. Oh, I didn't know that. Well, I didn't either, but somebody told me it was.
I don't remember what the replacement for it is. But Okay. Okay. Yeah. Yeah.
We'll go there. And also, any tool that is there to protect you, never pretend it's set-and-forget. Make sure someone is accountable for it. Yeah. Make sure that they understand everything about that tool, that they know how to manage it, that they know how to tell when something's not working right with it, that they understand the consequences.
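As a small illustration of that point, the sketch below flags endpoint agents that have stopped checking in, so that the accountable owner hears about it instead of the tool silently going stale. The CSV export layout, column names, and seven-day threshold are assumptions made for the example, not any particular vendor's format.

```python
# Minimal sketch: flag endpoint-protection agents that have gone quiet. The
# export format (hostname, last_checkin as ISO 8601 timestamps) and the 7-day
# threshold are hypothetical, not a specific product's schema.
import csv
from datetime import datetime, timedelta, timezone

STALE_AFTER = timedelta(days=7)

def stale_agents(csv_path: str) -> list[str]:
    """Return hostnames whose agent has not checked in within STALE_AFTER."""
    now = datetime.now(timezone.utc)
    stale = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            last = datetime.fromisoformat(row["last_checkin"])
            if last.tzinfo is None:  # treat naive timestamps as UTC
                last = last.replace(tzinfo=timezone.utc)
            if now - last > STALE_AFTER:
                stale.append(row["hostname"])
    return stale

if __name__ == "__main__":
    for host in stale_agents("agent_checkins.csv"):
        print(f"ALERT: {host} has not reported in over {STALE_AFTER.days} days")
```

A check like this is only useful if someone owns the alert; that is exactly the accountability Jeff is calling for.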
Yeah. Yeah. Yeah. That point about, kind of, let's say, the ownership and configuration, and, kind of, you know, making sure that the implementation is done up to the level that it's meant to be done. It's so important, and, I mean, by the way, this is true outside of cybersecurity as well. You hear about so many projects, oh, we bought this thing and we never really got it up and running very well.
So we were a dissatisfied customer and then we ended up, you know, not renewing our license or whatnot. But, you know, how much time and effort went into that initial selection, and also how much opportunity cost is there in the fact that you actually went through that process, bought something, and didn't actually get it properly implemented? You know, that's a real missed opportunity to drive efficiency gains or productivity or security or whatever the outcome was that you originally were targeting. That's a great point. Well, Jeffrey Lyon, I don't have any more questions. Oh, please.
Don't... you're not buying a tool. You're buying a solution and peace of mind. Yeah. You gotta think of it holistically like that. You can't buy one tool that fits everything.
Yeah. That point about buying a solution, I think that's the key point. Mhmm. Awesome. Well, on that note, I will thank you so much for taking the time to join us back on here, for sharing the experience.
I'm sure lots of our audience will learn from this, and hopefully not too many of our audience have had to go through the pains of a ransomware breach themselves. But for those who have, maybe you will learn something from today's experience, or if you think that your organization might be at risk, take some of Jeff's advice to heart, around planning, around implementation, around ownership of tools, around not having just one person go through the IR plan with you. Thanks again, Jeff, for taking the time to join us on the Modern Cyber Breach series. For our audience, we will talk to you next time. Thank you so much.
Bye bye. Goodbye.