
Incident Response Plans (IRPs). The good, the bad, the ugly.

Mike Krass • Wednesday, June 29, 2022 • 15 minutes to listen


Transcript

Opening

Hello everyone, and welcome to What's the Problem, the show that explores problems that buyers, practitioners, operators, and leaders in the world of cybersecurity face today.

Today, we are fortunate enough to have James Williams join us as a guest expert.

Conversation

Mike Krass: James, say hello to the listeners.

James Williams: Good afternoon, listeners. I hope everyone's well.

Mike Krass: All right, James. Let's get into it. Tell us why you're qualified to talk about security.

James Williams: I've been in cybersecurity for 38 years now, and I was certainly in security before "cyber" became the buzzword. I have a military background in the United States Air Force, where I served for ten years. I started out working on the KY3 and KY7, which were military cryptographic equipment, and was a cryptographic specialist in the United States Air Force.

My background covers 30 years of security, including managed security service providers with the major corporation Symantec, as well as work in the federal space providing cybersecurity for the majority of federal agencies here in the DMV area. My most recent knowledge and experience come from providing cybersecurity in the cloud for the Decennial Census taken in 2020. You can understand the challenges, with COVID in the air at the time, of putting that together and coming out of it with all of the personnel and America's information secured.

Mike Krass: Absolutely. James, I did want to take a moment to thank you for your service. We have quite a few folks who have been on the show with military backgrounds, and we always want to take a moment to say thank you for serving the United States of America. Well, you gave us a lot to chew on there. You're talking about the federal space and the census. I imagine more than a few of the listeners to today's show can thank you for safeguarding a lot of their information if they participated in the 2020 census here in the United States.

Let's get into a problem. James, you've got all this military background. In the world of security, you've got the federal work, and you've got the MSSP work. Name the problem that you want to explore with our listeners today. What's the problem on your mind around security incidents and threats?

James Williams: There are multiple problems in the world today, but one of the problems that always piques my interest and gets my juices going is the problem surrounding IRPs, incident response plans, for organizations, and that's corporate, commercial, and federal agencies. Without a clear-cut incident response plan to escalate or deal with today's ransomware or breaches, you can see losses, just to put it in dollars, ranging anywhere from the time you spent on an incident to millions of dollars when that proper escalation isn't in place. Escalation can take up to an hour, or in some cases no escalation occurs at all, because the operational SOC analyst sitting in the SOC at three o'clock in the morning doesn't want to wake anybody up until business hours at nine o'clock. And by that time, your containment strategy and eradication strategy are long delayed, and the robber could have been in the bank, cleaned it out, and be on their way to Mexico.

Mike Krass: Tell me about a good example of an IRP.

James Williams: A good example of an IRP? I've seen them evolve over the years. Some are real basic, and others are very detailed. I'm detail-oriented, and I like detail-oriented incident response plans where people don't have to think but just act. In this particular case, that means documenting the impact of the incident, the applications impacted, and the severity guidance in charts that allow people to act. For example, if I'm the SOC analyst sitting at the console at three o'clock in the morning, and I see we have 250 applications and only one application is impacted, then I go to my chart and see what severity I can give that, whether that be moderate, high, or critical. But again, that can change based on it being an HVA, or high-value asset. Even though only one high-value-asset application has been impacted, that could still be rated critical in severity, and it continues you down the path of how to set up your incident response bridges, communications, and escalations out to individuals, and, if it's in the civilian commercial realm, how to present this to the media. So that would be an incident response plan that gives you everything from A to Z: who to escalate to, what the severity is, what to communicate, how it should be communicated to the media, and, if there are third-party vendors or other important parties, how to escalate to them as well and keep them in the loop.
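To make the idea of a severity chart concrete, here is a minimal sketch of the kind of lookup James describes: classify an incident by how much of the application estate is impacted, and bump the severity to critical whenever a high-value asset is involved. The function name, thresholds, and severity labels are hypothetical assumptions for illustration, not taken from any specific incident response plan.

```python
# Hypothetical severity-chart lookup. Thresholds and labels are illustrative only.

def classify_severity(total_apps: int, impacted_apps: int, hva_impacted: bool) -> str:
    """Return an incident severity based on scope and asset value."""
    if hva_impacted:
        # A single high-value asset being hit can make the incident critical
        # even when the overall footprint is tiny.
        return "critical"

    impacted_ratio = impacted_apps / total_apps
    if impacted_ratio >= 0.10:
        return "critical"
    elif impacted_ratio >= 0.02:
        return "high"
    else:
        return "moderate"


# Example: 1 of 250 applications impacted.
print(classify_severity(250, 1, hva_impacted=True))   # critical (high-value asset)
print(classify_severity(250, 1, hva_impacted=False))  # moderate
```

The point of encoding this in a chart (or a trivial function like the sketch above) is exactly what James calls out: the analyst at three o'clock in the morning doesn't have to think, only act.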

Mike Krass: So would a poor example of an escalation just be the absence of all of those things? Are there certain things you just listed on the positive, or good, side of the example that are more important than the others?

James Williams: It's a cascading effect. Once you determine the severity, on the good side, those things begin to cascade into who should be notified. It may be decided at this point that we're going to make a pre-incident response, which means just a couple of people from the incident response team get together and say, "Hey, okay, here's the information we've got; we're going to make a judgment call. Okay, we're going to escalate to the next level." But like I said, on the other hand, where there's a poor example, I've seen organizations where they don't even know where the incident response plan is. I've interviewed SOC analysts to the point where I said, "Okay, where's your incident response plan if you see a security event that you want to escalate right now?" and the answer was, "I don't know." So that's the answer to that question.

Mike Krass: Yeah, the house is on fire. Where's the closest fire hydrant? I'm not sure yet.

James Williams: Exactly. Or do you even have a fire extinguisher in the home? Because most homes don't carry one today. That's a good example.

Mike Krass: Well, as we wrap up the discussion on incident response plans, the last point I wanted to bring up is that an IRP needs to be written in advance. You can't be flying the airplane while you're still bolting the wings on as you go down the runway. I guess, theoretically, you can, but who knows how far you'll fly. An incident response plan is written in advance. What role do things like tabletop games or other practice exercises play in updating that IRP? Is that something that you also practice and recommend? Is there a certain cadence? Do we do these tabletop games once a year or every six months? What's your advice to listeners?

James Williams: You brought up a very good point. Every incident response plan is only as good as the exercising of it. I can still be that SOC analyst at three o'clock in the morning, sitting there and finally seeing an incident, and here's this huge document that I know where it's located, but I've never been through it or had any official training on it. This is what I like to call cyber drills. The cyber drills consist of writing up a scenario for some type of attack, which could be a DNS query to a malicious site being detected. You take that scenario and write it up. Then you take your SOC analysts and your incident response team, and you work together and go through the steps of that incident response plan. It not only trains and gets your people incident-ready, but it also helps you go through the incident response plan and ask, "Okay, what's my lesson learned from this? Is there additional information that needs to be added? Are there some steps that are too many? When we went to our containment strategy, were there two or three steps where we could have saved time and contained a cluster or something?" Say we've got ten clusters sitting out there, one of them got infected, and we need to pull that one. Did those two steps take an extra 30 minutes when we could have brought that cluster down much sooner? So through those exercises, those cyber drills as I call them, we're going through that process, and we're also timing ourselves. It's not only the process itself, but how long does this process actually take? How long does it take for us to escalate out? And then we make sure we have primary and secondary contacts, so that if there's somebody who needs to make a decision and they're at the lake with their family with no cell service out there, we've got a second person we can call to make those decisions and it doesn't slow down the process.
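As a rough illustration of the two things James emphasizes here, timing each phase of the drill and having a secondary contact when the primary is unreachable, here is a minimal sketch. The phase names, contacts, and the reach_contact placeholder are hypothetical assumptions for the example, not part of any real IRP or tooling.

```python
# Hypothetical cyber-drill runner: times each IRP phase and falls back from a
# primary to a secondary contact. All names and contacts are illustrative only.
import time

IRP_PHASES = ["detection", "escalation", "containment", "eradication", "recovery"]

ESCALATION_CONTACTS = [("primary_on_call", "555-0100"), ("secondary_on_call", "555-0101")]


def reach_contact(name: str, number: str) -> bool:
    """Placeholder for an actual page or call; returns False if unreachable."""
    print(f"Paging {name} at {number}...")
    return name != "primary_on_call"  # pretend the primary is at the lake, no cell service


def run_drill(scenario: str) -> None:
    print(f"Cyber drill scenario: {scenario}")
    timings = {}
    for phase in IRP_PHASES:
        start = time.monotonic()
        if phase == "escalation":
            # Try the primary first, then the secondary, so one unreachable
            # decision-maker does not stall the whole escalation.
            for name, number in ESCALATION_CONTACTS:
                if reach_contact(name, number):
                    break
        input(f"Press Enter when the '{phase}' step is complete...")
        timings[phase] = time.monotonic() - start

    print("Drill timings (lessons learned):")
    for phase, seconds in timings.items():
        print(f"  {phase}: {seconds:.0f} seconds")


if __name__ == "__main__":
    run_drill("DNS query to a malicious site detected")
```

Even a crude stopwatch like this makes the lessons-learned conversation concrete: which steps took an extra 30 minutes, and where the plan can be trimmed.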

Mike Krass: Yeah, when it comes to decision-making, what's the successive order of decision-makers? Who's notified first? Who gets to keep waterskiing at the lake because they're out of touch and unreachable, while somebody else fills in for that role in that exact moment of need?

James, I really appreciate you coming on and walking our listeners through the topic of incident response plans: some good, some bad, and a little bit of ugly thrown in there. Let's finish this episode with some fun. Tell our listeners about a terrible haircut you've had at some point in your life.

James Williams: That's going to be a tough one. I'm bald-headed. But I did have a particular case with my beard, which connects down to a goatee between the ear and the jaw. There was a piece shaved out of the middle, so it looked like the bridge of the beard connecting the ear to the chin was gone. There was a misstep that got that one section cut out, which led me to go back to a goatee, get rid of all of it, and let it all grow in again.

Mike Krass: Yeah, I was thinking, burn it to the ground. You've just got to start over at that point, because you're not going to make it even on the other side.

James Williams: You have to make it pretty much even. I'm very keen on having the facial hair looking very neat.

Mike Krass: I love it. Well, I'm glad that the bridge grew back in; it sounds like we're back on better ground now. And I am so glad that you were able to educate the listeners on the topic of incident response plans. Some people might want to reach out to you and connect professionally. James, what's the easiest way to get in touch with you? How would listeners reach out and say hi?

James Williams: There are a couple of ways. One is LinkedIn; they can go to my LinkedIn profile under James L. Williams, with my CISSP certification following it. The other is to go to my business website, jlwtech.com.

Mike Krass: Awesome. You heard it, listeners. If you want to get in touch with James, two easy avenues: jlwtech.com, or go check him out on LinkedIn under James Williams, CISSP.

James Williams: James L. Williams, CISSP.

Mike Krass: Perfect. Thank you so much for being a guest on the show. James, say goodbye to the listeners.

James Williams: Thank you, Mike. I appreciate the time, and to all the listeners out there, have a great day.

James Williams

James Williams is the Global Network Security Manager at Symantec and the owner of JLW Technologies, a security consulting firm based in the DMV area (Washington D.C. / Maryland / Virginia).
