Let’s face it: sometimes we are guilty of impatience. No matter the industry or problem, it seems like everyone is offering a quick fix to solve all our ills. But it has been my experience that those “shortcuts” can, and often do, turn into “longcuts,” as the saying goes. Cyber security, and threat detection in particular, is no different. The industry often seems infatuated with revolutionary “big fixes.” In reality, a series of evolutionary “small fixes” can have a much longer-lasting impact. We’ve put together four of these small fixes that can make a big difference to threat detection in organizations.
The first small fix that can help threat detection is something that applies to many facets of life: moderation. Organizations often believe that more security “content” means more security. This holds for indicators, rules, signatures, and queries alike. The reality is that quantity alone can actually hurt security operations, because unvetted content in bulk produces a huge number of false positives. These false positives burn analysts out and overwhelm support teams.
Organizations should focus on moderation. For instance, when ingesting indicators, don’t ingest every indicator found. Instead, focus on ones that have higher fidelity and that threat actors use long term, like command and control (C2) nodes. For rules, if they are technology-specific, enable only the ones that apply.
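To make that concrete, here is a minimal sketch of what vetting an indicator feed might look like before ingestion. The field names, indicator types, and confidence threshold are entirely hypothetical; a real feed would have its own schema and scoring.

```python
# Hypothetical indicator records as they might arrive from a threat feed.
# The "type", "confidence", and "value" fields are illustrative only,
# not any specific vendor's schema.
RAW_FEED = [
    {"value": "198.51.100.7", "type": "c2", "confidence": 90},
    {"value": "203.0.113.44", "type": "scanner", "confidence": 40},
    {"value": "evil.example.com", "type": "c2", "confidence": 85},
    {"value": "10.0.0.5", "type": "benign", "confidence": 10},
]

# Only ingest long-lived, high-fidelity indicator types, such as C2 nodes.
HIGH_FIDELITY_TYPES = {"c2"}
MIN_CONFIDENCE = 75

def vet_indicators(feed):
    """Keep only indicators worth alerting on, rather than the whole feed."""
    return [
        ind for ind in feed
        if ind["type"] in HIGH_FIDELITY_TYPES
        and ind["confidence"] >= MIN_CONFIDENCE
    ]

vetted = vet_indicators(RAW_FEED)
print([ind["value"] for ind in vetted])  # only the two C2 indicators survive
```

The point is not the specific threshold, but that a deliberate filter sits between the feed and your detection pipeline.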
Another small fix that can have a big impact on threat detection is analyst-focused process documentation. Now, to some this may appear to be a “not-so-little” fix. But in reality, most organizations see the same detections again and again, so building procedural documentation around the top ten is often easier than it appears. Why is it important, though?
Building out procedural documentation has a host of benefits for organizations. If done properly, it allows new analysts to be productive immediately. It also ensures that detections are triaged, investigated, and remediated consistently. And as organizations focus on automation, that documentation forms the basis for the automation itself.
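As a sketch of how that documentation can feed automation, imagine the top detections captured as structured runbooks that tooling (or a new analyst) can look up. The detection names and triage steps below are illustrative only, not real playbooks.

```python
# A hypothetical runbook store: each common detection maps to its
# documented triage steps. Names and steps are made up for illustration.
RUNBOOKS = {
    "suspicious_powershell": [
        "Capture the full command line and parent process",
        "Check whether the script is signed or known-good",
        "Isolate the host if the script contacts an external IP",
    ],
    "impossible_travel_login": [
        "Confirm the user's recent login locations",
        "Check for VPN or proxy use",
        "Force a password reset if the travel cannot be explained",
    ],
}

def triage_steps(detection_name):
    """Return the documented steps, or a default escalation path for
    detections that do not have a runbook yet."""
    return RUNBOOKS.get(detection_name, ["Escalate to a senior analyst"])
```

Once the steps live in a structure like this, the same data can drive a SOAR workflow instead of a wiki page, which is exactly why the documentation becomes the basis for automation.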
During an investigation, especially one that turns out to be a true positive, every second counts. If analysts have the information they need to understand the threat, they spend less time Googling it and don’t have to “fill in the blanks” mid-investigation. This prevents rabbit holes and shortens mean time to detect (MTTD).
Organizations often judge how protected they are by the numbers. They look at how many indicators, rules, signatures, and queries they have for a particular threat; the higher the number, the more protected they feel. That is a perfectly rational line of thought, but it is often fatally flawed, because, from our research, most organizations get their detection content from free sources. And as we mentioned before, that the community provides this content is a testament to the people in it. Still, the problem remains that most organizations can’t test that content.
Or can they? In fact, building validations for content has never been easier, using tools like those provided by Red Canary. These tools allow organizations to simulate the specific tactics and techniques that malware uses, and to do so safely, in their own environment. This lets teams efficiently test their detections, and their analysts’ responses to those detections. Will those tests replace a pen test or red team? Absolutely not. But they give organizations peace of mind that their threat detection works as intended.
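The overall shape of such a validation run might look like the sketch below. Everything here is a stand-in: a real harness would trigger simulations with a tool like Atomic Red Team and query your SIEM for the resulting alerts, rather than checking an in-memory set. The MITRE ATT&CK IDs are real (T1059.001 is PowerShell, T1003 is OS Credential Dumping), but the coverage data is invented.

```python
# Pretend only one technique currently has a working detection rule.
# In reality this "coverage" would be discovered by querying the SIEM
# after each simulation, not hard-coded.
DETECTED_TECHNIQUES = {"T1059.001"}

def run_simulation(technique_id):
    # Stand-in: a real harness would safely execute a benign test for
    # this technique on a lab host (e.g., via Atomic Red Team).
    return technique_id

def check_detection(technique_id):
    # Stand-in: a real harness would poll the SIEM for an alert that
    # matches the simulation it just ran.
    return technique_id in DETECTED_TECHNIQUES

def find_coverage_gaps(techniques):
    """Run each simulation and report techniques that produced no alert."""
    gaps = []
    for technique in techniques:
        run_simulation(technique)
        if not check_detection(technique):
            gaps.append(technique)
    return gaps

print(find_coverage_gaps(["T1059.001", "T1003"]))  # → ['T1003']
```

The valuable output is the gap list: it tells you exactly where the pile of free detection content is not actually protecting you.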
The concept of a “quick fix” can be very alluring. The reality, though, is that almost every significant security improvement in an organization comes not from a revolution but from an evolution: small steps and changes that can have big impacts, especially with threat detection.