Effective post-hunt activity is one of the overlooked threat hunting steps that cybersecurity organizations can take to maximize the ROI of their threat hunting programs. The measures organizations take to follow up on their cyber threat hunting findings often pay long-term dividends in detection and defense.
While every individual cyber threat hunt holds intrinsic value simply for its ability to find stealthy adversaries active in an environment, the true value should go beyond that. The long-term gain from a threat hunt lies in identifying the new tactics, techniques, and procedures (TTPs) the adversary used to get around the organization’s detection mechanisms. Ideally, threat hunts should fuel continual improvements to an organization’s defenses, primarily detection content.
After all, let’s face it: if a threat hunting team detects something malicious that snuck through defenses and the security organization doesn’t learn from it, then CISOs are leaving money on the table.
This is why a cyber threat hunting plan should include next steps for when a hunt hypothesis is proven. Those steps should cover not only escalation, but also the work needed to develop threat detection content around that particular threat. That detection content can feed an organization’s existing security controls and be pushed to traditional security analysts, who can then watch for similar TTPs and behaviors in the future. This way, the threat hunting team does not waste expensive resources doing the same hunt over and over again.
The content that follows up on cyber threat hunt findings can take a number of forms: a piece of SIEM content, a YARA rule for endpoint detection and response (EDR) platforms, or Snort or Suricata rules for IDS/IPS devices, among others. Typically, these will not be low-level atomic indicators of compromise, but will instead be based on network or host artifacts, tools, or behaviors that can be mapped to a framework like MITRE ATT&CK.
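To make the distinction concrete, here is a minimal sketch in Python of behavior-based detection logic rather than atomic-IOC matching. The event schema (`parent_image`, `image`) and the process names are assumptions for illustration; real content would live in a SIEM or EDR rule language, but the idea is the same: flag a behavior (an Office application spawning a command interpreter, which maps to MITRE ATT&CK T1059, Command and Scripting Interpreter) instead of a hash or IP an adversary can trivially change.

```python
# Hypothetical behavior-based detection sketch (assumed log schema, not a real EDR API).
# Flags Office parents spawning scripting shells -- maps to MITRE ATT&CK T1059.

OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe"}
SHELL_CHILDREN = {"cmd.exe", "powershell.exe", "wscript.exe"}

def detect_suspicious_spawn(event: dict) -> bool:
    """Return True when an Office parent launches a command/scripting shell."""
    parent = event.get("parent_image", "").lower().rsplit("\\", 1)[-1]
    child = event.get("image", "").lower().rsplit("\\", 1)[-1]
    return parent in OFFICE_PARENTS and child in SHELL_CHILDREN

# Two illustrative events: one suspicious spawn chain, one benign.
events = [
    {"parent_image": r"C:\Program Files\Microsoft Office\WINWORD.EXE",
     "image": r"C:\Windows\System32\cmd.exe"},
    {"parent_image": r"C:\Windows\explorer.exe",
     "image": r"C:\Windows\System32\notepad.exe"},
]

hits = [e for e in events if detect_suspicious_spawn(e)]
print(len(hits))  # 1
```

Because the rule keys on the parent/child relationship rather than a specific payload, it survives trivial changes to the malware itself, which is exactly why post-hunt content built from artifacts and behaviors outlasts content built from atomic indicators.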
But what if a hunting hypothesis isn’t proven? That can be both great and not-so-great. Great, because it might indicate that no such activity is occurring within the network. Not-so-great, because it could also mean that the threat hunting team’s data or methods simply aren’t finding that particular threat.
If a threat hunting team doesn’t produce any findings for a single hunt, that may not be cause for alarm. However, if the team comes up short or empty-handed over numerous similar hunts, the security team may want to follow up with some form of validation of the hunt’s methodology and log sources.
This is where techniques like cyber threat emulation within an environment can help. Rather than detonating live malware, a team can use emulation to safely simulate a particular malware or threat behavior and confirm that the team can see and pick up on the clues it leaves along the way. This helps prove that the team found nothing because there was nothing to find, rather than because of a flawed methodology.
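The validation loop described above can be sketched in a few lines of Python. This is a toy illustration, not a real emulation framework: the marker filename and the `collect_file_events` "log source" are assumptions standing in for actual EDR or file-auditing telemetry. The point is the workflow: harmlessly reproduce one observable artifact of a technique (here, loosely modeled on staging a file to disk), then check whether the telemetry the hunt relies on actually captured it.

```python
# Toy sketch of validating hunt visibility with harmless emulation.
# collect_file_events() is a stand-in for a real log source (EDR / file auditing).

import os
import tempfile

def emulate_file_staging(directory: str) -> str:
    """Create a benign marker file that mimics a file-staging behavior."""
    path = os.path.join(directory, "hunt_validation_marker.txt")
    with open(path, "w") as f:
        f.write("benign emulation artifact - safe to delete\n")
    return path

def collect_file_events(directory: str) -> list:
    """Stand-in for the team's actual telemetry pipeline."""
    return [{"event": "file_create", "path": os.path.join(directory, name)}
            for name in os.listdir(directory)]

with tempfile.TemporaryDirectory() as workdir:
    marker = emulate_file_staging(workdir)
    events = collect_file_events(workdir)
    visible = any(e["path"] == marker for e in events)
    # If visible is False, the gap is in collection, not in the adversary's absence.
    print("telemetry captured artifact:", visible)
```

If the emulated artifact never shows up in the collected events, the team has learned something just as valuable as a proven hypothesis: the negative hunt results reflect a visibility gap, not a clean environment.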
As organizations work to mature their threat hunting into a repeatable, structured program, it is essential to cover all the steps from beginning to end to achieve long-term detection and defense. Cyborg writes more on this topic in our blog, What Is Structured Threat Hunting?