Let’s face it, the cyber security industry is full of strong opinions. This is true of best practices, tools, and even caffeine sources. Another topic that can be polarizing, though, is the role of automation. Some people swear that the growing volume of attacks means automation is the only way to keep pace. Others swear that automation is the 21st century version of snake oil. This is an argument I have seen play out for years in SOCs and break rooms around the globe. Lately, though, I have started to hear a lot of folks say things like: threat hunting can be fully automated. Now, let’s be clear, I am not saying that SOAR-type automation doesn’t serve a purpose. Far from it. But can threat hunting be fully automated? Not a chance.
Automation? What Automation?
First, let’s discuss automation and what I mean by it. This matters because most tools and teams already have automation to some degree. Every time AV blocks a file, or a NIDS or SIEM generates an alert, that is automated technology at work. So, to say that automation “doesn’t work” would be untrue. Not only does it work, but that automation is a huge time saver.
But when people say “threat hunting can be fully automated,” that isn’t what they are talking about. Instead, they are often talking about “cradle-to-the-grave” automation: removing the analyst from the equation altogether and replacing them with simple programming logic. And at this point, that isn’t possible.
False Positives, Automation, and Human Analysis
If you talk to any security analyst and ask them about false positives, you’re likely to get a very candid answer. That answer is likely unbridled frustration. They will tell you that their team pays for some of the most cutting-edge technology in the industry. Technology that has gone through countless upgrades and refreshes. Technology that has promised to solve every problem other than world hunger. Even with that technology, though, the analysts continue to see a stream of false positive alerts. Keep in mind, this is only traditional security analysis, dealing with well-understood threats.
This is one of the key reasons we still rely on security analysts. Analysts are able to look at these alerts in ways beyond simple boolean, or even fuzzy, logic. And remember that those threats are well understood. Threat hunting looks for unknown threats, often based on unknown behaviours. This means that hunt teams are going to find false positives in their environment. If the industry still relies on human hands for traditional analysis, those saying “threat hunting can be fully automated” must consider the business impact of false positives.
Artificial Intelligence (AI) and Machine Learning (ML): Man vs. Machine
Another topic brought up when discussing automation is the rise of the machine. The claim is that with AI/ML, the machines will become smart enough to replace analysts. In fact, more vendors than ever will tell you that with their AI/ML, threat hunting can be fully automated! And the reality is that AI definitely has had some interesting (and controversial) moments in recent history.
But AI/ML solutions also carry some big risks, like inversion attacks, bias, and model drift. Even beyond those risks, AI/ML has a big limitation when companies buy an AI/ML product. These security products are often black boxes, even more so than many other products. So while some of these products can protect against attacks, they don’t really explain what they are doing or why. Organizations are left very much at the mercy of the product and vendor, and they may not be able to identify the real problem.
It is this last piece that is critical. Threat hunting is not only about finding malicious activity (though that is a big part). It is also about identifying critical weaknesses and vulnerabilities. Hunters are often the first ones to find new vulnerabilities in an environment. So relying on an AI/ML solution may provide short-term protection, but it is at the expense of long-term defence.
SOAR and the Missing Processes
Many teams facing a deluge of data look to SOAR solutions to save their analysts. But as anyone who has implemented SOAR in a large enterprise can attest, it is often more complicated than plug and play. The reality is that most teams, whether they have SOAR or not, do not have alert-level processes.
These processes, often in the form of flowcharts, walk through the analysis process. They identify the validation and triage steps, the evidence needed, and the decision and escalation points. Without these processes in place, full automation is impossible.
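To make this concrete, here is a minimal sketch of what one of those flowcharts looks like once it is encoded for automation. The `Alert` fields, the `"test-harness"` source, and the severity threshold are all hypothetical, invented for illustration; the point is that every branch (validate, collect evidence, decide, escalate) must be defined in advance before any tool can follow it.

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    """A hypothetical alert record used to illustrate the flow."""
    source: str
    severity: int                          # 1 (low) .. 5 (critical)
    validated: bool = False
    evidence: list = field(default_factory=list)

def triage(alert: Alert) -> str:
    """Walk one alert through a simplified validation/triage flowchart."""
    # Validation step: confirm the alert fired on real data, not a test rule.
    alert.validated = alert.source != "test-harness"
    if not alert.validated:
        return "close"                     # invalid alert, no escalation

    # Evidence step: record what a playbook would need to collect.
    alert.evidence.append(f"logs:{alert.source}")

    # Decision point: only pre-defined, high-severity paths are automatable.
    if alert.severity >= 4:
        return "escalate"                  # hand off to a human analyst
    return "monitor"                       # low severity: watch and re-check
```

Notice that every outcome had to be enumerated up front. The moment an alert needs a judgment call that isn’t in the flowchart, the automation has nowhere to go, which is exactly the gap analysts fill.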
Structured threat hunting, for instance, relies on a hypothesis, often derived from an organization’s threat intelligence and experience. Each hypothesis has unique requirements, and each hunt produces a unique output. This creates a chicken-and-egg scenario: threat hunting can be fully automated only if the engineers building the automation already know what the output will be.
If they know what the output is, is it threat hunting? No, it isn’t.
Seeing the Automation Forest for the Process Trees
As I mentioned above, cyber security is full of polarized opinions. And I would forgive you for thinking this rant is a troglodyte’s response to someone saying “threat hunting can be fully automated!” The reality, though, is that automation is key to cyber security. Maybe even to threat hunting. But that automation should not focus on replacing the human element in security.
Instead, automation should aim to help analysts and hunters with time-consuming, menial tasks. This could include:
- Alerting on pre-defined detections from previous hunts
- Gathering log files or other evidentiary material
- Managing ticket creation and assignment for incident response and remediation
- Producing pre-formatted reports based on identified activity
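The first item on that list is a good example of the right division of labour. Once a hunt has uncovered a malicious pattern, that pattern can be turned into a standing detection. The sketch below assumes a hypothetical finding from a past hunt, encoded PowerShell invocations in process-creation logs; the regex and event format are illustrative, not taken from any real product.

```python
import re

# Hypothetical detection distilled from a previous hunt: flag encoded
# PowerShell invocations seen in process-creation log lines.
DETECTION = re.compile(
    r"powershell(\.exe)?\s+.*-enc(odedcommand)?\s",
    re.IGNORECASE,
)

def scan_events(events):
    """Return the log events matching the pre-defined detection.

    This automates re-checking a *known* pattern. It does not, and cannot,
    replace the hunt that discovered the pattern in the first place.
    """
    return [event for event in events if DETECTION.search(event)]
```

Automation like this keeps past findings from slipping through the cracks, while the hunters move on to the next unknown.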