Why Behaviors Matter in Threat Hunting

Introduction

If you’ve ever engaged in the age-old sport of “people watching,” you’ll know that almost everyone has unique behaviors: the barista behind your local coffee bar who pulls on his beard when he is bored, the girl at one of the tables who likes to punctuate her sentences with “eh,” even the dog tied up outside that stays lying down until a bicycle goes by, at which point he barks and wags his tail. This kind of observation isn’t limited to people watchers, either: poker players look for so-called “tells” when other players bluff, psychologists use it to better understand their patients, and of course security staff (such as police, private detectives, and loss prevention teams) look for the suspicious behaviors criminals tend to exhibit. All of which is to say that, out in the wild, behaviors matter to a whole lot of people. That is what makes the field of cyber security so frustrating to look at.

In cyber security we often choose – consciously or not – to overlook behaviors entirely. Instead, we home in on concrete details like indicators of compromise and artifacts, leaving behaviors to serve as confirmatory evidence in an already well-established investigation rather than as a starting point for an investigation in their own right. And I think that is a missed opportunity for security teams everywhere.

Behaviors in Real-World Security

Let’s focus on the example of security staff and their use of behaviors in their day-to-day work. If those security personnel operated the way many cyber security programs operate, they would have a list of known perpetrators – too long to memorize, and full of code names that countless other companies use for the same people (but I digress…). They would also have a list of disparate details about those perpetrators: clothing, fingerprints, hair and eye color, favored brands, vehicle makes, models, and license plates, phone numbers, even their mothers’ maiden names. They would then have to interrogate everyone they meet to see if they matched any of those details, and they would ignore anyone who failed to match – even someone in the middle of committing a felony.

If you aren’t on the list, you’re good to go.

As you can see, that model has two major flaws. The first is that it assumes the perpetrator will answer honestly, continue to dress the way they did when committing their crime, and otherwise remain the same in enough dimensions to be identified. The second is that security staff are only ever looking for people who have fully committed a crime: if a security guard sees someone in a store stuffing clothing into a bag, but doesn’t see that person leave, the guard ignores them. Despite these obvious shortcomings, however, this is how a lot of cyber security teams operate.

Now, the rebuttal I often hear is that cyber security teams are so overwhelmed with data that acting as the security guard at the door, asking those questions, is simply a matter of efficiency. And to that I say, fair enough. After all, who hasn’t seen security guards posted by the doors of a store around Black Friday?

However, just because there are security guards by the doors doesn’t mean you don’t have:

  • people watching the CCTV cameras,
  • undercover staff wandering throughout the store,
  • store staff acting as informal informants,

And the list goes on. Those guards by the door serve a purpose, but it is a purpose layered on top of a broader threat detection strategy aimed at protecting the company from harm.

Incorporating Behaviors into a Cyber Security Strategy

The first step in incorporating behaviors into a threat detection strategy is to realize that while security staff focus on human behaviors, cyber security professionals probably shouldn’t – at least not exclusively. After all, there are a lot of human behaviors that, when viewed through system logs (and especially at scale), are hard to classify as either suspicious or innocuous. Is that fourth logged Event ID 4625 (a failed logon) someone trying to brute force a user’s password, or is Mary in accounting just having a bad case of the Mondays?
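To make that concrete, here is a minimal sketch (in Python, using hypothetical field names for parsed Windows Security log records) of the kind of naive failed-logon threshold rule that produces exactly this ambiguity.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical parsed Windows Security events: each record is a dict with
# an "event_id", a "target_user", and a "timestamp" (a datetime).
FAILED_LOGON = 4625            # "An account failed to log on"
THRESHOLD = 4                  # alert on the fourth failure...
WINDOW = timedelta(minutes=5)  # ...seen within a five-minute window


def naive_brute_force_alerts(events):
    """Yield (user, timestamp) whenever a user crosses the failure threshold.

    The rule only counts failures; it cannot tell an attacker guessing
    passwords apart from Mary mistyping hers on a Monday morning.
    """
    recent_failures = defaultdict(list)
    for event in sorted(events, key=lambda e: e["timestamp"]):
        if event["event_id"] != FAILED_LOGON:
            continue
        user, ts = event["target_user"], event["timestamp"]
        # Keep only the failures that fall inside the sliding window.
        recent_failures[user] = [t for t in recent_failures[user] if ts - t <= WINDOW]
        recent_failures[user].append(ts)
        if len(recent_failures[user]) >= THRESHOLD:
            yield user, ts
```

Every alert a rule like this raises still needs context that the log itself cannot supply.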

Is John in marketing trying to download a set of fonts from that file sharing website, or is an adversary trying to use it for tool ingress?

Without being able to pick up a phone and talk to Mary or John, building context around their behavior can be difficult or impossible. And even if you can talk to the users, that strategy can’t scale past a small start-up. What cyber security professionals can look at, however, is program behavior: the individual behaviors that applications and code on a system exhibit. These are behaviors that can be identified and interrogated at scale.

An Easy Example of Behavioral Security

One of the most common examples I like to use of this type of security is malicious document (‘maldoc’) phishing. A user receives a phishing email with a document containing malicious code or macros embedded within it. When the user opens the attachment (and, typically, enables macros), they trigger the code, which will often leverage something like the command prompt or PowerShell to execute additional actions.

Now, the traditional approach would be to gather a list of MD5 sums, known-bad IP addresses and domains, and maybe even malicious tool filenames or file paths, and try to match some element of the maldoc to one of those concrete details. More advanced methods might implement fuzzy logic, such as assessing file paths and names for the mix and ordering of consonants versus vowels. And, to be fair, we can say with a high degree of certainty that if you see an MD5 match, you have a true positive. However, as adversaries develop even basic operational security (opsec) practices, those practices neuter these detection strategies: recompiling tools so their hashes change, using bulletproof hosting to limit the usefulness of known-bad IP addresses and hostnames, and randomizing filenames and paths using dictionary words.
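For comparison, here is what that indicator-matching approach boils down to, as a minimal sketch in Python with placeholder indicator values and hypothetical inputs: hash the attachment, compare it and the observed network and file details against static lists, and stay silent on anything that doesn’t match.

```python
import hashlib

# Hypothetical, illustrative indicator lists; a real program would hold
# thousands of entries, all of which must be gathered, documented, and
# reviewed. The values below are placeholders, not real indicators.
KNOWN_BAD_MD5 = {"0123456789abcdef0123456789abcdef"}
KNOWN_BAD_DOMAINS = {"files.badhost.example"}
KNOWN_BAD_FILENAMES = {"invoice_update.doc"}


def matches_indicators(attachment_bytes, contacted_domains, dropped_filenames):
    """Return True only when some atomic indicator matches exactly.

    A recompiled payload, fresh hosting, or a randomized filename falls
    straight through this check, even mid-attack.
    """
    md5 = hashlib.md5(attachment_bytes).hexdigest()
    return (
        md5 in KNOWN_BAD_MD5
        or any(domain in KNOWN_BAD_DOMAINS for domain in contacted_domains)
        or any(name in KNOWN_BAD_FILENAMES for name in dropped_filenames)
    )
```

Notice that every input to that check (the hash, the hosting, the filename) is something the adversary controls and can trivially change.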

Approaching this from a behavioral perspective lets us attack the problem from a different angle. Here, we can identify the “behavior” being exhibited in the attack: Outlook.exe spawning Word or Excel, which in turn spawns suspicious child processes like cmd.exe, powershell.exe, rundll32.exe, or many others. By looking for this type of behavior going forward, we eliminate the need to keep hundreds, thousands, hundreds of thousands, or even millions of indicators which may never be observed but must still be gathered, analyzed, documented, and reviewed regularly. Instead, we cut this down to a more focused and manageable list of indicators that apply to our organization, and layer on a series of telltale behaviors that allow us to identify suspicious and malicious activity, even if no one else has observed it yet.
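Here is what that core behavior looks like as a detection, again as a minimal sketch in Python with hypothetical field names for process-creation telemetry (for example, parsed Sysmon Event ID 1 records or the equivalent from an EDR).

```python
# Hypothetical process-creation records: each record carries the new
# process image ("image") and its parent's image ("parent_image").
OFFICE_APPS = {"winword.exe", "excel.exe"}
SUSPICIOUS_CHILDREN = {"cmd.exe", "powershell.exe", "rundll32.exe",
                       "wscript.exe", "mshta.exe"}


def flag_maldoc_behavior(process_events):
    """Yield events where an Office application spawns a suspicious child.

    The rule describes a relationship ("Word spawned PowerShell"), not any
    particular hash, domain, or filename, so a recompiled payload or fresh
    infrastructure does not evade it.
    """
    for event in process_events:
        parent = event["parent_image"].lower().rsplit("\\", 1)[-1]
        child = event["image"].lower().rsplit("\\", 1)[-1]
        if parent in OFFICE_APPS and child in SUSPICIOUS_CHILDREN:
            yield event
```

A production version would likely also check for Outlook.exe as the grandparent and allow-list known automation, but the core logic stays this small.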

Getting Started with Behaviors in Security

If you are looking to move beyond atomic indicators and start building out your behavioral content, a natural first question is “where do I start?”

This is a fairly normal question, especially considering most threat intelligence sources don’t provide operationalized behavioral content in much of their public reporting. Thankfully, there is a growing community of behavioral threat hunting content providers beginning to offer exactly this. We offer access to the HUNTER platform free of charge for companies of all sizes that want to get started with behavioral threat detection! Click sign up and use the promo code “BEHAVIORS” to get your free account today!

Conclusion

As adversaries continue to improve their opsec practices, we as an industry must continually improve our ability to combat them. While this has often resulted in new and more capable tools – which is itself nothing to turn your nose up at – we should not overlook changing or adapting our practices. More efficient and resilient threat detection strategies, such as those rooted in program and user behaviors, should be top of mind: they allow for more effective and efficient use of resources while also hampering adversaries’ ability to hide.
