Why Artificial Intelligence Can’t Save Your SOC

With so many security operations centers (SOCs) struggling with analyst burnout and staffing shortages, it's only natural that cybersecurity leaders are looking for every avenue to ease the pain simply and affordably. That's why the siren call of artificial intelligence (AI) sounds so good to so many security buyers today.

Unfortunately, many security vendors today tout the miracle properties of AI-backed automation, claiming that it’s a lifesaver for understaffed SOCs. The common refrain is that AI is the answer to the cybersecurity skills gap, and the right AI engine can replace countless human analysts in the SOC.

Now, while we believe that AI and automation certainly have a limited role to play in streamlining certain tasks in the SOC, they aren't the cavalry most people are hoping for. Like many in our industry, we're optimistic that AI has a lot of long-term potential for cybersecurity. But we're also realists, and we know it is nowhere near ready to take over the vast majority of the tasks that SOC analysts must carry out to fight off threats today.

AI ISN’T READY TO REPLACE YOUR SOC ANALYSTS

The reason AI isn't the complete answer to SOC worries is twofold. First, today's AI and machine learning models aren't nearly as sophisticated as many cybersecurity marketers would lead you to believe. In many instances, they're just an iteration or two more advanced than the rules-based signature detection mechanisms that our industry long ago agreed were insufficient to keep up with the volume and variety of threats out there today.
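To make that point concrete, here is a minimal, hypothetical sketch (the indicators, features, weights and thresholds are invented for illustration) of how thin the line can be between a classic signature rule and a simple "machine learning" detector: the latter is often just a rule whose cutoff was fit to historical data.

```python
# Hypothetical illustration: a signature rule vs. a "learned" detector.
# The hashes, features, weights and thresholds below are invented for this example.

KNOWN_BAD_HASHES = {"44d88612fea8a8f36de82e1278abb02f"}  # classic signature list

def signature_detect(file_hash: str) -> bool:
    """Rules-based detection: flag only what exactly matches a known signature."""
    return file_hash in KNOWN_BAD_HASHES

def learned_detect(failed_logins: int, bytes_out_mb: float) -> bool:
    """'ML-backed' detection: in practice, often a weighted score with a
    threshold fit to yesterday's attacks, not tomorrow's."""
    score = 0.6 * (failed_logins / 10) + 0.4 * (bytes_out_mb / 500)
    return score > 0.5

# Both approaches miss anything they were never shown.
print(signature_detect("aabbccddeeff00112233445566778899"))  # False: new malware, no signature yet
print(learned_detect(failed_logins=3, bytes_out_mb=120))     # False: a low-and-slow attacker stays under the cutoff
```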

That leads us to the second point. Even if the perfect cybersecurity AI engine did exist, it would break down fairly quickly without human intervention to keep training that artificial brain on how attackers are constantly switching up their tactics, techniques and procedures (TTPs). The bad guys are pretty good at gaming automated systems to evade detection, and you'd better believe they're on to our "advanced" AI detection mechanisms.

This means that somebody has to generate custom threat content: somebody who knows the context in which adversaries operate, who has dug around to find the newest TTPs being employed today, and who can teach that AI engine how it should work. The fact is that whether a cyber detection "machine" is human-powered or AI-powered, it must be fueled with smart intelligence daily or it will stop running.
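As a rough sketch of what that daily fueling looks like in practice (the class, data and technique labels here are invented for illustration, not a real product workflow), even the simplest detector only knows about the techniques human analysts have already labeled for it:

```python
# Hypothetical sketch: a detector stays useful only if analysts keep feeding it
# freshly labeled examples of the latest TTPs. All data and logic are illustrative.

from collections import Counter

class TinyDetector:
    """Counts how often each observed technique appeared in confirmed attacks."""
    def __init__(self):
        self.malicious_counts = Counter()

    def retrain(self, analyst_labeled_events):
        """Daily feed of (technique_id, is_malicious) pairs curated by human analysts."""
        for technique_id, is_malicious in analyst_labeled_events:
            if is_malicious:
                self.malicious_counts[technique_id] += 1

    def detect(self, technique_id: str) -> bool:
        # The engine can only flag techniques humans have already taught it about.
        return self.malicious_counts[technique_id] > 0

detector = TinyDetector()
detector.retrain([("T1059.001", True), ("T1566.002", True)])  # yesterday's curated intel
print(detector.detect("T1059.001"))  # True: a technique analysts have labeled
print(detector.detect("T1027.009"))  # False: a newer TTP nobody has labeled yet
```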

The analogy we like to give is this: if an organization took the best threat hunter in the world and locked them in a room without access to outside information for the next year, they would no longer be the best threat hunter in the world by the end of that black-box experiment.

HUMAN-POWERED INTELLIGENCE IS YOUR BEST OPTION

The truth is that cutting-edge AI modeling today works best in enterprise scenarios where a machine learning system has far fewer variables to contend with. The machine learning applications excelling outside of security are in areas like speech recognition, where there aren't quite so many dynamic elements in play. A speech recognition system modeled for English may need to learn different dialects, slang or speech patterns, but it won't suddenly be called upon to parse algebraic equations because the rules of the game changed midstream. Yet that is exactly what an AI system in the SOC is asked to do, which is why we think the modern SOC will need to be fueled by plenty of human-powered intelligence for some time to come.

To learn more about why our vote is human over AI, read our blog, Threat Hunting and You: Why Content is Critical to Threat Hunting.
