The Human Element: Where Technology Has Its Limits

Well, we made it back from RSA Conference 2020 and had quite the time! This year’s theme was “The Human Element,” and what an ideal theme for current events. Although some key participants pulled out due to growing concerns over the COVID-19 outbreak, the conference was still a great success with a large turnout, which was perfect for our mobile lounge.

This conference gave us the opportunity to speak with others about the issues surrounding the technologies in our industry today. But technology was not the only topic of discussion; there is a human problem that needs to be solved as well, including the significant skill shortage in our industry. Amid the plethora of opportunities, there simply aren’t enough people able to take on advanced roles, such as threat hunting, and organizations often cannot find, afford, or retain those who have those skills. As a result, their analysts end up suffering from alert fatigue without the assistance of more seasoned analysts or engineers who can see through the noise and quickly identify true positives. Because of this, many companies are seizing the opportunity to market and sell technology that claims to overcome this skill shortage.


However, technology is not the be-all and end-all that sales teams would have you believe. In recent years, almost every industry has touted new technologies such as Artificial Intelligence and its many subsets, including Machine Learning, Neural Networks, Natural Language Processing, and more – all claiming that these advancements will change the way their customers operate for the better, limiting the need for resources and increasing productivity. The cyber security industry is no exception – companies spend huge sums of money on new products that claim to use [insert buzzword here] technology to stop threats, reduce the need for a dedicated team of security analysts, and mitigate the risk that we, as humans, bring to the workplace in general. This raises a question – can these advancements in technology fix or replace “the human element”?


While these technologies absolutely have their place in the cyber security industry, at some point the analytical process still requires a human analyst behind a keyboard. A great example of the benefit of this technology is the WannaCry ransomware outbreak of 2017. WannaCry was a virulent strain of ransomware, attributed to North Korea, that famously used the NSA’s “EternalBlue” SMB exploit to rapidly infect the multitude of vulnerable Windows machines – and it continued to do so for years on systems that had not been patched against the vulnerability.

There have been numerous stories of these new technologies detecting anomalous deviations in SMB network traffic and giving the responsible teams the heads-up necessary to thwart, or stop entirely, attacks occurring on that vector – whether WannaCry, other threat actors, or generic malware. However, while these technologies were able to detect the initial anomalous behavior, they ultimately served only to alert the human behind the keyboard, who still had to take further action; they were not capable of replacing the human element.
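To make the point concrete, here is a minimal sketch of the kind of statistical check such a tool might run under the hood. All names, numbers, and thresholds are illustrative assumptions, not any vendor's actual implementation – and note that the function's only output is an alert for a human to triage, not a remediation.

```python
from statistics import mean, stdev

def smb_anomaly_alert(baseline_counts, current_count, threshold=3.0):
    """Flag the current per-minute SMB connection count if it deviates
    more than `threshold` standard deviations from the baseline.
    Returns an alert dict for a human analyst, or None if unremarkable."""
    mu = mean(baseline_counts)
    sigma = stdev(baseline_counts) or 1e-9  # guard a perfectly flat baseline
    z = (current_count - mu) / sigma
    if z > threshold:
        # The tool only raises the flag; a human must still investigate.
        return {"signal": "anomalous SMB volume",
                "z_score": round(z, 1),
                "action": "escalate to analyst for triage"}
    return None

# A normal baseline of outbound SMB connections per minute...
baseline = [40, 42, 38, 41, 39, 40, 43, 37]
# ...versus a sudden worm-like spike.
print(smb_anomaly_alert(baseline, 400))  # raises an alert
print(smb_anomaly_alert(baseline, 41))   # within normal variation: None
```

Even this toy detector illustrates the gap: it can tell you that SMB traffic jumped tenfold, but it cannot tell you whether that jump is WannaCry propagating or a backup job kicking off early.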


Another purported benefit of these technologies is further mitigating the risk that we, as humans, bring to our organizations. Humans have long been regarded as the weakest link in any organization – and regardless of how many technologically advanced products, group policies, or other precautions you deploy, a user is often just a mouse click away from compromising your network. There are technologies that can detect and alert on deviations in expected user behavior, such as Carrol from the Finance Department using PowerShell – but how can the technology determine whether this activity is a true positive or a false positive? Ultimately, the human attack surface is too large and unpredictable to be completely mitigated by new technology at this time, and we must instead ensure that we are providing our user base with proper education on best practices and on how to identify and respond when a threat arises.
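The Carrol scenario above can be sketched in a few lines. This is a hypothetical per-user baseline check – the user names, process names, and baselines are invented for illustration – and, as with the SMB example, the most it can honestly report is "deviation," leaving the true-positive/false-positive call to an analyst.

```python
# Hypothetical learned baselines of processes each user normally runs.
baselines = {
    "carrol": {"excel.exe", "outlook.exe", "chrome.exe"},
    "devops_sam": {"powershell.exe", "ssh.exe", "code.exe"},
}

def check_process(user, process):
    """Compare an observed process launch against the user's baseline.
    The tool can only say 'deviation' or 'expected'; deciding whether a
    deviation is malicious still falls to a human analyst."""
    seen = baselines.get(user, set())
    if process not in seen:
        return {"user": user, "process": process,
                "verdict": "deviation - requires analyst review"}
    return {"user": user, "process": process, "verdict": "expected"}

print(check_process("carrol", "powershell.exe"))      # deviation
print(check_process("devops_sam", "powershell.exe"))  # expected
```

The same PowerShell launch yields opposite verdicts depending on the user's history – and neither verdict tells you whether Carrol is compromised or simply trying out a script from the help desk. That judgment is the human element.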


With all that being said, while many in the security community have heralded the sophistication of ML/AI-backed automation as a means to dig out of information overload and alert fatigue, even the most advanced ML/AI cannot currently overcome crafty adversaries who have every motive and means to evade it (read more about our views on this in our White Paper). Rather than automating away the responsibilities of an analyst, we at Cyborg Security believe in enabling the analyst to respond effectively and efficiently to threats as they occur. Cyborg specializes in the detection of adversarial toolsets and techniques. With decades of cumulative threat hunting, intelligence, and incident response experience, we have taken our knowledge of dealing with threat actors across Government, Defense, Technology, Healthcare, Media, and a myriad of other industries, and developed Cyborg C.O.R.E. This platform provides you with contextualized, enriched, and validated detection logic to detect even the stealthiest of adversaries. All of our content includes full playbooks, mitigation recommendations, correlative intelligence, and more, enabling your analysts to quickly and efficiently review, respond, and react to any anomalous events in your environment.

To learn more, please read here or contact us.


