It is now clear that the general cyber-threat has escalated beyond the capacity of a human-centric response. Artificial Intelligence may emerge as a solution to this cybersecurity problem, but the evidence to date is poor.
Not unlike every technology trend that preceded it, AI is rife with hype and stingy on substance. The definition of AI itself is so fungible that venture money pours into almost any deal that even resembles augmented, let alone artificial, intelligence, even when the application of the AI technology is vague, merely promised, or sort-of in there somewhere, hidden in secret algorithms that drive mysterious stuff like deep machine learning, expert systems, or predictive analytics.
Case in point: IBM’s Watson, a mega-star of consumer marketing even before its prodigious capabilities were applied to a business problem. Combined with the requisite Big Data component, we are assured that IBM has created an outstanding product that will revolutionize something or other, and soon.
One big commercial win was the demonstration of its ability to analyze and process 11 million H&R Block tax returns. A more recent development was its data-sharing arrangements with parts of the cybersecurity intelligence community, along with its publicized ability to ingest over 700 terabytes of data.
This of course creates the potential for AI in cybersecurity: advanced and deep machine learning software that automatically detects, diagnoses, and counters cyber breaches in a more informed and, yes, intelligent manner. Has it done this? No.
At last count, we face something like 200,000 cyberattacks per day, resulting in over 20,000 human hours spent chasing false positives. The unemployment rate for cybersecurity analysts is 0.0%, with 1.5 million positions vacant, and one of our board members tells me that no bright, up-and-coming software engineers want to work in the field anyway.
They see it as a graveyard for innovation: the bad guys continually prove that, within a day or two of discovering the latest defense, they can outwit the good guys, and a career in cybersecurity seems to offer mostly downside, since we appear incapable of prevailing in this war.
Newbie software engineers would rather work on problems that they can actually solve.
IBM has also developed a machine learning tool independent of Watson that detects phishing attacks 250% faster than conventional tactics, with false positives below 1%. That’s great, except that the best way to detect and prevent phishing attacks is simply to block the malicious IPs and domains. No breach. No false positive. No human intervention required.
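The blocking approach really is that simple. Here is a minimal sketch of a gateway-side check; note that the domain names, IP addresses, and the `should_block` helper are all hypothetical illustrations, not real threat data:

```python
# Hypothetical sketch: blocking known-bad domains and IPs before a
# phishing request ever reaches a user. Blocklist entries are invented.
from urllib.parse import urlparse

MALICIOUS_DOMAINS = {"evil-payroll-login.example", "bank-secure-update.example"}
MALICIOUS_IPS = {"203.0.113.7", "198.51.100.42"}  # TEST-NET addresses, illustrative

def should_block(url: str, resolved_ip: str) -> bool:
    """Return True if the URL's host or its resolved IP is on a blocklist."""
    host = urlparse(url).hostname or ""
    return host in MALICIOUS_DOMAINS or resolved_ip in MALICIOUS_IPS

# A blocked request never lands: no breach, no false positive to triage,
# and no analyst hours burned.
print(should_block("https://evil-payroll-login.example/reset", "192.0.2.1"))  # True
print(should_block("https://example.com/", "93.184.216.34"))                  # False
```

In practice the blocklists would be fed by threat intelligence and updated continuously, but the decision itself requires no machine learning at all.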
In spite of all the hyperbole around Watson and other IBM soft launches, there is one bright spot where actual applied AI is happening. A team at MIT has produced a set of algorithms that, with diminishing human supervision, have analyzed tons of log files and detected anomalous behavior previously detectable only by human security analysts, with 85% accuracy, potentially reducing the SOC and security-analyst workload fivefold.
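The MIT system itself is far more sophisticated, but the core idea, scoring features extracted from log files and flagging statistical outliers for an analyst to confirm, can be sketched in a few lines. The feature, values, and threshold below are invented for illustration:

```python
# Loose, hypothetical sketch of unsupervised anomaly detection on
# log-derived features. The analyst's confirmations would then feed a
# supervised model, which is where the "diminishing supervision" comes in.
from statistics import mean, stdev

def anomaly_scores(values):
    """Z-score each observation; a large |z| suggests anomalous behavior."""
    mu, sigma = mean(values), stdev(values)
    return [(v - mu) / sigma for v in values]

# Illustrative feature: logins per hour for one account, parsed from logs.
logins_per_hour = [4, 5, 3, 6, 4, 5, 97, 4]  # one obvious spike

scores = anomaly_scores(logins_per_hour)
flagged = [i for i, z in enumerate(scores) if abs(z) > 2]
print(flagged)  # → [6], the spike, queued for analyst review
```

Only the flagged events reach a human, which is exactly how a 5x workload reduction would materialize.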
This same technology has been licensed by one commercial company that seeks to bring it to market. It is the ONLY case of applied AI in the cybersecurity market that I know of.
Look around, and you will find it difficult to pinpoint which cybersecurity market players are actually integrating AI technology, or which specific applications they are targeting with it. The companies advertising themselves as cybersecurity AI players will also tell you that they can’t go very deep into how their technology works, or the ways in which it may be applied, because of concerns about their proprietary IP. I call that a load of shitzel.
But the venture firms call it a load of gold. Some of the big raises in 2016 went to players like Tanium, Cylance, and LogRhythm, with $295 million, $177 million, and $126 million, respectively. And I think that’s great. It would be nice, however, if for the sake of the harried IT manager they would simply call it what it is instead of what it isn’t.
AI is NOT automated detection algorithms, yet that is what most of the companies that have jumped into the space are selling. Algorithmic-enhanced detection is great and can be very helpful in reducing false positives and easing the burden on depleted SOC teams and security analysts, but it is not going to outsmart an aggressive opponent or solve the cybersecurity problem.
Bad guys are really good at figuring out workarounds. A year ago, malware didn’t change its form or appearance once it crossed the network edge; now it does, and many of the edge defenders can’t handle it. Even the ones that raised a ton of venture funding a year ago on exactly that promise cannot defend against polymorphism.
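Why polymorphism breaks signature-based defenses is easy to see with a toy example. The payload bytes and the XOR re-encoding below are made up; the point is that a hash-keyed signature catches the original sample but not a trivially mutated copy of the same malware:

```python
# Hypothetical sketch: hash-based signatures vs. a polymorphic copy.
import hashlib

def signature(payload: bytes) -> str:
    """A defender's 'signature' here is just the payload's SHA-256 hash."""
    return hashlib.sha256(payload).hexdigest()

original = b"\x90\x90MALICIOUS_PAYLOAD"      # invented sample
known_bad = {signature(original)}            # defender's signature database

# Same behavior, re-encoded per copy (e.g. XOR with a fresh key each time):
mutated = bytes(b ^ 0x5A for b in original)

print(signature(original) in known_bad)  # True  -- the known sample is caught
print(signature(mutated) in known_bad)   # False -- the polymorphic copy slips past
```

Each mutation yields a new hash, so the defender is always one signature behind, which is precisely the day-or-two lag the bad guys exploit.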
Instead of addressing the real problem in cybersecurity, these startups’ actual technology value lies in acquisition, and I believe that informs their strategy to the detriment of the InfoSec industry. They are ripe for acquisition by entrenched, conventional cybersecurity companies looking to gain AI expertise, and if Symantec, with its 64 acquisitions, is any example, that strategy is sound. But it does little or nothing to help the frustrated CISO or CIO.
It seems to me that if we spent 10% of the energy we spend on driverless, Internet-dependent cars on applying autonomous detection and eradication technology (AI) to the cybersecurity problem, we might actually make some progress here.
I don’t know about you, but since I just had to re-boot this sucker to complete this post, I think I will stick to power steering.