Artificial Intelligence and Machine Learning hold a lot of promise in security. They will help us address the problems around false positives and anomaly detection. There is a lot of hope, and vendors in that space are making a lot of promises. Microsoft invests in this technology as well, and I would say we are among the leaders in the field.
There is an interesting article on this problem, Teach Your AI Well: A Potential New Bottleneck for Cybersecurity, covering the upsides as well as the challenges.
Starting with the expectations, Microsoft’s Ann Johnson put it nicely:
“The goal is to reduce the number of humans required – since there aren’t nearly enough humans to do the work – and automate simple remediation, leaving humans to do more complex work,” she explains.
This is exactly where I currently see AI. The challenge is that most customers (if not all) simply do not have enough trained people to run security operations or incident response. Additionally, if security is not at the core of a company’s business, there will not be enough budget to finance a Security Operations Center. This is where AI and automation, mainly in the Cloud, can help. The Cloud plays a key role here, as we will not be able to build the necessary technology on-premises – for performance reasons as well as for financial reasons.
But obviously AI must be trained and needs to learn. Experience has shown what can happen if you train an AI with wrong or abusive data. Remember that one: Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day? There is a lot of work to be done to ensure that the defending AI does not learn the wrong things – actually a problem we know from the intelligence sector as well. If you are interested in how we do it, here you get some good insight: Protecting the protector: Hardening machine learning defenses against adversarial attacks.
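To make the training-data problem concrete, here is a minimal, purely illustrative sketch (not Microsoft’s actual technique): a toy nearest-centroid classifier that labels events "benign" or "malicious". If an attacker manages to flip the labels of just a few malicious samples in the training feed, the learned decision shifts and a suspicious event is suddenly classified as benign. All data points and labels are invented for the example.

```python
from statistics import mean

def centroid(points):
    # Mean of a list of 2D feature vectors
    xs, ys = zip(*points)
    return (mean(xs), mean(ys))

def train(samples):
    # samples: list of ((x, y), label); learn one centroid per label
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, point):
    # Assign the label whose centroid is closest (squared distance)
    def dist2(c):
        return (point[0] - c[0]) ** 2 + (point[1] - c[1]) ** 2
    return min(model, key=lambda label: dist2(model[label]))

# Clean training data: benign events cluster near (0,0), malicious near (5,5)
clean = [((0, 0), "benign"), ((1, 0), "benign"),
         ((0, 1), "benign"), ((1, 1), "benign"),
         ((5, 5), "malicious"), ((6, 5), "malicious"),
         ((5, 6), "malicious"), ((6, 6), "malicious")]

# Poisoned feed: an attacker relabels two malicious samples as benign
poisoned = [(p, "benign" if p in [(5, 5), (6, 5)] else l) for p, l in clean]

suspicious = (4, 3)
print(predict(train(clean), suspicious))     # -> malicious
print(predict(train(poisoned), suspicious))  # -> benign
```

Two flipped labels out of eight are enough to drag the "benign" centroid toward the malicious cluster, so the same suspicious event is now misclassified – which is exactly why curating and hardening the training pipeline matters as much as the model itself.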
In other words, we need professionals to build these AI implementations, and they need to understand security as well. As Ann states:
It’s not just a question of throwing bodies at the problem — they need to be the right bodies, notes Microsoft’s Johnson. “We have learned that volume isn’t the key in training,” she says. “Diversity is the key in type, geography, and other aspects. That’s important because you have to have non-bias in training.”
Bias can include things like making assumptions about gender, social network profiles, or other behavioral markers, and items like looking for a specific nation-state actor and missing actors who are from other areas, she explains.
Even with the difficulty in training AI and ML engines, though, machine intelligence is increasingly becoming a feature in security products. “We are incrementally building the presence of AI in security,” Johnson says. “It’s not flipping a switch.”
Yes, it seems that we are just trying to remove the bottleneck in security by adding a bottleneck in AI. Even if this is true, AI will scale. We see that in Azure Security Center: the targeted information customers get from this technology helps them focus. We did the groundwork (and there is a lot more to come), and customers can leverage it to focus their limited staff on the work they need to do in-house or with a managed security partner.
I guess the promise lies in the scale we can achieve and in the adaptability:
But machine learning is well suited to those dynamic environments. “I think that’s going to continue to be true that machine learning allows us, as defenders, to adapt much more quickly, in real time, to threats that are constantly changing,” he says.
Definitely an interesting space to watch!