Bloomberg
Last year, Microsoft Corp's Azure security team detected suspicious activity in the cloud computing usage of a large retailer: One of the company's administrators, who usually logs on from New York, was trying to gain entry from Romania. And no, the admin wasn't on vacation. A hacker had broken in.
Microsoft quickly alerted its customer, and the attack was foiled before the intruder got too far.
Chalk one up to a new generation of artificially intelligent software that adapts to hackers' constantly evolving tactics. Microsoft, Alphabet Inc.'s Google, Amazon.com Inc. and various startups are moving away from solely using older "rules-based" technology designed to respond to specific kinds of intrusion and deploying machine-learning algorithms that crunch massive amounts of data on logins, behaviour and previous attacks to ferret out and stop hackers.
"Machine learning is a very powerful technique for security: it's dynamic, while rules-based systems are very rigid," says Dawn Song, a professor at the University of California at Berkeley's Artificial Intelligence Research Lab. "It's a very manual intensive process to change them, whereas machine learning is automated, dynamic and you can retrain it easily."
Hackers are themselves famously adaptable, of course, so they too could harness machine learning to create fresh mischief and overwhelm the new defenses. For example, they could figure out how companies train their systems and use the data to evade or corrupt the algorithms. The big cloud services companies are painfully aware that the foe is a moving target but argue that the new technology will help tilt the balance in favour of the good guys.
"We will see an improved ability to identify threats earlier in the attack cycle and thereby reduce the total amount of damage and more quickly restore systems to a desirable state," says Amazon Chief Information Security Officer Stephen Schmidt.
Before machine learning, security teams used blunter instruments. For example, if someone based at headquarters tried to log in from an unfamiliar locale, they were barred entry.
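A rigid check of that kind can be sketched in a few lines. This is purely an illustration of the "rules-based" approach described above, not any vendor's actual system; the allow-list and function names are hypothetical.

```python
# Hypothetical, illustrative rule: one admin's approved login locales.
# Changing policy means a human editing this set by hand.
ALLOWED_LOCATIONS = {"New York"}

def rules_based_check(login_location: str) -> bool:
    """Bar entry whenever a login comes from an unfamiliar locale."""
    return login_location in ALLOWED_LOCATIONS

assert rules_based_check("New York")       # familiar locale: allowed
assert not rules_based_check("Romania")    # unfamiliar locale: barred
```

The rigidity is the point: the rule never adapts on its own, which is exactly the "very manual intensive process" Song describes.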
To do a better job of figuring out who is legit and who isn't, Microsoft technology learns from the data of each company using it, customizing security to that client's typical online behavior and history. Since rolling out the service, the company has managed to bring down the false positive rate to 0.001 percent.
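The learning-based alternative can be sketched as a toy model that builds a per-customer baseline from login history and flags departures from it. This is a simplified illustration under assumed names (`LoginAnomalyModel`, the frequency threshold), not Microsoft's actual technique.

```python
from collections import Counter

class LoginAnomalyModel:
    """Toy behavioral model: learns how often each location appears in a
    customer's login history and flags locations seen rarely or never."""

    def __init__(self, threshold: float = 0.05):
        self.threshold = threshold  # flag locales below 5% of history
        self.counts: Counter = Counter()
        self.total = 0

    def train(self, login_locations):
        # Retraining is just feeding in more history -- no hand-edited rules.
        for loc in login_locations:
            self.counts[loc] += 1
            self.total += 1

    def is_suspicious(self, location: str) -> bool:
        # Frequency of this location in history; a never-seen locale is 0.0.
        freq = self.counts[location] / self.total if self.total else 0.0
        return freq < self.threshold

model = LoginAnomalyModel()
model.train(["New York"] * 99 + ["Boston"])
assert not model.is_suspicious("New York")   # normal behavior
assert model.is_suspicious("Romania")        # never-seen locale is flagged
```

Unlike the hand-written rule, the baseline here is derived from data and shifts automatically as the customer's real behavior shifts, which is the dynamism Song contrasts with rules-based systems.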