It’s a near-certainty that your organization’s computer system has been or will be broken into. And even if you’re security-conscious and vigilant, you may be to blame.
The overwhelming majority of hacks — including the huge Sony attack of 2014, the recent intrusions into the U.S. Democratic National Committee’s network and, probably, the $81 million Bangladesh Bank heist that pointed to security flaws in the SWIFT bank transfer network — have one thing in common: They started with phishing, an old and tired “social engineering” technique. A link is sent out by e-mail, on a social network or in a messaging app; someone opens it, and the hackers gain access to the network by infecting it with malware.
Organizations have been trying to train employees to minimize the risk. Last December, for example, the Berlin police sent a test phishing e-mail asking 466 officers to enter their passwords into a “secure password storage of the Berlin police.” More than 250 of the recipients clicked the link, and 35 of them provided their credentials.
The almost universal awareness of phishing, however, doesn’t make the scam any less effective. Zinaida Benenson of the University of Erlangen-Nuremberg in Germany reported on an experiment proving just that at last month’s Black Hat cybersecurity conference in Las Vegas.
Benenson wrote a phishing message:

Hey <receiver’s first name>,
here are the pictures from the last week:
http://<IP address of our server>/photocloud/page.php?h=<individual hashtag>
Please do not share them with people who have not been there :-)
See you next time!
<first name of the sender>
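As an illustration of how such a study (or an internal awareness drill like the Berlin police test) might generate personalized messages with per-recipient tracking links, here is a minimal sketch; the server address, function names and recipient IDs are hypothetical, not details from Benenson’s setup.

```python
import hashlib
import secrets

# Hypothetical test server; the study's template shows only a placeholder IP address.
SERVER = "http://192.0.2.1/photocloud/page.php"

def tracking_link(recipient_id: str) -> str:
    """Build a unique, unguessable link for one recipient.

    In a real study, the token-to-recipient mapping would be stored
    so that individual clicks can be counted afterward.
    """
    token = hashlib.sha256((recipient_id + secrets.token_hex(8)).encode()).hexdigest()[:16]
    return f"{SERVER}?h={token}"

def phishing_test_message(first_name: str, sender_name: str, recipient_id: str) -> str:
    """Render the study's message template for one recipient."""
    return (
        f"Hey {first_name},\n"
        "here are the pictures from the last week:\n"
        f"{tracking_link(recipient_id)}\n"
        "Please do not share them with people who have not been there :-)\n"
        "See you next time!\n"
        f"{sender_name}"
    )

print(phishing_test_message("Anna", "Ben", "participant-0042"))
```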
She sent it to university students in 240 Facebook messages and 158 e-mails; 56 percent of the e-mail recipients and 38 percent of those on Facebook clicked the link, but when Benenson got back to them, only 17 percent admitted clicking. Benenson then designed a second study to collect more survey responses, sending a similar message that omitted the students’ first names to 975 e-mail and 280 Facebook recipients. This time, 20 percent of those who got the e-mail and 42 percent of those who received the Facebook message clicked; without the personal greeting, the e-mail click rate collapsed. Phishers know that personalization works.
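For a rough sense of the absolute numbers behind those rates (my back-of-the-envelope arithmetic, not figures reported in the paper):

```python
# Approximate click counts implied by the reported sample sizes and rates.
studies = {
    "with first names":    {"e-mail": (158, 0.56), "Facebook": (240, 0.38)},
    "without first names": {"e-mail": (975, 0.20), "Facebook": (280, 0.42)},
}
for label, channels in studies.items():
    for channel, (n, rate) in channels.items():
        print(f"{label:>19} | {channel:8}: ~{round(n * rate)} of {n} clicked")
```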
Once again, many of those who clicked wouldn’t admit it, but this time Benenson had enough answers. More than 80 percent of her respondents said they knew that clicking on a random link like this could have dangerous consequences — but that knowledge turned out to be uncorrelated with actual behavior. Of those who clicked the link, 40 percent said they acted out of curiosity, even though they hadn’t attended a party the previous week: They just wanted a peek at some private and supposedly funny pictures. Another 20 percent opened the message to find out more about the sender.

Of those who didn’t click, half held back because the sender’s name was unfamiliar. That’s the correct reaction, of course — but it’s not a safeguard, because if the name had looked familiar to more people, they might have clicked, too. Just 5 percent of the non-clickers explained that they thought they’d received a private message by mistake and didn’t open the link to protect the sender’s privacy — or simply because they weren’t interested.
“That is, these people were protected from the would-be threat by their decency or lack of curiosity,” Benenson wrote in her as-yet-unpublished paper describing the experiments. She added:
By a careful design and timing of the message, it should be possible to make virtually any person to click on a link, as any person will be curious about something, or interested in some topic, or find themselves in a life situation that fits the message content and context. Expecting from the users error-free decision making under these circumstances seems to be highly unrealistic, even if they are provided with effective awareness training.
It’s easy to become pessimistic about cybersecurity in the face of such behavior by advanced internet users who are well aware of the threat. Ordinary users, because they are curious or easily distracted, appear to be the most vulnerable element in any computer system, and the one that cannot be fixed. As Benenson wrote, “human traits such as curiosity will remain exploitable forever, as humans (hopefully) cannot be patched against these exploits.”
Benenson, however, doesn’t want her work to contribute to anti-technology sentiment. “Computers are for users,” she told me. “If we do not have users, we do not need computers. Users should be protected from attacks by secure computer systems. They should not be required to be permanently alert of possible attacks.”
The German researcher is certain that a smart attacker can almost always fool a security expert as well as an ordinary user, so it’s important to have an attack detection and recovery plan:
It almost makes more sense to think about mitigating the effects of an attack than to concentrate on preventing it.
I don’t fully agree: Users do have to be constantly on the lookout. If a home system is broken into, there’s usually no way to detect the attack until it’s too late and your computer is already part of a botnet, held for ransom or being used to make fraudulent financial transactions.
As a general rule, we shouldn’t click on links in any messages that we didn’t expect or whose source we haven’t verified. It doesn’t take long to ask a friend or colleague: “Did you really want me to open this link?” Benenson is right, though: Especially in a big organization, someone will always slip up — and then probably refuse to admit it. So for security professionals, detecting attacks quickly, before they do too much damage, is a priority.
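One small piece of that verification can be automated. Below is a hedged sketch of a heuristic that flags links shaped like the one in Benenson’s message (a raw IP address instead of a domain name), plus two other common red flags; the checks are my illustrative assumptions, not a vetted phishing filter.

```python
import ipaddress
from urllib.parse import urlparse

def suspicion_reasons(url: str) -> list[str]:
    """Return human-readable reasons a link deserves a second look before clicking."""
    reasons = []
    parsed = urlparse(url)
    host = parsed.hostname or ""
    try:
        ipaddress.ip_address(host)  # raises ValueError for normal domain names
        reasons.append("host is a raw IP address, not a domain name")
    except ValueError:
        pass
    if parsed.scheme == "http":
        reasons.append("plain http, so the connection is unencrypted")
    if "@" in parsed.netloc:
        reasons.append("text before '@' masks the real host")
    return reasons

# A link shaped like the one in the study's template:
print(suspicion_reasons("http://192.0.2.1/photocloud/page.php?h=a1b2c3"))
```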
—Bloomberg
Leonid Bershidsky is a Bloomberg View columnist. He was the founding editor of the Russian business daily Vedomosti and founded the opinion website Slon.ru.