Human error. It’s one of the things that make life…interesting. Just ask the residents of Hawaii, who were alerted to an inbound ballistic missile. Thankfully, it was a false alarm, caused when someone who thought he was working in a test system launched an alert in the real system instead. Oops!
Most human errors aren’t nearly that dramatic—it’s rare that someone mistakenly (or deliberately) scares the entire population of a U.S. state half to death. But, as many businesses and other organizations have learned the hard way, even small human errors can have major consequences in the realm of data and network security.
Humans: The Weak Link
People continue to be the weak link in the data security chain. Whether it’s a distracted IT technician setting up a piece of network equipment and forgetting to change the default admin password, or a naïve non-technical user opening a malware-laden attachment in a phishing email, human mistakes are what hackers count on to get their foot in the door. Throw in other human failings, such as sabotage, revenge, and general malfeasance, and you have an environment that gives data security personnel fits.
At the same time, whether the security team likes it or not, humans—the users who depend on computers to do their jobs—are also often the last line of defense against cyberattacks. Specifically, it’s their ability to recognize cyberthreats and deal with them promptly and appropriately that can mean the difference between just another day at the office and a day (or more) spent retrieving, repairing, and restoring compromised data and systems, all while losing productivity, revenue, trust, and customers.
The trouble is, humans are, well, human. They have good days and bad days. They may lie, cheat, and steal. They get their feelings hurt. They can be jealous or vindictive or greedy.
In short, they may not always be particularly interested in protecting their employers’ assets.
Preventing Human Security Errors
How can we bring the humans in the data-security loop to a state where they can be relied upon to act appropriately?
It’s a question that has befuddled data-security experts for many years. Technology goes only so far, and even new approaches such as artificial intelligence can’t catch every instance of phishing and other social-engineering attacks.
Because technology can never be 100% reliable, humans will be in the loop somewhere. You can’t ignore them or wish them away or hope that they are never confronted with a phishing attempt. You have to deal with them directly and make them, as much as possible, part of the solution instead of part of the problem.
Ideas on how to do this include:
- Train them: Train new hires on Day 1 and at least annually thereafter. Training should cover the different types of attacks that users are likely to encounter—phishing and other social-engineering attacks in particular—how to recognize them, and what to do about them.
- Enable them: One of the most important ways that users can help with data security is to report suspected attacks, including phishing in all its forms. Make it easy for them to do so. There are add-ons to both native and web-based email clients that add a simple “Report Phishing” button to the interface; all users have to do is click the button to report a security issue—no need to memorize or look up the security team’s email address.
- Motivate them: In addition to making it as easy as possible to report security issues, you’ll get better results if the users have some form of motivation, some kind of reward and recognition for being active participants in protecting the company’s data.
- Test them: Training and motivation aren’t enough; users need to practice what they’ve learned in a real-life setting. Throw a fake phishing message at them from time to time and reward the ones who successfully recognize and report it.
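The test-and-reward loop described above can be sketched as a small tracking routine. The class and all the names in it are hypothetical—this isn’t any real phishing-simulation product’s API—but it illustrates the basic bookkeeping: who reported the fake phish (and earns recognition) versus who clicked it (and may need a refresher).

```python
# Hypothetical sketch: tracking responses to one simulated phishing email.
from dataclasses import dataclass, field


@dataclass
class PhishingCampaign:
    """Records user responses to a single simulated phishing message."""
    recipients: set            # everyone who received the test message
    reported: set = field(default_factory=set)  # clicked "Report Phishing"
    clicked: set = field(default_factory=set)   # fell for the lure

    def record_report(self, user):
        """A recipient correctly reported the message as phishing."""
        if user in self.recipients:
            self.reported.add(user)

    def record_click(self, user):
        """A recipient clicked the lure link in the message."""
        if user in self.recipients:
            self.clicked.add(user)

    def report_rate(self):
        """Fraction of recipients who correctly reported the test."""
        return len(self.reported) / len(self.recipients)

    def needs_refresher(self):
        """Users who clicked the lure and never reported it."""
        return self.clicked - self.reported


# Example: a four-person test run
campaign = PhishingCampaign(recipients={"ana", "bo", "cy", "di"})
campaign.record_report("ana")
campaign.record_report("bo")
campaign.record_click("cy")

print(campaign.report_rate())      # 0.5
print(campaign.needs_refresher())  # {'cy'}
```

The point of tracking both sets separately is that they answer different questions: the report rate measures how well the "enable" and "motivate" steps are working, while the click-but-no-report set tells you exactly who to retrain.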
Just as no amount of technology will filter out every phishing message, no amount of training will drive human error out of everyone. But with these approaches, an organization can minimize its human-caused risk.