This article was originally published on the Forbes Technology Council.
We’ve arrived at the point in the story where Dr. Seuss is about to reveal the moral. You know, that familiar parable where we all latch on to the idea of “zero trust” and start applying it everywhere, then all of a sudden we no longer trust anyone or anything and everyone is isolated within their own lonely bubble—then the story breaks with a cliffhanger and we’re left to consider the meaning of it all as our tongues unwind.
Over the last 10 years, we’ve watched a ridiculous perversion of the ideals of zero trust unfold. Starting from the very real and meaningful lessons learned from the Operation Aurora attacks at Google, the cybersecurity machine has charged relentlessly down the path of zero trust. At the start, there were some real improvements: don’t extend trust based on network locality; use a strong source of authentication; validate the context of each request; authenticate every request; authorize every request. Essentially, don’t take shortcuts when it comes to authentication and authorization. Good stuff, and it makes sense. Frankly, a real improvement for all of us. Gone were the days of wondering why we couldn’t access systems across the internet. The authentication and authorization steps would be robust enough to handle a request from a coffee shop and a request from the terminal in the data center—and not assume one is more trusted than the other because of its origin.
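The discipline described above boils down to a simple shape: every request is authenticated and authorized on its own merits, and its network origin buys it nothing. Here is a minimal sketch of that shape; the names (`Request`, `authenticate`, `authorize`, the token and policy tables) are hypothetical illustrations for this article, not any particular product’s API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    token: Optional[str]  # credential presented with this request
    source_ip: str        # recorded, but never used to grant trust
    resource: str
    action: str

# Toy credential store and policy table, for illustration only.
VALID_TOKENS = {"tok-alice": "alice"}
POLICY = {("alice", "mail", "read")}

def authenticate(req: Request) -> Optional[str]:
    """Verify the credential on every single request."""
    return VALID_TOKENS.get(req.token)

def authorize(user: str, req: Request) -> bool:
    """Check an explicit policy for this user, resource, and action."""
    return (user, req.resource, req.action) in POLICY

def handle(req: Request) -> str:
    # req.source_ip is deliberately ignored: the request from the
    # data-center terminal gets the same scrutiny as the one from
    # the coffee shop.
    user = authenticate(req)
    if user is None:
        return "401 Unauthorized"
    if not authorize(user, req):
        return "403 Forbidden"
    return "200 OK"
```

The point of the sketch is what’s absent: there is no branch that says “internal network, skip the checks.”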
But then, things changed—the zero trust message was selling and the audience was ripe to receive. I’m not one to overly generalize the security community, but if I were, the word “cynical” might come to mind. The message of zero trust became an excuse for a generally distrustful disposition. Quickly, the narrative became all-inclusive:
Can I integrate an open-source package into our product? No! Zero trust!
Can I use the hotel wifi? No! Zero trust!
Can I use this mail plugin? No! Zero trust!
Can I access corporate mail without a VPN? No! Zero trust! (See what happened there?)
The other day I saw a security practitioner’s comment on LinkedIn: “When we started talking about Zero Trust, the premise was that the human as a control was no more to be ‘trusted’ than your antivirus program or that web content filter.”
That’s not the definition or origin story of zero trust you’ll find in any trusted reference source, but it is a great example of how easily the phrase is used and co-opted.
Using zero trust as a guiding philosophy for authentication is a great strategy, but applying it broadly to employees is a big mistake. It is completely fair (and very necessary!) to design a program where an employee is not solely responsible for the security of your organization, but it would be a huge mistake to design a program where an employee cannot add to the security of your organization. When employees feel their choices have been constrained, or that they are being controlled (even for benevolent reasons), they start to push back—see the psychological principle of “reactance.” Many cybersecurity controls already cross this line (“this website has been blocked by your IT administrator”) and overextending the misappropriated idea of zero trust exacerbates the problem.
The ideal we should be aiming for is a scenario where we can engage with employees to improve security outcomes. Organizations like Yahoo have practiced this well, establishing security programs that effectively engage employees to improve the organization’s cybersecurity posture. The driving philosophy behind this approach is to avoid presenting employees with “impossible questions” (Does this website have malware?) and instead focus on meaningful incremental improvements (Can you store this password in our corporate password management system?).
At any organization, humans are our most valuable resource. Even when we’re young, we can do things computers simply cannot. Security leaders would be wise to remember that fact. Rather than wrongly apply the concepts of zero trust to our employees and become the moral of the Dr. Seuss story, it’s time for us to trust them to be the upside in our cybersecurity programs.