Protection via Deception: How Honeypots Confuse – and Defeat – Hackers

To tweak a traditional saying, you can do a better job catching flies with honey than vinegar.

In this case, I’m talking about the “honeypot,” set up to catch the hacker “fly.” To summarize and elaborate upon various definitions over the years, honeypots are computer systems that lure attackers by simulating real systems within a network. The systems appear authentic, but the data and other cyber resources within hold little to no actual value to the organization behind the effort. In many instances, the honeypot’s contents are entirely fabricated: email accounts, open ports, even an “active” website. In others, the data is authentic but relatively inconsequential – information the organization is willing to “give up” to the adversary.
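To make the idea concrete, here is a minimal sketch of a low-interaction honeypot: a listener on an otherwise-unused port that records every connection attempt. The port number, function name, and log format are all illustrative assumptions, not drawn from any particular product.

```python
import datetime
import socket

HONEYPOT_PORT = 2222  # hypothetical decoy port; nothing legitimate runs here

def run_honeypot(port: int, max_events: int) -> list[str]:
    """Accept connections on a decoy port and record each one.

    No legitimate traffic should ever arrive here, so every recorded
    entry is a candidate security alert.
    """
    events = []
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        for _ in range(max_events):
            conn, addr = srv.accept()
            with conn:
                stamp = datetime.datetime.now(datetime.timezone.utc).isoformat()
                events.append(f"{stamp} connection from {addr[0]}:{addr[1]}")
    return events
```

A real deployment would of course forward these events to a SIEM rather than return them from a function, but the core property is visible even in this sketch: any touch of the decoy is signal, not noise.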

Various incarnations of the concept have been around since the mid-1980s. In 1998, cybersecurity legend Fred Cohen – who, among his many achievements, first defined the term “computer virus” – released the first publicly available honeypot, the Deception ToolKit (DTK). “If (DTK) increases uncertainty for the attackers, it can do a lot to reduce attacks – both inside and outside,” Cohen wrote in an FAQ about his toolkit. “DTK does not have to fool all the people all the time. It is doing great if it fools some of the people some of the time and scares some of the people some of the time.” (Not surprisingly, Cohen is fond of the Sun Tzu quote, “All warfare is based on deception.”)

Despite the concept’s established history and fascinating capacity for discovery, enterprises have been slow to adopt honeypots. According to a SANS survey, fewer than one-quarter deploy them in their networks today. Yet, of those that do, 45 percent triggered at least 10 honeypot-enabled events within the last year, and 38 percent did so more than 15 times during the same period. And there is growing promise for increased deployments: the global deception technology market is expected to grow from $1.04 billion in 2016 to $2.09 billion by 2021, according to a forecast from MarketsandMarkets.

For the projected spike in adoption to take hold, chief information security officers (CISOs) and their teams will need to overcome the following challenges:

  • Scale. Creating an illusion that overwhelms attackers with numbers may require fabricating 100 or more hosts – all of which the cybersecurity team must manage. That is a significant ongoing commitment.
  • Authenticity. Numbers alone aren’t enough; the honeypot needs to look and “feel” real to attackers, or they won’t take the bait. Cybersecurity professionals must build this realism manually, as no automated solutions currently exist to do it for them. A passive honeypot – such as an open port – doesn’t demand extensive effort, but a more ambitious invention – such as a fully functioning (yet fake) website with working apps – demands considerable planning, delivery, and oversight. Everything must also be updated periodically, again in the interest of realism and relevancy. Combined with the scale challenge above, this manual work can consume a significant amount of time and effort.
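One reason the scale challenge is tractable at all is that many decoy ports can be multiplexed from a single process rather than one machine per decoy. The sketch below, using Python’s standard `selectors` module, is an illustrative assumption about how such a decoy surface might be wired up; real deployments would also need believable services behind each port, which is where the manual authenticity effort comes in.

```python
import selectors
import socket

def open_decoys(ports):
    """Bind a non-blocking listener on every given port and register each
    one with a selector, so a single loop can watch the whole decoy surface."""
    sel = selectors.DefaultSelector()
    for port in ports:
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen()
        srv.setblocking(False)
        sel.register(srv, selectors.EVENT_READ, data=port)
    return sel

def wait_for_touch(sel, timeout=5.0):
    """Block until any decoy port is touched; return that port number,
    or None if the timeout expires without contact."""
    for key, _ in sel.select(timeout=timeout):
        conn, _addr = key.fileobj.accept()
        conn.close()
        return key.data
    return None
```

The port range passed to `open_decoys` is arbitrary here; the point is that one event loop can watch dozens or hundreds of fabricated “doors” at once.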

Once these challenges are addressed, organizations can benefit on multiple levels:

  • Threat Trend Analysis. CISOs and their teams get a front-row seat to what hackers are after and the tools and methods they use – deploying open source or customized products, exploiting publicly disclosed vulnerabilities or unknown ones, and so on. As a result, honeypots give teams the opportunity to learn about intrusion techniques and the potential vulnerabilities of their own production systems.
  • Elimination of False Positives. There is no legitimate reason to enter a honeypot – anyone who interacts with one is a suspect. Honeypots therefore trigger fewer alerts, but those alerts are of much higher quality. That makes them especially effective at identifying insider threats: if employees snoop outside their authorized areas and manage to access a honeypot, it’s fair to conclude they are up to no good.
  • Depletion of the Adversary’s Resources – and Patience. If the scale challenge is successfully addressed, organizations “invent” dozens or even hundreds of false doors for hackers to break into. Attackers are forced to expend more of their capabilities, all while showing more of their hand. Hackers are traditionally “Davids” attempting to take down large corporate “Goliaths.” Make it difficult enough for enemies to reach their intended targets, and many will give up and move on to victims that offer an easier path.
  • Confusing the Adversary. As Sun Tzu stated, warfare is about deception. The deception results in calculated confusion, designed to degrade the accuracy and effectiveness of an attack. Hackers can’t tell the difference between what’s real and fake because everything looks like a system they’re trying to access. To further muddle the picture, cybersecurity teams may, for example, “disguise” a Windows server as a Linux one, so the threat actors use the wrong tools, leading them to conclude (for the moment) that their job is done. In his FAQ, Cohen described this as “(attacking) the weakest link of the attackers. Their fears of being caught, their uncertainty about whether they are being detected and watched, and the bursting of their egos when they find out they have been fooled.”
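The OS-disguise tactic described above can be sketched very simply: a host answers first contact with service banners that suggest a different operating system than the one it actually runs. The banner strings and port mapping below are purely illustrative assumptions, not drawn from any real deployment.

```python
# Deceptive first-contact banners keyed by port. A Windows decoy serving
# these strings will look like a Linux box to scanners that fingerprint
# hosts by banner. Version strings here are illustrative, not real targets.
FAKE_BANNERS = {
    22: b"SSH-2.0-OpenSSH_8.9p1 Ubuntu-3ubuntu0.1\r\n",
    21: b"220 ProFTPD Server (Debian) ready.\r\n",
}

def banner_for(port: int) -> bytes:
    """Return the deceptive banner to send on first contact with a port,
    or an empty byte string for ports with no disguise configured."""
    return FAKE_BANNERS.get(port, b"")
```

An attacker who trusts these banners selects Linux-oriented tools, burns exploits that cannot work against the real system, and may walk away believing the job is done – exactly the “calculated confusion” the text describes.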

At LookingGlass, we’re intrigued by deception technologies, and are immersing ourselves into developing next-generation solutions to camouflage and defend our customers’ real systems and cyber assets while exposing, confusing, and exhausting the capabilities of their adversaries. If you’d like to know more about how we are leveraging deception capabilities, contact us.