Enterprises battling cyber threats to their own organizations and their third parties continuously invest in approaches that may give them an advantage over their adversaries. This blog highlights deception-based network defense as one such approach gaining interest. Let’s start by defining deception.

Deception is an act or statement which misleads, hides the truth, or promotes a belief, concept, or idea that is not true. It is often done for personal gain or advantage. 

The challenge with security systems leveraging deception is how those systems can effectively use it to stay ahead of the adversary. Adversaries armed with intelligence on your organization and a set of playbooks and tools will use that intelligence (both human and machine) to execute their goals. This often leaves organizations playing defense without knowing why, how, or what the adversary might do next.

Can we change this dynamic?

This blog focuses on an approach in which security organizations can leverage Threat Intelligence with deception to improve their effectiveness while responding to attacks.

Step 1: Threat Intelligence Identifying Behavior

Before you consider leveraging threat intelligence in a deception-based cybersecurity strategy, it is important to understand how relevant the intelligence is to your organization’s security posture and risk profile.

Make sure you invest in or leverage a Threat Intelligence Program.

To learn more about either building or qualifying vendors providing intelligence check out this blog: 5 Insights to Successful Threat Intelligence Programs.

Threat Intelligence can drive deception provided it captures several important characteristics. Here are some important questions to initiate improvements in the intelligence being used.

  • Does the intelligence identify actor characteristics that would provide early detection and allow dynamic deception to be applied?
  • Does the intelligence capture behavioral context, not just for detection but as the adversary advances through a kill chain?
    1. For example, does the intelligence capture adversarial playbook information such as MITRE ATT&CK provides?
    2. Does it highlight the various kill-chain phases so they can be mapped to the different defensive tools you have for each phase?
  • Does the intelligence have sufficient detail?
    1. Within that behavioral context, does the intelligence identify specific tools, infrastructure, software, and malware, and how that software is used?
    2. In particular, does it identify the specific responses that inform the response playbook(s) you may construct or leverage to deceive or delay the adversary?
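As a concrete illustration of the checklist above, the sketch below models a hypothetical intelligence record and a simple check for whether it carries enough behavioral context to drive deception. The field names and record structure are assumptions for illustration; real feeds vary by vendor and format (e.g. STIX).

```python
from dataclasses import dataclass, field

@dataclass
class IntelRecord:
    # Hypothetical fields for illustration; real intelligence feeds differ.
    actor: str
    attck_techniques: list = field(default_factory=list)   # e.g. ["T1059"]
    kill_chain_phases: list = field(default_factory=list)  # e.g. ["exploitation"]
    tools: list = field(default_factory=list)              # named tooling/malware
    suggested_responses: list = field(default_factory=list)

def deception_ready(record: IntelRecord) -> bool:
    """Does this record answer the questions above: ATT&CK mapping,
    kill-chain phases, and specific tooling detail?"""
    return (bool(record.attck_techniques)
            and bool(record.kill_chain_phases)
            and bool(record.tools))

record = IntelRecord(
    actor="example-actor",
    attck_techniques=["T1059"],
    kill_chain_phases=["exploitation"],
    tools=["example-tool"],
)
print(deception_ready(record))  # True: rich enough to drive a deception playbook
```

A record that answers "no" to any of these questions would fail the check and should prompt improvements to the intelligence source before it is used to drive deception.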

Step 2: Applying Intelligence to Behavioral Detection

A key part of deception response is understanding behaviors beyond simple network, user, application, or process indicators.

Machine learning techniques can play a vital role in identifying patterns associated with nefarious actions, as well as anomalous behaviors, and in selecting the actors executing those behaviors for deception-based investigation.

At a high-level the security team could apply the following decision chart to behavioral response:

Figure 1: High-Level Decision Logic for Deception

The figure above captures the following rules:

  • If a behavioral pattern is identified as malicious with high confidence and the activity represents critical impact, then mitigate immediately.
  • If a behavioral pattern is identified as suspicious with mid-level confidence, then consider employing deception tactics to gather more information.
  • If the behavior does not match any pattern and is anomalous above a statistical threshold, then consider applying deception tactics to the session(s) to gather more information.
  • If the behavior does not match any pattern and is anomalous but below the statistical threshold for deception, then consider additional session capture for deeper analysis.
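The decision logic above could be sketched as follows. The confidence bands, threshold values, and return labels are illustrative assumptions, not a prescribed implementation.

```python
def deception_decision(pattern_match, confidence, impact,
                       anomaly_score, threshold):
    """Hypothetical encoding of the four rules in Figure 1.

    pattern_match: "malicious", "suspicious", or None (no pattern matched)
    confidence:    model confidence in [0, 1] (bands below are assumptions)
    impact:        assessed impact of the activity, e.g. "critical"
    anomaly_score: statistical anomaly measure, compared against threshold
    """
    if pattern_match == "malicious" and confidence >= 0.9 and impact == "critical":
        return "mitigate"   # high-confidence, critical impact: act immediately
    if pattern_match == "suspicious" and 0.5 <= confidence < 0.9:
        return "deceive"    # mid-level confidence: use deception to learn more
    if pattern_match is None and anomaly_score >= threshold:
        return "deceive"    # anomalous above threshold: deceive the session(s)
    if pattern_match is None and anomaly_score < threshold:
        return "capture"    # below threshold: capture sessions for analysis
    return "monitor"        # default: keep watching

print(deception_decision("malicious", 0.95, "critical", 0.0, 0.8))  # mitigate
print(deception_decision(None, 0.0, "low", 0.9, 0.8))               # deceive
```

In practice each branch would dispatch into the relevant investigative, mitigation, or response playbook rather than return a label.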

There are other options for deceptive responses beyond the four mentioned above. The key message is that security teams should consider deception an important part of their investigative, mitigation, and response playbooks.

Example: Gathering Intelligence by Applying Intelligence to a Transparent Decoy

Imagine a situation where the security team has intelligence identifying certain network behaviors that are typical of an adversarial TTP. However, due to the nature of the behaviors, the false positive rate remains too high to automatically mitigate sessions matched against the behavior.

In this case, a security team may choose to selectively inspect the sessions and the source of these behaviors for further analysis and collection by redirecting them transparently to a decoy. The figure below shows a high-level SDN network in which the SDN controller applies policy to specific network sessions, redirecting them to the decoy server instead of the original server.

Figure 2: Gathering Intelligence via Redirection to a Decoy

This decoy will appear identical to the system the actor was attempting to use. An important requirement of deception is ensuring that the actor is neither able to recognize that they have been redirected nor able to identify the decoy from its behavior or setup.

This orchestration occurs under the control of the security operations team and can be terminated at any time. The more behaviors the network and the decoy server gather, the better the security operations team can determine the actor’s intent and objectives, and ultimately what information can be gathered to mitigate the threat from the actor.
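The redirection described above could be sketched as a flow-table policy of the kind an SDN controller might install. All names, addresses, and the flow-table API here are illustrative assumptions, not a real controller SDK.

```python
# Hypothetical SDN-style policy: sessions from a suspect source matching an
# intelligence-derived behavior are steered to a decoy instead of the real
# server. Addresses and the FlowTable class are assumptions for illustration.

DECOY = "10.0.9.9"        # hypothetical decoy server address
REAL_SERVER = "10.0.1.5"  # hypothetical production server address

class FlowTable:
    """Minimal stand-in for an SDN controller's flow table."""
    def __init__(self):
        self.rules = []  # list of (match_fn, new_dst) pairs

    def add_redirect(self, match_fn, new_dst):
        self.rules.append((match_fn, new_dst))

    def route(self, session):
        for match_fn, new_dst in self.rules:
            if match_fn(session):
                return new_dst        # transparently steer to the decoy
        return session["dst"]         # default: original destination

flows = FlowTable()
suspect_sources = {"203.0.113.7"}  # sources flagged by threat intelligence
flows.add_redirect(lambda s: s["src"] in suspect_sources, DECOY)

# A flagged session is steered to the decoy; normal traffic is untouched.
print(flows.route({"src": "203.0.113.7", "dst": REAL_SERVER}))   # 10.0.9.9
print(flows.route({"src": "198.51.100.2", "dst": REAL_SERVER}))  # 10.0.1.5
```

Because the redirect happens at the network layer, the actor continues to address the original server while their sessions terminate on the decoy, supporting the transparency requirement described above.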

In Summary

Deception is an important strategy that can complement an organization’s cyber defenses. When deception is integrated with a threat intelligence practice that drives organizational visibility and automated mitigation, the organization can better stay ahead of adversaries.

Aeonik™ Security Fabric provides organizations with both the visibility and the intelligence-driven mitigation that enable deceptive responses to attacks.

If you would like to learn more, please contact LookingGlass.

Contact Us