Nonstandard ports and port hopping (network security). Evasive applications are one of the key factors leading to the demise of traditional port-based firewalls. However, traditional IPS and threat-prevention products also rely heavily on port to determine which signatures or analysis to apply to the traffic. This weakness is magnified by the fact that APT traffic often originates inside an infected network and connects back out to the remote attacker. This gives the attacker full flexibility to use any port, protocol, and encryption he wants, fully subverting any port-based controls in the process.
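As a rough illustration (not drawn from any particular piece of malware), the short Python sketch below shows the port-hopping behavior described above: an outbound client simply walks a list of candidate ports until one connects, so any control that keys its inspection off the "standard" port for a protocol never examines the same traffic once it leaves over an alternate port. The destination host name is hypothetical.

```python
import socket

# Candidate ports an evasive application might try in turn: standard web
# ports first, then common alternates, until one makes it out of the network.
CANDIDATE_PORTS = [80, 443, 8080, 8443, 53, 25]

def find_open_port(host, timeout=2.0):
    """Return the first port on which an outbound TCP connection succeeds."""
    for port in CANDIDATE_PORTS:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return port
        except OSError:
            continue
    return None

# A control that inspects traffic only on the expected port for a protocol
# never sees this application once it slips out over 53 or 25 instead.
print(find_open_port("c2.example.net"))  # hypothetical destination
```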
SSL encryption. Malware creators rely heavily on various forms of encryption to hide initial infection traffic, as well as the ongoing command-and-control traffic associated with botnets. SSL is a favorite, simply because it has become the default protocol for so many widely used sites, such as Gmail and Facebook. These sites are coincidentally very fertile ground for social engineering and malware delivery. As a result of SSL encryption, many IT security teams lack the ability to see malware traffic on their networks. Other types of encryption have also become popular for hiding malware traffic. Peer-to-peer applications provide both infection and command-and-control capabilities, and often use proprietary encryption, again allowing malicious content to pass through the traditional network perimeter undetected.
Tunneling. Tunneling provides yet another tool for attackers to hide malicious traffic. Many applications and protocols support the ability to tunnel other applications and protocols within them. This lets attackers disguise their communications as allowed services or applications to get past traditional perimeter security solutions.
Proxies. Advanced malware and hackers use proxies to traverse traditional firewalls. TDL-4, the “indestructible botnet” (refer to Chapter 2) installs a proxy server on every host that it infects. This allows the bot to not only protect its own communications, but also to establish an anonymous network that anyone can use to hide his tracks while hacking or conducting other illegal activities.
Anonymizers and circumventors. Tools such as UltraSurf, Tor, and Hamachi are purpose-built to avoid network security controls. Unlike most of the other technologies discussed in this section, circumventors have almost no legitimate use in an enterprise network. These applications are updated on a monthly (and even weekly) basis to avoid detection in a perpetual cat-and-mouse game with traditional security solutions.
Encoding and obfuscation. Malware almost always encodes its transmissions in unique ways. Encoding and obfuscation not only help the malware evade detection signatures, but also hide its true purpose. The technique can be as simple as converting strings to hexadecimal, or as sophisticated as custom-built encoding algorithms.
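As a simple illustration of how little effort such encoding requires, the hypothetical Python sketch below hex-encodes a command string and shows that a naive substring signature matching the plaintext no longer fires; real malware layers far more elaborate custom schemes on the same idea. The signature string and payload are invented for the example.

```python
# A naive, signature-style check: flag traffic containing a known C2 command.
SIGNATURE = b"exfiltrate:passwords"

def naive_signature_match(payload: bytes) -> bool:
    """Return True if the payload contains the known-bad plaintext string."""
    return SIGNATURE in payload

plaintext = b"exfiltrate:passwords from host-42"
encoded = plaintext.hex().encode()       # trivial obfuscation: hex-encode it

print(naive_signature_match(plaintext))  # True  - the plaintext is caught
print(naive_signature_match(encoded))    # False - same command, now invisible
                                         # to the plaintext signature
```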
Port-based firewalls are often used as a first line of defense, providing coarse filtering of traffic and segmenting the network into different security zones. One drawback to port-based firewalls is that they use protocol and port to identify and control what gets in and out of the network. This port-centric design is ineffective against malware and evasive applications that hop from port to port until they find an open connection to the network. Such firewalls themselves have little ability to identify and control malware, and solutions that bolt anti-malware capabilities onto port-based firewalls, whether as a blade module or as a UTM (Unified Threat Management) platform, have typically suffered from poor accuracy and severe performance degradation.
IPSs provide a step in the right direction, in that they look much deeper into the traffic than a firewall does. However, IPS solutions typically don’t run a complete set of IPS signatures against all traffic. Rather, the IPS attempts to apply the appropriate signatures to specific types of traffic, based on port. This limitation means that malware or exploits on unexpected or nonstandard ports are likely to be missed. Additionally, IPS solutions lack the depth of malware detection needed to protect networks — most IPS solutions only look for a few hundred types of common malware — well short of the tens of thousands that exist.
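The limitation is easy to see in miniature. The hypothetical Python sketch below models an IPS that chooses which signature set to apply purely from the destination port; a session carrying web-borne malware over an unexpected port simply receives the wrong (or no) signature set and passes uninspected. The port numbers and signature-set names are illustrative only, not any vendor's actual rule base.

```python
# Hypothetical port-to-signature-set mapping, as a port-based IPS might use.
SIGNATURES_BY_PORT = {
    80: ["http_exploits", "web_malware"],
    25: ["smtp_exploits"],
    53: ["dns_exploits"],
}

def signatures_for_session(dst_port):
    """Pick the signature set purely by destination port."""
    return SIGNATURES_BY_PORT.get(dst_port, [])  # unknown port: nothing applied

# Web-borne malware on the standard port gets the web signatures...
print(signatures_for_session(80))    # ['http_exploits', 'web_malware']

# ...but the same traffic on a nonstandard port is matched against nothing.
print(signatures_for_session(8081))  # []
```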
Proxy solutions are another means of network traffic control, but they too look at a limited set of applications or protocols and see only part of the network traffic that needs to be monitored. By design, proxies must mimic the applications they are trying to control, so they struggle with updates to existing applications and with new applications. As a result, although proxies understand a few protocols in depth, they typically lack the breadth of protocol support needed to control the tunnels and protocols-within-protocols that hackers use to hide their true traffic. A final issue that plagues proxy solutions is throughput performance, caused by the way a proxy terminates each application session and then forwards it on to its destination.

The challenge with all of these network controls is that they cannot accurately identify applications and malware; they look at only a portion of the traffic and suffer from performance issues. Security policies must be based on the identity of users and the applications in use, not just on IP addresses, ports, and protocols. Without knowing and controlling exactly who (users) and what (applications and content) have access to the network, enterprise networks can be compromised by applications and malware that easily bypass port-based controls.
Given that advanced threats most often use the network for infection and ongoing command and control, the network is an obvious and critical policy-enforcement point. With application-enablement policies in place, IT can shift its attention to inspecting the content of allowed traffic. This inspection typically includes looking for known malware, command-and-control patterns, exploits, dangerous URLs, and dangerous or risky file types. Wherever possible, policies that focus on the content of traffic should be coordinated as part of a single unified policy, where the rules (and the results of those rules) can all be seen in context. If content policies are spread across multiple solutions, modules, or monitors, piecing together a coordinated, logical enforcement policy becomes increasingly difficult for IT security staff, as does understanding whether those policies are working once they are implemented. The goal should be written policies that capture intent the same way someone might describe it aloud, for example: "only allow designated employees to use SharePoint, inspect all SharePoint traffic for exploits and malware, disallow the transfer of file types X, Y, and Z, and look for the word confidential in traffic going to untrusted zones."

Another key component of network policy is the absolute need to retain visibility into traffic content. SSL is increasingly used to secure traffic destined for the Internet. Although this provides privacy for a particular session, if IT lacks the ability to look inside the SSL tunnel, SSL also becomes an opaque tunnel through which malware can be introduced into the network. IT must balance the need to look within SSL against both end-user privacy requirements and the overall performance requirements of the network. For this reason, it is important to establish SSL decryption policies that can be enforced selectively by application and URL category. For example, social media traffic could be decrypted and inspected for malware, while traffic to financial or healthcare sites is left encrypted.
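As a sketch of what such a selective decryption policy might look like in logic (the categories, host names, and function are hypothetical, not any vendor's configuration syntax), the Python fragment below decrypts and inspects social-media sessions while leaving financial and healthcare traffic encrypted:

```python
# Hypothetical URL-category lookup; a real device would consult its own
# URL-filtering database rather than a hard-coded table.
URL_CATEGORIES = {
    "facebook.com":    "social-media",
    "twitter.com":     "social-media",
    "mybank.example":  "financial",
    "clinic.example":  "healthcare",
}

# Categories the organization has decided not to decrypt, for privacy reasons.
NO_DECRYPT = {"financial", "healthcare"}

def should_decrypt(hostname):
    """Decide per session whether SSL should be decrypted for inspection."""
    category = URL_CATEGORIES.get(hostname, "unknown")
    return category not in NO_DECRYPT

for host in ("facebook.com", "mybank.example", "unknown-site.example"):
    action = "decrypt and inspect" if should_decrypt(host) else "pass through encrypted"
    print(f"{host}: {action}")
```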
The end user's machine is the most common target for advanced malware and is a critical point for policy enforcement. Endpoint policies must incorporate ways of ensuring that antivirus and other host-based security solutions are properly installed and up to date. Although targeted attacks are becoming more common, the majority of threats today are still known threats with known signatures; Gartner, Inc. predicts that known threats will comprise 95 percent of all threats through 2015. As such, these endpoint solutions must be kept up to date and audited regularly. Similarly, you need a method for validating that host operating systems are patched and up to date. Many malware infections begin with a remote exploit that targets a known vulnerability in the operating system or an application, so keeping these components current is a critical part of reducing the enterprise's attack surface.

As with employee policies, desktop controls are a key piece of safely enabling applications in the enterprise, but they present IT departments with significant challenges. Careful consideration should be given to the granularity of the desktop controls and their impact on employee productivity. The drastic step of locking down the desktop to keep users from installing their own applications is easier said than done and, if used alone, will be ineffective. Here's why:
✓ Remotely connected laptops, Internet downloads, USB drives, and e-mail are all means of installing applications that may or may not be allowed on the network.
✓ Completely removing administrative rights is difficult to implement and, in some cases, severely limits end-user capabilities to an unacceptable level.
✓ USB drives are now capable of running applications, so a Web 2.0 application, for example, can be accessed after network admission is granted.
Desktop controls can complement documented employee policies as a means to safely enable Web 2.0 applications.
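As an illustration of the endpoint hygiene audit described earlier in this section (current antivirus signatures and a patched operating system), the hypothetical Python sketch below flags hosts whose AV definitions or OS patches are older than a policy threshold. The inventory records, field names, and thresholds are assumptions; a real deployment would pull this data from its endpoint-management or antivirus console.

```python
from datetime import date

# Hypothetical inventory records, e.g. exported from an endpoint-management tool.
HOSTS = [
    {"name": "wks-001", "av_defs": date(2016, 9, 20), "last_patch": date(2016, 9, 1)},
    {"name": "wks-002", "av_defs": date(2016, 6, 2),  "last_patch": date(2016, 3, 15)},
]

MAX_AV_AGE_DAYS = 7      # policy: AV definitions no older than a week
MAX_PATCH_AGE_DAYS = 45  # policy: OS patched within the last 45 days

def out_of_compliance(host, today):
    """Return the list of policy violations for one host."""
    problems = []
    if (today - host["av_defs"]).days > MAX_AV_AGE_DAYS:
        problems.append("antivirus definitions out of date")
    if (today - host["last_patch"]).days > MAX_PATCH_AGE_DAYS:
        problems.append("operating system patches out of date")
    return problems

for host in HOSTS:
    issues = out_of_compliance(host, date(2016, 9, 26))
    if issues:
        print(host["name"], "->", "; ".join(issues))
```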
Author : cialfor
Updated : 9/26/2016