Network security is continuously under threat today. The threats no longer come from a single source, nor are they of a single type; they have become very fluid in nature. The top network security challenges envisaged by experts for 2016 include:
- Securing industrial control interfaces in critical industries like utilities and defense
- Security in the cloud
- Protecting the Internet of Things (IoT)
- Protecting aging internet infrastructure, wherever maintenance is forgotten or keeps getting deferred (this is more frequent than most of us would like to believe)
All of these are areas that create open vulnerabilities just waiting to be exploited.
The network perimeters that used to be sufficient for cyber security no longer hold ground. Traditional perimeters had very simple responsibilities: permit access or block it. They did not need to be 'context' or 'application' aware. Today, threats arrive from multiple sources at the same time, they can morph themselves, and a seemingly benign application can be hijacked to behave in a completely malicious manner.

This is on account of the emergence of multiple perimeters, driven by the continual adoption of new technologies like mobility, cloud and virtualization. Hosted perimeters and the so-called "secure internet gateways" are changing the nature of traditional perimeters and the boundaries as they have traditionally been defined. In mobile networks there is a lack of visibility into the packet core. Mobile internet infrastructure and mobile technologies are making distributed workforces the norm, and most corporate information and applications now travel on fragmented, extended networks. Network edges are slowly losing meaning.

The whole world is poised to migrate to the cloud, and the cloud is becoming a new network perimeter. Cloud as a perimeter presents an increased attack surface: cyber criminals can now reach cloud-hosted applications from any device and location, and cloud-hosted third-party apps and APIs are difficult to monitor and secure. The cloud is also putting pressure on traditional perimeter devices like secure web gateways, firewalls, intrusion prevention systems and data loss prevention platforms because of the increased traffic. Most of this traffic is encrypted, and that encryption allows attackers to hide their intentions from the security devices at these perimeters. Many of these devices are not equipped to decrypt protocols like SSL, whether because they are legacy systems or because of their physical location in the network.
There is a dire need for a new security framework that defines the new perimeters and dynamically decides how to guard and secure them.
The existing methods at most enterprises lack detection and response capabilities. The situation is further exacerbated by multi-vendor products that sit in their own silos and do not talk to each other. To add to the pain, far too much threat intelligence data is generated without the time to act on it.
All these problems underline the need for a whole new kind of enterprise security architecture, one that not only defines the network perimeters but also identifies newly emerging perimeters, and has the capability to monitor and analyze at big-data scale in near real time.
Such an architecture should have some essential features:
- Application visibility and control, identity-based firewalling, and web security that takes contextual awareness into account.
- The ability to define access restrictions based on the way applications are supposed to be used.
- The capability to draw on information from beyond the perimeter secured by the Intrusion Prevention System (IPS) to make access-denial decisions or adjust blocking algorithms.
- Directory integration to link traffic to user identities, making full use of three key types of information: vulnerability state, patching state and geo-location. It should, for example, be able to make blocking decisions based on where an information source is versus where it is supposed to be (a minimal sketch follows this list).
- Reputation-based threat defense.
- The ability to secure data passing between virtual servers without it ever leaving the physical host.
- Content awareness, including the ability to re-scan executable and PDF files that have already been scanned at the firewall level, and to scan outbound communications in near real time.
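To make the identity- and context-aware features above concrete, here is a minimal sketch in Python of how such an access decision might combine user identity, geo-location, patch state and expected application usage. All names, policies and data structures are illustrative assumptions, not any particular vendor's API.

```python
# A minimal sketch of an identity- and context-aware access decision.
# The policy tables and request attributes below are hypothetical.

ALLOWED_GEOS = {"alice": {"US", "CA"}, "bob": {"IN"}}   # directory-derived geo policy
UNPATCHED_BLOCKLIST = {"CVE-2015-1635"}                 # known exploitable, unpatched CVEs

def allow_request(user, application, geo, host_cves, expected_app_usage):
    """Return (decision, reason) for a single access request."""
    # 1. Identity-based firewalling: is this user allowed from this location?
    if geo not in ALLOWED_GEOS.get(user, set()):
        return False, f"{user} connecting from unexpected geo {geo}"
    # 2. Patch-state awareness: block hosts carrying known exploitable CVEs.
    if host_cves & UNPATCHED_BLOCKLIST:
        return False, f"source host has unpatched CVEs: {host_cves & UNPATCHED_BLOCKLIST}"
    # 3. Application awareness: is the application being used the way it should be?
    if application not in expected_app_usage.get(user, set()):
        return False, f"{application} is not an expected application for {user}"
    return True, "request matches identity, patch-state and application policy"

# Usage: the same user is allowed from an approved country but blocked from an
# unexpected one, without any change to the underlying firewall rules.
print(allow_request("alice", "erp-web", "US", set(), {"alice": {"erp-web"}}))
print(allow_request("alice", "erp-web", "RU", set(), {"alice": {"erp-web"}}))
```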
Described at the topmost level, proactivity is the name of the game in adaptive security. Platforms should be designed with future use in mind, with advanced analytics capabilities and embedded machine learning that can spot suspicious or 'out of the ordinary' patterns (a minimal sketch follows below). They should also build in flexibility that can accommodate the implications of extended use; for example, they should be able to monitor and counter threats when coverage is extended to more endpoints and back-end services, even ones outside the direct control of IT.
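As one illustration of the kind of embedded machine learning described above, the following sketch uses scikit-learn's IsolationForest to flag 'out of the ordinary' minutes of traffic. The choice of flow features (bytes, packets, distinct destination ports per minute) and all numbers are assumptions made for the example.

```python
# A minimal sketch of ML-based anomaly spotting over per-minute flow features.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Baseline traffic: per-minute [bytes, packets, distinct destination ports].
normal = rng.normal(loc=[50_000, 400, 12], scale=[5_000, 40, 3], size=(500, 3))

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New observations: one ordinary minute and one that looks like a port scan.
new = np.array([[52_000, 410, 11],        # ordinary
                [48_000, 3_900, 900]])    # suspicious: packet and port explosion
print(model.predict(new))                 # 1 = normal, -1 = flagged as anomalous
```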
A lot of sharing and correlated functioning is also needed in an adaptive security network, so that it can look at real-time data from multiple sources, spot anomalous behavior and communicate with the firewall. Industry experts have also suggested building intelligence into the architecture so that it can modify the firewall based on its perception of a threat or of intrusion capabilities. It should be able to look into the dark areas of the network and be very good at weeding out false positives. Threat intelligence is increasingly being looked to as a reliable backbone for adaptive security. This means that companies are scouring for a vector or knowledge item, based on evidence, context, mechanism, indicator, implication or actionable advice, about a threat to assets that is about to emerge. This vector, or knowledge, helps enterprises respond to and remediate threats before vulnerabilities are exploited and before a security incident occurs.
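A minimal sketch of what such a "knowledge item" might look like in code, mirroring the components named above (evidence, context, mechanism, indicator, implication, actionable advice). The field names and the trivial matching rule are illustrative assumptions.

```python
# A minimal sketch of a threat-intelligence knowledge item and a matching check.
from dataclasses import dataclass

@dataclass
class ThreatIntel:
    evidence: str            # e.g. "seen in three incident reports this week"
    context: str             # e.g. "targets internet-facing web servers"
    mechanism: str           # e.g. "SQL injection against login form"
    indicator: str           # observable to match on, e.g. a source IP or domain
    implication: str         # e.g. "credential theft if successful"
    advice: str              # e.g. "block at the web application firewall"

def matches(intel: ThreatIntel, observed_indicators: set) -> bool:
    """Does this intelligence item apply to something we are actually seeing?"""
    return intel.indicator in observed_indicators

intel = ThreatIntel("three incident reports", "internet-facing web servers",
                    "SQL injection", "203.0.113.7", "credential theft",
                    "block at the WAF")
print(matches(intel, {"203.0.113.7", "198.51.100.9"}))   # True -> act before exploitation
```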
Security engines like IDS/IPS correlation engines will be geared to look at multiple vulnerabilities: from knowing which systems and applications can have vulnerabilities, to understanding the surrounding network state and subnet-level programs. Network Behavior Anomaly Detection techniques can be used to establish a baseline state for all the elements in a network; the baseline is then continuously monitored for any deviation from usual behavior. This behavior monitoring hinges on three main components. First, analyzing the traffic attributes and flow patterns in the network flow data. Second, looking at traditional network performance data for red flags; downtime in VoIP services or an unusual spike in network traffic can indicate a security threat such as a Denial of Service (DoS) attack or malware. Third, using passive traffic analysis tools in a different way, to analyze application-layer traffic; this gives intrusion prevention systems the necessary context about the relative states, positions and communication of the systems and applications on the network.

The intelligence from the security engines should be correlated with feedback from internal systems that track actual user identities. This helps uncover policy violations from inside the network and flag the users, workstations or vulnerabilities that might be responsible for an internal attack.

In any security countermeasure, it is essential to weed out false alarms in order to react to real threats at the earliest opportunity. With enough data on vulnerabilities, this can be done very quickly by analyzing the attack surfaces against the vulnerabilities for which an alert has been raised. To take an ultra-simplified scenario: suppose alerts have been raised for a number of DMZ web servers. Correlating the vulnerability characteristics with the surface characteristics (the server characteristics, in this case) can immediately show that the servers have already been patched, or that they belong to a class that does not have that vulnerability. Once false positives are eliminated, it is much easier to focus quickly on the real threats. Simple steps such as grading systems and potential attack surfaces by criticality (5 being most critical and 1 being least, or vice versa) also help in prioritizing the attack areas and taking quick remedial action. Grading goes a long way in eliminating false alerts, and in distributed attacks it makes clear to the response team what to protect first. The more attacks that are detected and documented, the easier it is to analyze incidents later. This documentation also ensures that most known loopholes get closed and threat characteristics get recognized, fine-tuning the security apparatus and making it more robust. Success in resisting attacks will, in turn, feed into the management response of enforcing these practices.
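To make the false-positive triage and criticality grading above concrete, the following sketch correlates hypothetical vulnerability alerts with each server's patch state, discards alerts that cannot apply, and ranks the remainder by an assumed 1-to-5 criticality grade. Every name and data value here is invented for illustration.

```python
# A minimal sketch of alert triage: drop alerts for already-patched servers,
# then prioritize the rest by asset criticality (5 = most critical).

# Inventory of DMZ web servers with their patched CVEs and criticality grade.
SERVERS = {
    "dmz-web-01": {"patched": {"CVE-2016-0001"}, "criticality": 5},
    "dmz-web-02": {"patched": set(),             "criticality": 2},
}

alerts = [
    {"server": "dmz-web-01", "cve": "CVE-2016-0001"},   # already patched -> false positive
    {"server": "dmz-web-02", "cve": "CVE-2016-0001"},   # unpatched -> real threat
]

def triage(alerts, servers):
    real = []
    for alert in alerts:
        server = servers.get(alert["server"])
        if server is None or alert["cve"] in server["patched"]:
            continue                       # asset unknown or already patched: discard
        real.append((server["criticality"], alert))
    # Most critical attack surfaces first, so the response team knows what to protect.
    return sorted(real, key=lambda item: item[0], reverse=True)

for criticality, alert in triage(alerts, SERVERS):
    print(f"priority {criticality}: {alert['server']} vulnerable to {alert['cve']}")
```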
Adaptive security is still evolving, but it shows promise as a potent defense against the increasingly sophisticated attacks that enterprises face today. It is, however, still quite far from being a silver bullet or a vaccine against growing cyber threats. Only time will tell how effective it is going to be, but one thing is certain: it will depend a lot on how the standards governing cloud, mobility, social and ubiquitous connectivity (read: the Internet of Things) evolve, because any chain is only as strong as its weakest link. Hackers today are not looking to destroy the security layers; they want to steal content and data, and just one vulnerability, one unknown, un-plugged threat vector, and we are as good as back in the 1970s. Hopefully, in due course we will reach a stage where our systems are continuously monitored, assessed and improved to take care of incipient security incidents.