There are many views as to what constitutes Cyber Security, but most are based on three widely accepted premises:
Provide protection using traditional, provable methods:
- Education – continuous education of staff in the best practices of basic security
- Technology – Up-to-date Antivirus and patching regimes, constantly-updated firewall rules
- Physical – Best practice for securing devices both on- and off-premise
- Use of 2FA and VPN technologies
- Use of cryptographic techniques to secure data at rest and data in transit
- Implementation of a Cyber Essentials variant
Use a variety of ‘exception’ tools to detect cyber attacks or anomalies, including:
- Intrusion Detection / Prevention
- Network monitoring and management
- Breach Detection
- APT (Advanced Persistent Threat) detection
- Detection sensors across the attack chain
- Beaconing to external Command and Control Centres
- Exfiltration of non-approved data
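Beacon detection, in particular, often comes down to spotting outbound connections that recur at suspiciously regular intervals. The sketch below illustrates that one idea only; the event format, thresholds and addresses are illustrative assumptions, not any particular product's method.

```python
# Minimal sketch: flag possible beaconing by finding destinations contacted
# at near-constant intervals (low variation in the gaps between connections).
# The log format, thresholds and IP addresses are illustrative assumptions.
from statistics import mean, pstdev
from collections import defaultdict

def find_beacons(events, min_hits=4, max_jitter=0.1):
    """events: iterable of (timestamp_seconds, destination) pairs.
    Returns destinations whose inter-connection gaps are nearly constant
    (coefficient of variation below max_jitter)."""
    by_dest = defaultdict(list)
    for ts, dest in events:
        by_dest[dest].append(ts)
    suspects = []
    for dest, times in by_dest.items():
        if len(times) < min_hits:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        avg = mean(gaps)
        if avg > 0 and pstdev(gaps) / avg < max_jitter:
            suspects.append(dest)
    return suspects

# Hypothetical log extract: one host phones home every 60 seconds,
# another makes irregular, ordinary-looking connections.
log = [(t, "203.0.113.9") for t in (0, 60, 120, 180, 240)] + \
      [(t, "198.51.100.7") for t in (5, 9, 140, 400, 410)]
print(find_beacons(log))  # -> ['203.0.113.9']
```

Real tools weigh in far more signals (payload sizes, jitter tolerance, DNS patterns), but the underlying statistical idea is much the same.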
When a cyber threat is detected, use response and remediation tools and methods to contain and address the issues, including:
- Network forensic analysis
- Containment / Damage limitation
- System recovery, including roll-back / restore if necessary
- Incident analysis resources e.g.
- Port Lists
- Packet Sniffers and Protocol Analysers
- Appropriate documentation review (e.g. SIEM, Malware Protection, Log Analysis)
- Critical Asset review
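One common incident-analysis step is sweeping logs for known-bad indicators. The sketch below shows the shape of that check; the `src -> dst` line format and the indicator list are illustrative assumptions, not a real threat feed.

```python
# Minimal sketch: match firewall log entries against a list of known-bad
# indicators (destination IPs here). The 'src -> dst' line format and the
# indicator set are illustrative assumptions, not a real feed.
KNOWN_BAD = {"192.0.2.44", "203.0.113.9"}

def flag_indicators(log_lines, bad_ips=KNOWN_BAD):
    """Return (line_number, line) pairs whose destination IP is a known
    indicator of compromise."""
    hits = []
    for n, line in enumerate(log_lines, start=1):
        try:
            _src, dst = [p.strip() for p in line.split("->")]
        except ValueError:
            continue  # skip lines that don't match the assumed format
        if dst in bad_ips:
            hits.append((n, line))
    return hits

sample = [
    "10.0.0.5 -> 198.51.100.7",
    "10.0.0.8 -> 203.0.113.9",
    "garbage line",
]
print(flag_indicators(sample))  # -> [(2, '10.0.0.8 -> 203.0.113.9')]
```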
The lists above are by no means comprehensive. Unfortunately, there also appears to be no common understanding of what a cyber security incident really is, with organisations holding a wide variety of interpretations.
With no agreed definition of what constitutes a cyber attack, and with many organisations adopting different views in practice, it can be very difficult for organisations to plan effectively and to determine the type of cyber security incident response they require or the level of support they need.
However, there is one element missing from this – organisations are only taking measures to protect what they think is their computing estate.
The fluid network edge
Previous generations of computing networks had closely defined perimeters: mainframe and minicomputer networks were limited to internal connectivity, tied to known premises, and hardened by tightly controlled communications protocols such as IBM 3270 or ICL C0x.
With the growth of TCP/IP as the communication method of choice, both network connectivity and the number of potentially connected devices have grown exponentially.
Today, the Internet, BYOD, guest networks, e-commerce, WiFi, WiMax and other connection types mean that the network really has no defined perimeter, and network control and management must itself become fluid to address this.
New threats are also arising – zombies, bots, beacons and malware that open ports – in addition to the threats highlighted above. One quote from the UK Government paper “Cyber Security Incident Response Guide” is especially pertinent:
“Nearly anyone who can use a web browser can create and control a botnet.”
So, the threats are real, and real steps are being implemented to address them. So far, so good, but there is one element that seems to have been overlooked.
You can’t protect against the things you don’t know about
You may have your own thoughts about Erwin Rommel, but I have always found his most famous quote invaluable when undertaking any enterprise: “Time spent in reconnaissance is seldom wasted”. I’m sure most people in charge of networks and Cyber Security do spend time looking at their estate and planning for what many believe to be the inevitable. But this is my point: they’re only looking at what they know about.
If we accept that the network edge is fluid and that there is no such thing as a fixed estate, that at least some web browser users are creating and controlling botnets (or worse), and that there is a proliferation of (non-enterprise) devices within our domains, then by definition there are things we don’t know about.
What we need is a way to be made aware of the things in our networks that we don’t know about, and then take the appropriate actions. These could include:
- Remediation by the tools we already have at our disposal
- Use of external resources
- Containment / Deletion
- Revision of rules, etc.
There are many other routes available, but they all rely on one thing: actually knowing what it is we’re dealing with. Network Situational Awareness is all about the structure and content of your network as it really is – which may not coincide with what you think it is.
So what tools are available to provide this essential information, and how do they do it?
There are many network management tools available that purport to offer insight into the estate: mapping dynamic change, highlighting outages, offering end-point mapping – the list is extensive. However, most share one singular deficiency: they rely on some sort of software agent on the end-point. So again, we’re only protecting what we know about.
So what can we do?
What’s needed is a solution that doesn’t rely on software agents. The Internet Mapping Project, started by William Cheswick and Hal Burch at Bell Labs in 1997, collected and preserved traceroute-style paths to some hundreds of thousands of networks and produced visualisations of the resulting Internet maps.
Fortunately, there are now software tools available that map networks without relying on agents, including those from Lumeta (the spin-out from Bell Labs), Cytellix and Device42. This list is by no means complete, but these solutions offer real-time monitoring of your network, showing it as it really is – which is not necessarily what you think it is.
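The agentless principle behind such tools can be illustrated very simply: instead of asking an installed agent what a host is, you probe the network directly and see what answers. The sketch below attempts TCP connections to a few common ports across a subnet; the subnet, port list and timeout are illustrative assumptions, and this is nowhere near what commercial mappers actually do.

```python
# Minimal sketch of agentless discovery: probe addresses directly from the
# network rather than relying on software installed on the end-point.
# The subnet, port list and timeout below are illustrative assumptions.
import socket
import ipaddress

def tcp_port_open(host, port, timeout=0.5):
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def discover(subnet, ports=(22, 80, 443)):
    """Return {address: [open ports]} for hosts answering on any probed port."""
    found = {}
    for addr in ipaddress.ip_network(subnet).hosts():
        open_ports = [p for p in ports if tcp_port_open(str(addr), p)]
        if open_ports:
            found[str(addr)] = open_ports
    return found

# Example (results depend entirely on what is listening on your network):
# discover("192.0.2.0/30")
```

A real mapper would combine many such techniques – ARP, ICMP, SNMP, passive traffic observation – precisely because a single probe type misses devices that don’t answer it.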
Some, such as the suite offered by Device42, go further, offering application mapping and application dependency tracking – invaluable when considering application roll-outs or cross-system upgrades.
Meanwhile, the Cytellix suite also offers cyber analytics capabilities, highlighting C2 (command and control) vulnerabilities, whether known Dark Web (Tor) exit nodes are reachable from inside the network edge, and potential RDP and FTP usage violations.
So, you should really take steps to know what threats you’re potentially facing.
The 80/20 rule
It seems that in business, the so-called “80/20 Rule” holds true in many facets: 80% of business comes from 20% of our customers, 80% of customers pay on time, 20% don’t – we can all think of at least one scenario where this rule applies.
This applies to networks too. It’s estimated that most host vulnerability assessment (VA) scanning tools are oblivious to dynamic network changes, leaving roughly a 20% gap in network situational awareness – the 80/20 Rule again.
It seems that too many organisations are unaware of the unseen threats within their environment or, worse, choose to ignore them. So, in the interests of (WW2) balance, I’ll close with a quote from Winston Churchill:
“Men occasionally stumble over the truth, but most pick themselves up and hurry off as if nothing had happened.”
Don’t be one of the majority.