Despite our best efforts, cyber security is often a reactive process. We take steps to prevent breaches, but mostly we wait for something to happen before jumping into action. If we detect and react quickly enough, we avert the problem or at least minimise its impact. The “bad guys” also know their time is limited and behave accordingly. Yet we don’t always prioritise tightening those time frames, which means we aren’t doing everything we can on the prevention side.
As in most professions, there is a wide gap between the abilities of the hackers at the top of the field and the majority, who are far less capable. The most highly trained and well-resourced hackers are often state-sponsored*, and it is doubtful that many of you can completely protect yourselves from them, but it is also unlikely that you will be their target. Most breaches originate with mid-level and lower-level hackers, so it makes sense to do what you can to thwart them.
While the top-tier perpetrators research and discover their own methods and software weaknesses, the largest share of incidents exploit known flaws in systems. Vendors discover holes in their products and release updates to close them. The moment an update is publicly released, hackers analyse it to reverse engineer the original bug; in effect, the vendors tell the hackers how to exploit the problem. From there it is a race between the hackers trying to use the security hole and the users who need to apply the patch.
The best hackers rapidly develop exploits for newly disclosed flaws and attempt to attack as many systems as possible, as quickly as possible. After a short time (usually weeks or months), the attack code is monetised and distributed through the broader hacking world. Within a few months the method is bundled into generally available hacking tools, which may themselves include hundreds of infiltration methods.
The chart on the right shows the typical life cycle of an individual hack. A few breaches may be accomplished before a flaw becomes publicly disclosed, but shortly after it becomes public knowledge, a rash of attacks takes place. Over time, as companies update their systems, fewer attempts succeed, although vulnerabilities seem to linger for long periods, suggesting that patching is often the chink in our security armour.
There are often legitimate reasons why updates are not applied immediately upon release, but at the root of those delays there is often a fundamental problem in the IT infrastructure: perhaps old hardware that can’t support the change, or a lack of resources to test it. These faults are almost always outside the control of the security team, yet the security team is held responsible when the breach takes place. It isn’t hard to imagine a CISO losing their job when it emerges that a breach exploited a long-known defect that was never fixed.
The obvious lesson is to patch in a timely manner, but the more important one is that information security extends beyond the security team: it needs to become part of the entire organisation. When an ROI is calculated for replacing hardware, a value should be placed on the additional security the replacement may bring. Having adequate staff to test new software should be seen as contributing to the organisation’s overall security. Technology evaluations should include a security review. Updated software is not just about new features.
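As a sketch of the ROI point, the short calculation below folds an expected-breach-cost reduction into a simple hardware-replacement decision. Every figure here is a hypothetical assumption for illustration only, not data from any real organisation:

```python
# Illustrative only: all figures below are hypothetical assumptions.
hardware_cost = 100_000    # assumed cost of replacing the old hardware
feature_benefit = 70_000   # assumed annual benefit from new features alone

# Security term: old hardware that blocks patching raises breach risk.
breach_cost = 500_000      # assumed cost of a single breach
p_breach_old = 0.10        # assumed annual breach probability, unpatchable system
p_breach_new = 0.02        # assumed annual breach probability, patched system
security_benefit = breach_cost * (p_breach_old - p_breach_new)

# ROI that ignores security undervalues the replacement...
roi_without_security = (feature_benefit - hardware_cost) / hardware_cost
# ...while including the expected-loss reduction can flip the decision.
roi_with_security = (feature_benefit + security_benefit - hardware_cost) / hardware_cost

print(f"ROI without security term: {roi_without_security:.0%}")  # -30%
print(f"ROI with security term:    {roi_with_security:.0%}")     # 10%
```

With these assumed numbers, the replacement looks like a loss until the reduced expected breach cost is counted, which is exactly the kind of valuation the paragraph above argues for.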
While cyber security is often seen as an entity on its own, it is more commonly a reflection of the effort and investment made throughout an organisation.
* This is not to disparage the very talented people in our industry. There are a great many skilled cyber professionals working for corporations who use their powers for good instead of evil, but they are not the individuals we are worried about here.