Neil Hare-Brown

Controls Degradation

After a 40-year career in cyber risk management, I have had the privilege of working with a range of professionals specialising in law enforcement, governance, cybersecurity, regulatory compliance, risk analysis, cyber insurance and cyber incident response. In the 2000s, the Blackthorn GRC system I designed centred on risk models in a constant state of flux, driven by ‘Activities’ that were either proactive (e.g. assessments, audits, tests) or reactive (e.g. incidents, cases) in relation to risk events.

These risk models comprised a collection of ‘objects’, each type with specific properties, arranged logically in a tree. The apex objects were Assets; beneath them sat Threat objects that act upon the assets (generally drawn from a library), alongside objects representing the risk management options: Reduce/Mitigate, Transfer, Avoid and Accept.

The Exploit objects identified in the Threat analysis automatically generated Vulnerability objects under the Reduce/Mitigate object, and under each of these lay Control objects, whose properties reduced each vulnerability.


    |-Asset
    |    |-Threat Summary
    |    |      |-Threat A
    |    |      |      |-Threat Actor
    |    |      |-Exploit Summary
    |    |      |      |-Exploit A
    |    |      |      |-Exploit B
    |    |      |-Impact Summary
    |    |      |      |-Impact A
    |    |      |      |-Impact B
    |    |-Risk Management
    |    |      |-Reduce/Mitigate
    |    |      |      |-Vulnerability A
    |    |      |      |      |-Control A
    |    |      |      |      |-Control B
    |    |      |      |-Vulnerability B
    |    |      |      |      |-Control A
    |    |      |      |      |-Control D
    |    |      |-Transfer
    |    |      |-Avoid
    |    |      |-Accept
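The structure above can be sketched as plain data classes. This is only an illustrative sketch, not the actual Blackthorn schema; all class and field names here are my own inventions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    effectiveness: float = 1.0  # 0.0-1.0; how far it reduces the vulnerability

@dataclass
class Vulnerability:
    name: str
    controls: list[Control] = field(default_factory=list)

@dataclass
class Exploit:
    name: str

@dataclass
class Threat:
    name: str
    exploits: list[Exploit] = field(default_factory=list)

@dataclass
class Asset:
    name: str
    threats: list[Threat] = field(default_factory=list)
    # The Reduce/Mitigate branch of Risk Management holds the vulnerabilities.
    reduce_mitigate: list[Vulnerability] = field(default_factory=list)

def generate_vulnerabilities(asset: Asset) -> None:
    """Each Exploit identified in the Threat analysis automatically
    generates a matching Vulnerability under Reduce/Mitigate."""
    for threat in asset.threats:
        for exploit in threat.exploits:
            asset.reduce_mitigate.append(
                Vulnerability(f"Vulnerability for {exploit.name}"))
```

The point of the automatic generation step is that the model can never contain an identified exploit without a corresponding place to hang mitigating controls.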

It was in designing these models that I got to thinking about control degradation. The concept began to take shape after I read both Charles Darwin’s ‘On the Origin of Species’ and Richard Dawkins’ ‘The Selfish Gene’, two books that I consider works of genius and which changed my life.

In Origin, Darwin explains the constant evolutionary battle between the development of predatory characteristics and those developed to evade or defend against predators in the struggle for survival. 

Dawkins observed that this constant evolutionary development can be viewed (or analysed) from different perspectives: predator or prey. In his excellent TV documentary ‘The Genius of Charles Darwin’, he states: “The environment in the form of lions is getting systematically worse from the point of view of a zebra, and from the point of view of a lion, zebras are getting systematically worse; they are getting better at running away.” The lion is becoming more deadly, its claws sharper, its eyesight keener. The zebra is becoming more aware of danger and faster to evade it.

This arms race, seen in life forms through a macro lens, also applies to information; in fact, it is information that life is driven to protect. The ultimate information security!

Evolution is, as Darwin realised, the process of survival and reproduction. Dawkins explains what Darwin could not have known at the time: that the ultimate driver of this process is the replication of DNA down through the generations. DNA is information. Life is ultimately the secure vessel that, through replication, transmits this information through time. Life, therefore, exists to provide information security.

Both Darwin and Dawkins explain that predators tend to identify either the young or old as targets because the risk of attacking prey in its prime is greater. They infer that the young are targeted because their offensive and defensive controls have not fully developed and can be more easily overcome. The old fall prey because their controls have degraded and can no longer protect them from a sustained or surprise attack. 

The process of evolution by natural selection also plays out in human society, in endlessly varied detail. Whilst the complexity varies, the fundamental theme remains: online fraudsters attack elderly victims; scammers seek out the young in sextortion rackets.

Cybercriminals look for target organisations that mimic the young or infirm: those that do not invest in security controls or lack the skills to maintain them, that operate legacy or unpatched systems, that have weak access control, and whose untrained staff can be easily deceived. Cybercriminals seek to limit their own risk by operating remotely and targeting victims less likely to detect attacks or raise an alert.

This got me thinking about control development and degradation. All life forms, including Homo sapiens, whose lifecycles are ruled by natural selection, are constantly developing ‘exploits’ to enable predation, and ‘controls’ that defend against predation.

Both exploits and controls give each life form a better chance of survival and ultimately, reproduction. This happens both across generations and within each life span where both offensive exploits and defensive controls develop from birth to prime maturity and then degrade to death.

All life both predates and is preyed upon. Even the creatures we consider apex predators are still prey to viruses, bacteria and, most often, to themselves. Humans are inextricably part of this; as apex predators we have significant exploits that few other life forms can combat. We predate on each other in many different ways, mainly to kill, victimise or manipulate prey so as to transfer wealth and obtain power. Human predation is particularly developed and often occurs in subtle ways that appear as non-violent acts, including the complexity of cyberattacks.

From here on, I will refer to both exploits and controls simply as ‘controls’, because essentially the same dynamics apply to both.

Information security in the world of human-made technology is about protecting information, as data, from predators (although to a large extent we are applying it to the information in our brains and, ultimately, to the preservation of our own DNA).

Our laws are supposed to be an altruistic mechanism to protect legitimate good actors, such as the general public and organisations (including staff, stakeholders, customers, etc.), and to prevent them from becoming the prey of illegitimate bad (threat) actors such as cybercriminals and hostile nation-states.

Cybercriminals are essentially predators and we are their prey. 

In cybersecurity, we protect ourselves from predators, i.e. threat actors, with security controls. These security controls are generally considered effective once implemented. This is not the case. Even in our human-created technology bubble, controls are still governed by the laws of nature.

All security controls both improve and degrade at different rates. Control effectiveness is a measure of their ability to prevent the use of exploits to obtain unauthorised access to information. As in nature, we (should) arrange controls in layers so that if some are overcome or become ineffective, others will still protect our data and ultimately the continuity of our business.
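As a rough illustration of why layering matters, the residual chance that an exploit penetrates every layer can be sketched as below. The numbers and the assumption that each layer blocks independently are mine, purely for illustration:

```python
def residual_exposure(effectivenesses):
    """Probability an attack penetrates every layer, assuming each
    control blocks independently with the given effectiveness (0.0-1.0)."""
    p = 1.0
    for e in effectivenesses:
        p *= (1.0 - e)
    return p

# Three layered controls, each 80% effective: only 0.8% of attacks
# get through all of them. If the middle layer degrades to 50%
# effectiveness, residual exposure rises to 2%.
strong = residual_exposure([0.8, 0.8, 0.8])   # ~0.008
degraded = residual_exposure([0.8, 0.5, 0.8])  # ~0.02
```

The sketch shows the layering argument in the text: even when one control is overcome or degrades, the remaining layers keep residual exposure far below that of any single control.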

In the evolutionary arms race, predator and prey capabilities are constantly developing, meaning that control effectiveness degrades as predator (threat actor) capability to overcome defences increases.

Control effectiveness is generally thought of in a simple way: we implement a control and assume it will remain effective; we audit to discover and report a lack of controls. The reality is much more nuanced, and this simplistic view can create a false sense of security because, in actual fact, controls degrade.

Let’s look at a class of controls that generally has a steep degradation rate: people controls. We deliver cyber threat awareness training to staff, and for the following hours, days, ideally weeks, those staff recall and follow good practice, and our information and money remain safe and secure. Before long, however, memories begin to fade, old habits return, and vulnerability to attack and fraud returns with them. In this example, threat actor capability need not have improved (although in reality new deception schemes are always being developed), so we can counter the steep rate of degradation by refreshing our security awareness controls on a more regular basis.
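The refresh logic above can be made concrete with a small sketch. The exponential half-life decay here is a simplifying assumption of mine for illustration, not an empirical model of memory:

```python
import math

def effectiveness(e0, half_life_days, days_since_refresh):
    """Effectiveness of a people control that decays exponentially
    after training, halving every half_life_days (an assumed model)."""
    return e0 * 0.5 ** (days_since_refresh / half_life_days)

def refresh_interval(e0, half_life_days, threshold):
    """Days until effectiveness falls to the threshold, i.e. the latest
    point at which training should be refreshed."""
    return half_life_days * math.log(e0 / threshold, 2)

# Training starts at 90% effectiveness and halves every 90 days;
# to keep effectiveness above 45%, refresh at least every 90 days.
interval = refresh_interval(0.9, 90, 0.45)
```

The practical takeaway is that a refresh cadence should be derived from the control's degradation rate and an effectiveness threshold, rather than from a fixed annual calendar.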

Technical controls may have slower degradation rates. However, control complexity can also degrade overall security. An example might be an access control rule which, in isolation, achieves the desired level of segregation and/or blocking; one might expect a firewall rule to have a generally low rate of degradation. Then, over time, additional access control rules are added which degrade the effectiveness of the original control. A firewall turns into a router; email file attachments subvert the segregation of data in network shares.

Examples where threat actor capabilities overcome defences include the gradually increasing ability of attackers to bypass MFA (with tools such as Evilginx2), which degrades the MFA control over time. Conversely, the work by defenders on security keys reduces that degradation. Another example is encryption. The effectiveness of encryption in keeping data confidential is constantly degraded by new decryption techniques developed by predatory adversaries. The advent of quantum computing raises questions which may cause encryption controls either to degrade rapidly or to become extremely resilient.

As an apprentice electrician in my teens, I learned about planned maintenance. Planned maintenance is a technique used to assure the continuity of electro-mechanical operations. It involves first understanding the degradation rates of components and then systematically replacing those components before they fail. I used to be tasked with visiting every switch panel to proactively check and replace components. 

Subsequently, as an information systems auditor, I noticed that whilst audit reviews are arranged on a schedule, usually focused on criticality, this is not the same as planned maintenance.

Which approach do you think is best suited to cyber risk management?

I shared my thoughts on control degradation with my friend, the brilliant Jack Jones, who at the time was working on the early stages of his Factor Analysis of Information Risk (FAIR) model, of which I came to be a huge supporter, and we discussed the concept.

In developing FAIR, Jack has worked on a range of factor analysis activities in his groundbreaking work to help organisations objectively measure cyber risk and manage it much more effectively. 

Jack and I are certain that controls, as critical factors in risk management, are an area where significant improvements can be made and both specific and overall risk reduced. Controls analysis is key because defenders can more easily influence the factors it comprises: it is generally easier to improve your own defences than to improve those of others, and both are easier than degrading the capabilities of an adversary, especially in cybersecurity, where threat actors can operate from jurisdictionally remote locations.

Determining control degradation over time, selecting and arranging controls with degradation rates in mind, and using planned maintenance techniques to review and refresh controls before they degrade below threshold are all ways in which resilience to cyberattacks can be enhanced and maintained.
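A planned maintenance approach to controls might look like ordering them by time-to-threshold, as sketched below. The linear decay rates and figures are illustrative assumptions only; in practice each rate would have to be estimated per control:

```python
def maintenance_schedule(controls, threshold):
    """Order controls by how soon their effectiveness falls below the
    threshold, assuming a constant (linear) decay per day.
    controls: list of (name, current_effectiveness, decay_per_day)."""
    due = []
    for name, eff, decay in controls:
        days_left = max(0.0, (eff - threshold) / decay)
        due.append((days_left, name))
    return sorted(due)  # soonest-due control first

# Awareness training degrades fast; a firewall rule degrades slowly.
schedule = maintenance_schedule(
    [("awareness training", 0.9, 0.01), ("firewall rule", 0.95, 0.001)],
    threshold=0.7)
```

Like the electrician's switch-panel rounds, the schedule tells you which control to refresh next, before it fails, rather than auditing for absence after the fact.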

We don’t need to mimic nature; we are part of it. The complexity we introduce in our societies is just our translation of predator and prey, altruism and self, good and bad, legal and illegal, legitimate business and cybercriminal operation.

Control degradation is just part of this wonderful complexity and to survive and flourish we must embrace it. Give some thought to the controls you manage and maintain. Use the concept of control degradation to improve your business resilience.

