I like to live by my simple adage in cybersecurity: "Machines Don't Do Bad Things, People Do." When you look at the potential vectors of cyber, physical, and personnel threats, the vulnerabilities, the mistakes, and the attacks can all be traced back to a person. Using this adage to build a cyber defense strategy provides a new kind of framework for measuring and reducing threats. The challenge: even though you may see a machine going awry, it is really, really hard to find the "bad guy" before the vulnerability is exploited or the attack is in play. So, in an effort to come at this problem a new way, let's examine "Brent's Inverted Corollary of Cybersecurity" (breaking news): "Machines Don't Do Good Things, People Do."
I know, there are some folks out there saying, "...but a hurricane taking out my data center" or "...but AWS S3 buckets failing" are not caused by people. But there was a person who didn't design, prepare, or execute the failover strategy. Similarly, a person clicking on a malware link, or giving their badge to a visitor to "hit the head," is not wanton mischief - just an error in judgement and/or training. It is still a person-caused event; we can't be perfect, but we do need a broad understanding of the vulnerabilities and risks. How do we build our physical and cyber security programs so no bad things happen? ...make sure only good things are happening. First, let's go back to the bad things that can happen. I super-classify those bad things into the following basic clusters so we can assure that the people we think are good are not otherwise encumbered:
- Evil-doer: External individual or group, consciously going after you or your company to exploit a vulnerability.
- Havoc-wreaker: External individual or group, going after any person or company, without specific targeting, to exploit a vulnerability.
- Malicious-insider: Internal (i.e. trusted) individual or group, consciously creating a vulnerability in you or your company such that it can be exploited by Evil-doers.
- Evil-insider: Internal (i.e. trusted) individual or group, consciously exploiting a vulnerability, and perhaps also creating the underlying vulnerability such that it can be exploited by Evil-doers.
- Careless-insider: Internal (i.e. trusted) individual or group, exposing a vulnerability through lack of compliance, poor training, or an error in judgement.
- Careless-outsider: An individual or group (i.e. a company) that provides you or your company a system, system component, or service that introduces a vulnerability.
- Unwitting-coconspirator: Internal (i.e. trusted) or external individual or group who passes along something (a file, a piece of hardware), or performs a service (janitorial, shredding), in the natural course of business (part of the supply chain); they would not normally have any visibility into, or responsibility for, badness or goodness, but they are in the flow, obfuscating either.
By extension, insiders who go awry can be generally classified as one of the following:
- Ideologically-compromised: Set out with, or converted to, the intent to do evil, but appear trusted.
- Duress-compromised: Set out as trusted and then forced to conduct untrusted action because of external pressure or exploitation of a personal vulnerability.
- Stress-compromised: Set out as trusted and then conducts untrusted action because of self-imposed stress (financial, emotional, psychological), whether willful or not.
- Unwittingly-compromised: Set out as trusted and then conducts untrusted action because they are "duped" into it.
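The two taxonomies above can be sketched as a small data model. This is purely illustrative - the class names and the `is_insider` helper are my own shorthand, not a standard vocabulary:

```python
# A minimal sketch of the actor super-classes and insider compromise types
# described above, modeled as Python enums. Names are illustrative only.
from enum import Enum, auto

class ThreatActor(Enum):
    EVIL_DOER = auto()                # external, targeted exploitation
    HAVOC_WREAKER = auto()            # external, untargeted exploitation
    MALICIOUS_INSIDER = auto()        # trusted, creates vulnerabilities
    EVIL_INSIDER = auto()             # trusted, exploits (and may create) them
    CARELESS_INSIDER = auto()         # trusted, exposes vulnerabilities by error
    CARELESS_OUTSIDER = auto()        # supplier who introduces a vulnerability
    UNWITTING_COCONSPIRATOR = auto()  # in the flow, obfuscating good or bad

class InsiderCompromise(Enum):
    IDEOLOGICAL = auto()  # set out with, or converted to, intent to do evil
    DURESS = auto()       # forced by external pressure
    STRESS = auto()       # self-imposed financial/emotional/psychological stress
    UNWITTING = auto()    # duped into untrusted action

def is_insider(actor: ThreatActor) -> bool:
    """Only the trusted (insider) classes can carry a compromise type."""
    return actor in {
        ThreatActor.MALICIOUS_INSIDER,
        ThreatActor.EVIL_INSIDER,
        ThreatActor.CARELESS_INSIDER,
    }
```

Tagging each observed event with one actor class, and insiders with one compromise type, is what makes the patterns countable later.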
In my final extension (for I fear being accused of hyper-hyphenation), the "supply-chain of trust" means that the evil-doer or havoc-wreaker may be the malicious- or evil-insider employed by the careless-outsider or unwitting-coconspirator. If you get that last sentence, then you are waaay down the path to getting this already. This is really deep stuff, made for Big Data.
Throughout the evolution of cybersecurity, those who knowingly or unwittingly cause or exploit vulnerabilities block the evolution of our systems' function and capabilities.
Historical cybersecurity has evolved from:
- Nothing, to:
- Disaster protection, to:
- Barrier protection, to:
- Defense-in-depth and threat detection, to:
- Vulnerability identification and remediation, to:
- Automated integration and visibility of vulnerability and threat detection, to:
- Artificial Intelligence pattern finding, to:
- Offensive threat mitigation, and finally:
- Global Thermonuclear War (just added this one to see if you kept reading)!
In almost ALL of these cases, we are measuring machines or groups of machines (IP addresses, MAC addresses, DLL files, traffic flow, packet contents, certificate vulnerabilities, etc.). The next logical step in the evolution is to stop trying to see the threat as a bad file or device, and instead see it as a bad person who is hiding behind and across hundreds of machines and other people.
This is again where I get on my high horse about hyper-accurate, real-time matching of data. Like Internet of Things (IoT) theory, a person is the amalgamation and disambiguation of "signals" originated by all of the devices with which they interact (or don't). Those devices, no matter where they are in the information supply chain, come with a wide variety of attributes, differing addressing and enumeration schema, temporal or dynamic values, and even configuration/use errors - associating them back to a single person is critical and only possible using real-time, referential matching and linking (see Verato page and video if you need more background).
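To make the idea concrete, here is a toy sketch of referential matching: each device "signal" carries a handful of identifiers (MAC, account, badge, etc.), and any shared identifier transitively links two signals to the same person. This is my illustration only - it is not Verato's algorithm, and real matching must also handle temporal, dynamic, and fuzzy attributes:

```python
# Toy referential matching: signals that share any identifier value are
# linked (via union-find) into groups that resolve to one person.
from collections import defaultdict

def link_signals(signals):
    """signals: list of dicts mapping identifier name -> value.
    Returns groups of signal indices that resolve to the same person."""
    parent = list(range(len(signals)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    seen = {}  # (identifier, value) -> first signal index carrying it
    for idx, sig in enumerate(signals):
        for key_val in sig.items():
            if key_val in seen:
                union(idx, seen[key_val])
            else:
                seen[key_val] = idx

    groups = defaultdict(list)
    for idx in range(len(signals)):
        groups[find(idx)].append(idx)
    return sorted(groups.values())

# Hypothetical signals: a laptop, a VPN login, and a door-badge read chain
# together through shared identifiers; the last device stands alone.
signals = [
    {"mac": "aa:bb", "account": "jdoe"},
    {"account": "jdoe", "badge": "B-17"},
    {"badge": "B-17", "phone": "555-0100"},
    {"mac": "cc:dd"},
]
```

Here `link_signals(signals)` groups the first three signals as one person even though no single identifier spans all three - that transitive chaining is what makes the linkage back to an individual possible.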
Wrapping it all up, but still at a high enough level that you have to call me to figure out how to do it: evil-doers and havoc-wreakers worth their salt practice tradecraft to obfuscate the linkage back to the individual. We need to identify the "signals" of bad guys by reducing all of the "noise" of good guys. Further, we need to disambiguate the "signals" of bad guys embedded among the "signals" of the good guys (as in private-address routing with DHCP behind a service provider's Internet gateway that hosts a million good guys and one bad guy). Using the sub-classification system described above, referential matching, and a solid nodal approach to Big Data analysis (not BI), we can better understand the patterns created by good machines (and good people) and stop the behavior observed from "bad machines" by linking them back to people and the human behaviors classified above (and even further segmentation).
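The gateway example can be sketched in a few lines: a million good devices and one bad one share the same public IP, so the IP alone is pure noise; grouping flows by some per-device attribute (here a hypothetical TLS fingerprint - the attribute and the watchlist domain are both invented for illustration) isolates the one device behaving badly:

```python
# Toy disambiguation behind a shared NAT gateway: the gateway IP is common
# to everyone, so we pivot on a per-device fingerprint to find the bad actor.
from collections import defaultdict

WATCHLIST = {"evil.example.net"}  # illustrative indicator, not real intel

def isolate_bad_fingerprints(flows):
    """flows: iterable of (gateway_ip, device_fingerprint, destination).
    Returns {gateway_ip: set of fingerprints that touched the watchlist}."""
    hits = defaultdict(set)
    for gateway_ip, fingerprint, destination in flows:
        if destination in WATCHLIST:
            hits[gateway_ip].add(fingerprint)
    return dict(hits)

# Four flows from one gateway: three benign devices, one bad one.
flows = [
    ("203.0.113.7", "fp-good-1", "news.example.com"),
    ("203.0.113.7", "fp-good-2", "mail.example.com"),
    ("203.0.113.7", "fp-bad-9", "evil.example.net"),
    ("203.0.113.7", "fp-good-1", "shop.example.com"),
]
```

Once the single bad fingerprint is isolated, referential matching can carry that linkage the rest of the way back to a person, rather than blocking the gateway IP and the million good guys behind it.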
Let me know your thoughts on the high-level strategy and ideals about sub-classifications I have created thus far...