If I knew then what I know now

The eternal recursive human lament. If you’re a post-adolescent with a functional left-brain, the thought has surely occupied your mind. And it goes through stages, progressing from incompetent ignorance (“I know it all, so don’t you try to teach me anything”), to personal agnosticism (“I know enough to know that there’s a lot I don’t know”), to acceptance of inherent epistemological limitations (“not only can we never know everything there is to know, but there will always be unknown unknowns”). Call it an ironic effect of cognitive maturation, but the more we know, the more we realize we don’t know; to assert otherwise generally serves to do nothing but betray one’s own obstinacy, unconsciousness, or naiveté.

Despite this, we’ve all known people who pretend to omniscience. We expect this from the inexperience of youth, and we counsel the young to learn to appreciate the value of experience. But we’ve also known people who’ve grown beyond their youth but who persist in the pretense. And it’s toward these people that we wag our fingers, shake our heads, and with all the certainty of the imperfection of our knowledge caution “mark my words” or “just you wait and see”. Further, when such a disorder of critical faculties is exposed, the credibility of these obstinate omniscients begins to erode. The more they betray their poor judgment and inadequacies of understanding, the more they irredeemably mar their own reputations. Simply, it is ill-advised to take advice from the pathologically unenlightened, intransigent, or delusional.

But ill-advised as such misplacement of trust may seem, oddly it endures in the worship of certain conventions. Specifically, I am referring to the community of thriving information technology purveyors who purport to deliver comprehensive security in the form of a piece of software or an appliance. Indeed, the forces of economics and natural selection work to eliminate the inferior, unaffordable, or ineffectual, leaving the market with the most fit. But there is often an ironic consequence to this: the more fit a security product is perceived to be, the more likely it is to recklessly embolden those whom it is employed to protect, having the paradoxical effect of instilling a deleterious illusion of security. Yet this is but a small problem compared with the fact that attacks evolve according to the same forces driving the advancement of defensive countermeasures. The fittest attacks adapt to their adversaries, becoming increasingly stealthy and subtle, both in their delivery and in the perceptibility of their payload. Detection at the time of occurrence relies on intrusion detection/prevention systems that have ever-reduced visibility into increasingly covert attacks. Security Information and Event Management (SIEM) platforms can only report on and respond to the specific events they have been configured for. Log aggregation, analysis, and correlation tools can only act on the specific meta-information about the set of events that they have been configured to recognize. Application layer gateways, proxies, and their derivatives can only operate on the well-known protocols, procedures, and methods they were written to handle. Deterministic (pattern- or signature-based) methods of detection have mounting difficulty dealing not only with intentional obfuscations, but also with the inevitable window of exposure that exists between the introduction of an attack and the development and deployment of antidotal signatures. In response, some defensive systems are moving – often in unequal measures of practice and marketing – toward a cocktail of deterministic and non-deterministic (behavioral/anomaly-based) methods of detection; unfortunately, the latter, because of its deficient certitude relative to the former, cannot currently be employed with sufficient aggressiveness to achieve comparably material effectiveness, lest it introduce insufferable false positives. But these technologies will, of course, mature. And naturally, once these hybrid systems become sufficiently pervasive, the survival of the attacks will depend on their fitness at simultaneously impersonating “normal” behavior while minimizing detectability.
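
To make the contrast concrete, here is a minimal, purely illustrative sketch: the “signatures,” the anomaly score, and the thresholds below are invented for this example, not taken from any real product. A signature matcher can only flag what it already knows, while an anomaly scorer can flag novel traffic but, as its threshold is made more aggressive, begins to flag harmless traffic too.

    # Purely illustrative: hypothetical signatures and a toy anomaly score, not any real product's logic.
    KNOWN_SIGNATURES = [b"' OR 1=1 --", b"\x90\x90\x90\x90"]  # hypothetical patterns for already-known attacks

    def signature_detect(payload: bytes) -> bool:
        """Deterministic: can only flag payloads containing an already-known pattern."""
        return any(sig in payload for sig in KNOWN_SIGNATURES)

    def anomaly_score(payload: bytes) -> float:
        """Behavioral stand-in: fraction of non-printable bytes, assuming 'normal' traffic is mostly text."""
        if not payload:
            return 0.0
        return sum(1 for b in payload if b < 32 or b > 126) / len(payload)

    def hybrid_detect(payload: bytes, threshold: float) -> bool:
        """A cocktail of both methods; lowering the threshold is more aggressive but invites false positives."""
        return signature_detect(payload) or anomaly_score(payload) > threshold

    benign_request = b"GET /index.html HTTP/1.1"
    benign_upload = bytes([0x1F, 0x8B, 0x08, 0x00]) * 10   # compressed data merely looks "abnormal"
    novel_attack = b"POST /login HTTP/1.1\r\n\r\nuser=" + bytes(range(128, 168))  # no signature exists for it yet

    print(signature_detect(novel_attack))     # False -- deterministic detection misses the unsigned variant
    print(hybrid_detect(novel_attack, 0.9))   # False -- a conservative anomaly threshold also misses it
    print(hybrid_detect(novel_attack, 0.5))   # True  -- an aggressive threshold catches it...
    print(hybrid_detect(benign_upload, 0.5))  # True  -- ...and also flags harmless traffic (a false positive)
    print(hybrid_detect(benign_request, 0.5)) # False -- ordinary traffic still passes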

On accepting this scenario of reciprocally adaptive, coevolutionary equilibrium (the Red Queen hypothesis), instantly exposed is the unfortunate tendency of the infosec industry to consider foremost currently manifest (i.e. un-adapted) threats, barely acknowledging the inevitability of the unknown. More regrettable is that the victims of such imperfect criteria are IT buyers and implementers, who largely depend on the guidance of these head-in-the-sand, static assessments when building their defenses. For all the fervor the industry has for the term “arms race”, why does this seemingly willful deception persist? The phenomenon is understandable considering the psychological implications of the alternative. Active countermeasures (firewalls, UTMs, IPSs, SIEMs, etc.), regardless of their actual effectiveness or ineffectiveness, also create an illusion of control, in much the same way that taking our shoes off at the airport is intended to create an illusion of control. This is best described by Bruce Schneier’s concept of “Security Theater”. The benefits are not entirely illusory – the firewall will stop known attacks, just as the TSA will prevent the next Richard Reid. In effect, this is defense by deterrence: we expect that the competent attacker knows what the defender is looking for, and will therefore not waste resources on that particular form of attack. But there is a big difference. The cost to an airline attacker trying the old shoe bomber attack is a plane ticket and life in prison (or worse), while the cost to an anonymous internet attacker trying the old SQL Slammer attack is virtually nil. Economically, the internet attacker has an effectively limitless resource with which to launch attacks, so he can launch an attack a million times at no cost, and even if it is effective only .001% of the time, it still hits 10 victims. The notion of deterrence does not apply when there is no cost. And as mentioned earlier, unlike certain instances of real-world security where the illusion of security is potentially beneficial (e.g. to minimize panic, or to keep shoppers shopping), this sort of illusion is entirely detrimental in the case of information security.
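
The arithmetic behind that claim is worth making explicit. A rough sketch, using only the hypothetical figures from the paragraph above (a million free attempts, a .001% success rate), not measured data:

    # The figures are the illustrative ones from the text above, not empirical measurements.
    attempts = 1_000_000        # an anonymous attacker can replay an old exploit essentially for free
    success_rate = 0.001 / 100  # "effective .001% of the time" expressed as a fraction
    cost_per_attempt = 0.0      # negligible marginal cost per attempt on the internet

    expected_victims = attempts * success_rate
    total_cost = attempts * cost_per_attempt

    print(round(expected_victims))  # 10  -- ten compromises for effectively zero expenditure
    print(total_cost)               # 0.0 -- so there is nothing for deterrence to bite on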

Other simple principles of game theory also apply. It is obvious that once a well-informed, rational attacker knows what the defender is looking for, his method of attack will evolve. It is a repetitive, sequential game in which the attacker has a clear second-mover advantage, for a number of reasons: the attacker chooses both whom to engage in a round of play and when, allocating resources only when perfectly advantageous and maintaining the element of surprise, while his opponent is obligated to participate in every round (i.e. attempt to defend against all known attacks from all attackers). The attacker has full access to study the systems of defense (his opponent’s move) for as much time as is needed to mount an effective attack, while his opponent lacks all visibility into the attacker’s operation. The attacker does not have quality-control standards or other impediments to moving quickly (i.e. expeditious “product” deployment), whereas his opponent must methodically abide by company, market, and regulatory standards for product releases. It can seem unbalanced almost to the point of futility.
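
That second-mover advantage can be caricatured in a few lines. A toy simulation, under assumptions invented purely for illustration (the attack names, the 30% engagement rate, and the round count are all made up): because the attacker observes the deployed defenses before moving, and plays only when he chooses to, a defense built by enumerating known attacks loses every round the attacker actually plays.

    import random
    from typing import Optional

    # Purely illustrative: hypothetical attack names and probabilities, not real exploits or data.
    known_attacks = {"sql_slammer", "code_red", "nimda"}

    def defender_move() -> set:
        """First mover: must field defenses against every known attack, every round."""
        return set(known_attacks)

    def attacker_move(defenses: set) -> Optional[str]:
        """Second mover: studies the defenses at leisure and engages only when it suits him."""
        if random.random() < 0.7:
            return None  # sits out most rounds, at no cost
        variant = f"novel_variant_{random.randint(1, 10**6)}"
        while variant in defenses:  # trivially picks something the defender is not looking for
            variant = f"novel_variant_{random.randint(1, 10**6)}"
        return variant

    compromises = 0
    for _ in range(100):
        defenses = defender_move()
        attack = attacker_move(defenses)
        if attack is not None and attack not in defenses:
            compromises += 1
            known_attacks.add(attack)  # the defender learns the new signature, one round too late

    print(compromises)  # roughly 30 of 100 rounds end in compromise; the defender never wins a round the attacker plays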

OK, let’s pull ourselves together. We need to take some action, don’t we? Psychologically, not employing active countermeasures (imperfect as they might be) in our own defense would be an admission of defeat, an exhibition of learned helplessness… merely planning to fail. Of course we need active defenses to the extent that we’ve assessed our risk and have identified appropriately practicable (effective and affordable in terms of direct, indirect, and opportunity costs) countermeasures. But at the same time, we need to consider that security is composed not only of technology, but also of people and processes; that these components of security are each, on their own, imperfect; that we rely on each of these imperfect pieces to try to compensate for the others’ imperfections; that our adversaries’ technologies evolve as quickly as our defensive technologies; and that the very rules of the game are unfair. In short, even after mounting our best defense, we must still expect failure. As the unattributable (yet oddly Sphinx-like) saying goes: “he who fails to plan, plans to fail” – but it could be extended with “and he who fails to factor failure into that plan, plans to epic fail”.

We humans have a very difficult time with the concept of failure. We can regard its occurrence as an almost unbearable indignity, and any inadequacy in planning to avoid it as a tragic character flaw (that is, unless we have the strength to persevere, learn, and transcend – then it becomes a positive thing, at least until the next time). Failure is a very emotional experience. Fortunately, automata do not yet have quite the same response to failure, so when today’s firewall inevitably fails to defend against tomorrow’s attack, it doesn’t slip into a funk. We should aspire to such pragmatism, and embrace failure. We should accept that our emotions are at odds with our rationality, and work to correct such human emotional maladaptations as blindly ignoring the potentiality of failure, focusing disproportionate resources on often illusory active defenses, and building system after system without failure as an inherent component of design.

Ultimately, failure is not a matter of “if”, it is a matter of “when”. Pride should not prevent us from building this certainty into our systems, and we should question the faith we have in those who commit such a sin against us.


