August 2020

Hunting the Chimera

Whatever side you’re on, an undeniable effect of the ongoing debate over the reality of cyberwarfare is the infiltration of the term “cyberwar” into our vernacular. We have all gradually come to accept “cyber” as the fifth present or potential domain of warfare after land, sea, air, and space. We are becoming increasingly aware of such previously arcane terms as “SCADA” (Supervisory Control and Data Acquisition systems), and of what these computer systems monitor and control, namely, our “critical infrastructure”, defined ominously by the Department of Homeland Security as the collection of assets that “our society depends upon such that if it were damaged or destroyed, it would have a significant impact on our ability to function. Think of the nation’s power grid or banking system. The Internet. Water treatment facilities. Nuclear power plants. Transportation. Our food supply chain and agriculture.” And whatever side you’re on, we are wondering at the costs (economic, political, and, not least of all, social) of abstaining from or engaging in such a war.

Both sides of the debate are replete with pundits and other outspoken personalities from intelligence, defense, legal, and information security backgrounds, and both sides make cogent, albeit familiar, arguments. Cyberwar deniers argue that while there is certainly rampant cyber crime, it falls far short of qualifying as warfare; after all, if it is war, where are the casualties? More injuriously, they claim that the hype is plain scaremongering with thinly veiled financial and power ambitions. The financial beneficiaries would be the defense-industrial base players (Lockheed Martin, Northrop Grumman, Raytheon, Boeing, General Dynamics, Booz Allen Hamilton, etc.) with their mushrooming cyber services divisions, as well as commercial information security vendors. The power beneficiaries, far more worryingly, would be governments, as they legislatively wrest Internet freedoms away from the public, coercing people to trade civil liberties and what little online privacy is left for the promise of increased security. Substantiating these warnings are programs such as the NSA’s recently announced “Perfect Citizen”, a surveillance program intended to protect primarily privately owned critical infrastructure systems, and the 40 pieces of cyber legislation currently circulating on the Hill.

Apologists warn of public ignorance of the commonness and frequency of ongoing attacks, and of a gross underestimation of the consequences of the looming “electronic Pearl Harbor.” They offer vividly traumatizing movie-plot examples, from collapsing water treatment facilities, oil pipelines, air-traffic control systems, and electrical grids to financial armageddon scenarios such as the one presented at an Intelligence Squared cyberwar debate on June 8, 2010 by Mike McConnell, former Director of National Intelligence and former Director of the NSA:

“Let me give you just a way to think about it. The United States economy is $14 trillion a year. Two banks in New York City move $7 trillion a day. On a good day, they do eight trillion. Now think about that. Our economy is $14 trillion. Two banks are moving $7 trillion to $8 trillion a day. There is no gold; they’re not even printed dollar bills. All of those transactions, all those transactions are massive reconciliation and accounting. If those who wish us ill, if someone with a different world view was successful in attacking that information and destroying the data, it could have a devastating impact, not only on the nation, but the globe. And that’s the issue that we’re really debating.”

Despite the nightmarish portrayals, and in dismissal of the deniers’ “where are the casualties?” cry, proponents generally acknowledge that while we are not in the midst of a cyberwar, if a real war were to erupt, cyber is certain to be a theater, and that we are currently ill prepared for such an engagement, both defensively and offensively. One thing all parties can agree on is that we’re facing colossal levels of cyber threats and cyber crime, and that something must be done to mitigate the epidemic. And here emerges the fundamental difference between the two camps: whether or not any degree of governmental surveillance or militarization of the Internet is necessary to accomplish this. At its core, the debate is about openness, transparency, anonymity, and privacy; it is a question of trust.

When considering the issue of privacy versus security on the Internet, technologies such as DPI (deep-packet inspection, capable of scanning and performing pattern-matching on all the content of all network traffic, and a centerpiece of the EINSTEIN 3 program) and trusted identity systems (such as the recently proposed NSTIC, the National Strategy for Trusted Identities in Cyberspace blueprint) immediately spring to mind. Skeletons of earlier Big Brother initiatives, such as Clipper, Carnivore, Total Information Awareness, and Echelon, are dragged out of the closet by the ACLU, EFF, and EPIC, a ritual that some claim is little more than eye-for-an-eye scaremongering disguised as education. Bandwagon accusations of everything from government incompetence to outright evil are made, and emotional terms like “fascist” and “mark of the beast” are flung by many who clearly never bothered commenting on, or even reading, any of the proposals.

Take, for example, NSTIC, which was drafted in collaboration with the civil liberties and privacy communities. NSTIC’s Identity Ecosystem proposal is voluntary, and among its guiding principles is adherence to the eight Fair Information Practice Principles (FIPPs): transparency, individual participation, purpose specification, data minimization, use limitation, data quality and integrity, security, and accountability and auditing. Regardless, the din of accusations of a government power grab has smothered this evolving, open program’s well-deserved positive reviews. Much to our detriment, facts are among the most recent casualties of popular distrust of government.

Have there been bad government actors throughout history who have betrayed the public’s trust? Yes, but despite such past events and our deeply ingrained negativity bias, it is neither reasonable nor in our best interest to mechanically and uncritically distrust all government. Instead, we should seek to restore trust. Only actions and their effects can achieve this, so we should endeavor to move forward collaboratively rather than to filibuster and stifle; after all, “any jackass can kick down a barn, but it takes a good carpenter to build one.” Cyber legislation is new terrain, and we owe ourselves participation as co-navigators. By taking the time to evaluate government initiatives, we can assess whether or not they have, in sufficient degree, the requisite ingredients of trust: political process, oversight, and public accountability.

Further, we must be reasonable in accepting that governments must operate with a certain level of secrecy as a means of ensuring national security. McConnell illustrates this point well:

“The equivalent of the National Security Agency was breaking Nazi Germany’s code in World War II. Historians argue that that probably shortened the war by 18 months to two years, saved countless lives and incredible resources. Did the American people have the right to know that NSA was breaking Nazi Germany code in World War II? Because if they had known, the Germans would have known, and all they had to do was take it away by changing the rotors. Secrecy gets a very bad name in our society. American citizens don’t like spies in spite of the fact that the first spy master was George Washington. Secrecy is a necessity.”

What are the government’s goals in proposing such controversial technologies as identity systems, data collection, and DPI? To spy on its citizens? To foment public discontent? To squander taxpayer dollars? To implement a stopgap until fMRIs are finally embedded in all of our smartphones?

No. The goal of these technologies is to hunt a chimera, a mythical creature composed of multiple parts: one part “Attribution” and one part “Situational Awareness”.

Attribution, the accurate identification of an actor or agent, is an elusive beast in the cyber domain. In the real world, the sources of actions can generally be traced, because few of us have mastered the arts of astral projection or telekinesis. The Internet, however, provides the perfect environment for virtualization, abstraction, and indirection. IP addresses are not trustworthy, as traffic can be tunneled through proxy servers and onion routers, either to conceal identities and locations or to maliciously implicate other parties in an act. Worse, even if hosts can be identified, there is no reliable connection between actor and host, whether because the attacker employed a botnet or simply because of our inability to know who was really at the keyboard.
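The indirection problem can be sketched in a toy simulation: each relay in a proxy or onion-routed chain rewrites the apparent source address, so the victim's logs identify only the final hop. All hosts and addresses below are hypothetical, drawn from the RFC 5737 documentation ranges.

```python
# Toy illustration of why source-IP attribution fails.
# Every address here is hypothetical (RFC 5737 documentation ranges).

def relay(packet, hop_ip):
    """Forward a packet, replacing its source with this hop's address."""
    return {**packet, "src": hop_ip}

origin = {"src": "203.0.113.7", "payload": "attack"}
hops = ["198.51.100.2", "192.0.2.9", "192.0.2.45"]  # proxy/onion chain

packet = origin
for hop in hops:
    packet = relay(packet, hop)

# The target's logs record only the exit node, not the true origin.
print(packet["src"])  # 192.0.2.45
```

Tracing backwards would require the cooperation (and the logs) of every intermediate hop, which is precisely what an adversary denies us.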

Situational awareness (SA) may be defined as an attempt to develop a comprehensive and intelligible common operating picture of complex, dynamic, multivariate systems across multiple commands. Or it may be defined, more tersely, as omniscience. It is the military equivalent of Laplace’s Demon, described by its 19th-century inventor, Pierre-Simon, Marquis de Laplace, as follows:

“We may regard the present state of the universe as the effect of its past and the cause of its future. An intellect which at any given moment knew all of the forces that animate nature and the mutual positions of the beings that compose it, if this intellect were vast enough to submit the data to analysis, could condense into a single formula the movement of the greatest bodies of the universe and that of the lightest atom; for such an intellect nothing could be uncertain and the future just like the past would be present before its eyes.”

The thought experiment is an interesting one, but its conjectured strain of determinism has been largely refuted by such discoveries as quantum mechanics and psychopathology. Reality, it turns out, is simply not that predictable.

While not as evasive as attribution, SA is the more formidable quarry. We pursue it through combinations of DPI and log, NetFlow, and statistical analysis. But these methods are as imperfect as Laplace’s Demon, at once empowered and thwarted by determinism. We identify events of interest with finite automata, heuristics, and algorithms, but these all rely on signatures, rules, pre-classification, and prediction. So long as we can describe events, we can detect and prevent them, but the moment they escape the realm of the predictable (as the more highly evolved adversarial attacks are wont to do), they become invisible.

A good way to illustrate the shortcomings of such purely predictive security models, and of omniscient SA aspirations in general, is to contrast the seemingly similar phrases “no sign of infection” and “sign of no infection”. The first, “no sign of infection”, means that our current methods of classifying and detecting an infection have produced negative results. The second, “sign of no infection”, means that there is unequivocal evidence of the absence of infection. No doctor in his right mind would ever use the second phrase. Why? Malpractice aside, because that sort of certainty is unattainable. At best we can hope for “no sign of infection”, where, to the best of our predictive abilities, we are confident that results are negative, while knowing this might be a false negative. This is how all situational awareness initiatives work: they say, in effect, “we are as confident as we can be that infection results are negative, but don’t get too comfortable, because it might be a false negative.”
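A minimal sketch of signature-based detection makes the asymmetry concrete. The signatures and events below are hypothetical; the point is that such a detector can only ever report “no sign of infection”, never “sign of no infection”.

```python
import re

# Hypothetical signature set: detection is only as good as what we
# managed to describe in advance.
SIGNATURES = [re.compile(p) for p in (r"DROP TABLE", r"/etc/passwd", r"cmd\.exe")]

def inspect(event: str) -> bool:
    """Return True if any known signature matches the event."""
    return any(sig.search(event) for sig in SIGNATURES)

known = "GET /search?q=1;DROP TABLE users"
variant = "GET /search?q=1;DR%4FP+TABLE+users"  # trivially obfuscated

print(inspect(known))    # True: matches a described pattern
print(inspect(variant))  # False: a false negative, invisible to the rules
```

The second result is exactly the “no sign of infection” case: the rules came up empty, but the attack is still there.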

And it is for this reason (that we cannot predict everything) that situational awareness must also have a retrospective component. Retrospection lets us concede that we are not omniscient and cannot know everything in advance, while retaining the ability to go back and reexamine the past once we have the benefit of later knowledge. This is why data collection, and the persistence of surveillance information, is critical to any serious security program.
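The retrospective component can be sketched as replaying retained raw events against knowledge acquired after capture time. The log entries and attack pattern below are hypothetical.

```python
import re

# Hypothetical retained event log: events are stored verbatim so that
# signatures learned later can be applied to the past.
event_log = [
    "2010-07-01 10:02 GET /index.html",
    "2010-07-02 03:14 GET /cgi-bin/view?x=__exploit_v2__",
    "2010-07-03 09:30 GET /about.html",
]

def rescan(log, new_pattern):
    """Re-examine stored events with knowledge unavailable at capture time."""
    sig = re.compile(new_pattern)
    return [entry for entry in log if sig.search(entry)]

# Weeks later a new attack pattern is published; replaying the log
# surfaces the event that was invisible when it occurred.
print(rescan(event_log, r"__exploit_v2__"))
```

Without the stored data there is nothing to replay, which is the security argument for retention; the privacy cost of that same retention is, of course, the crux of the debate.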

Are perfect attribution and situational awareness achievable? No, they are lofty illusions. But there are valuable incremental gains to be had here, so neither should we allow the perfect to be the enemy of the good, nor should we allow an irrational fear of government to deny us the potential of improved defenses. On the contrary, we should do what might, in our imperfect awareness, seem counterintuitive, and support the hunt for the chimera. Rabid privacy advocates, minarchists, and those who would commit the all-too-common informal fallacy of making a slippery-slope argument about the perils of ceding rights to the government are advised to steel themselves by referring to the part of the Constitution’s preamble about “provide for the common defence”, and applying it to the 21st century. Government does not consider us the enemy, but we must accept that in our virtualized, interconnected, malware-infested, cyber-dependent world, the enemy is among us, and it is government’s charter to defend us.
