Where’d that firmware come from?

The word “hacker” is very frequently misused, insomuch as jargon can be misused. But who would dare argue with an RFC? This venerable 15-year-old document incontrovertibly defines a hacker as “a person who delights in having an intimate understanding of the internal workings of a system, computers and computer networks in particular. The term is often misused in a pejorative context, where ‘cracker’ would be the correct term.” Whether it’s your iPhone, your TiVo, your Xbox, or your WRT, there are plenty of hobbyist sites dedicated to these sorts of delightful hacks, and others. And that’s fine – if you buy a consumer electronics product, you should be able to modify it, enhance it, or destroy it in whatever fashion you so choose. Worst case, you’re out a few bucks. Or maybe you go to prison because some rabid IP attorney heaps DMCA or EUCD copy protection violation ambiguities on you in just the right Goldilocks configuration.

If the only control a product vendor offers to protect the original/authorized state of a hack-worthy product is a “please don’t change stuff” EULA, the effectiveness will hover around zero. If the vendor goes a little further by putting in place a relatively weak technological control, the effectiveness will increase, but it will be somewhat offset by the fact that a difficult but surmountable challenge generally makes a successful hack even more delightful for the hacker. So given that nothing is unbreakable, to what extent should security or mission-critical vendors go to protect the integrity of the code of their products?

According to this, further than they do. Call it phlashing, malicious hacking, or bricking: the exploit described here loads intentionally non-functional code onto a target as a “permanent denial of service attack”. Although loading naughty firmware can often be done remotely, thus making this a “remote exploit”, the act of loading firmware generally requires administrative access. As long as 1) the default username:password has been changed, 2) admin credentials have not been compromised, and 3) firmware is not loadable through some other insufficient-validation backdoor or exploit, this becomes far less threatening.

Still, responsible vendors could (and do) take further steps to defend against adulteration, such as:

  • Enforcing the use of digitally signed firmware images
  • Performing hash verification of the images to protect against tampering/fuzzing
  • Verifying that the loaded image is intended for the target platform (to ensure that ‘wrong’ versions of firmware aren’t loaded)
  • Providing an unalterable bootstrap method of recovery, i.e. a SafeMode at a ‘brick-proof’ level below the running firmware, in the unlikely event of firmware corruption
  • Using Secure Compact Flash and/or encrypted disk file systems to protect against code being directly written to a boot device
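
To make the first two bullets concrete, here is a minimal sketch (in Python) of what an image-verification routine might look like, assuming the vendor ships a detached RSA signature with each image and bakes its public key and a platform identifier into the bootloader. The names and the scheme are illustrative, not any particular vendor’s implementation:

# Hypothetical firmware verification, using the third-party "cryptography"
# package. The RSA signature covers the SHA-256 digest of the image, so a
# single check handles both origin authentication and tamper detection.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

EXPECTED_PLATFORM = b"ACME-UTM-2000"  # illustrative platform magic string

def verify_firmware(image: bytes, signature: bytes, vendor_pubkey_pem: bytes) -> bool:
    public_key = serialization.load_pem_public_key(vendor_pubkey_pem)
    try:
        # Origin and integrity: fails on any unsigned or tampered image.
        public_key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
    except InvalidSignature:
        return False
    # Platform check: refuse images built for a different target.
    return EXPECTED_PLATFORM in image[:256]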

Sure, some of these measures are designed to protect against only physical attacks, and it’s common to hear the argument that “once the attacker has physical access, the battle is lost”, but what about before the product is put into production? A big part of the Common Criteria EAL certification process includes such questions as “who has access to your source code?”, “who controls access to the source code?”, “what compilers, toolchains or development environments are used in the creation of your products?”, “by whom and how is firmware loaded onto the appliances?”, and “how is the finished product transported from manufacturing, through distribution, to the consumer?”

A truly secure product will defend against all of this.


The Risk of Productivity

Last month’s RSA and Interop shows really demonstrated our industry’s penchant for the (sometimes seemingly incognizant and exploitative) overuse of the word “risk”. Being so beaten over the head with the word serves as a reminder that the measurement of risk isn’t easy. First, it’s strongly affected by situational context. A dignitary on a turbulent flight would be more likely to exaggerate the risk and anxiety of a plane crash, whereas a person surfing off the California coast would probably be thinking less of plane crashes and more of the risk of sharks. And this sort of variance in the measurement of risk is a good thing, otherwise either no risks or too many risks would be taken, and things would either stagnate or self-destruct.

Next, in addition to this contextual, and often emotional, weighing of the imminence of a risk (the “what if the bad thing happens?”), there is also the need to weigh the expected utility of taking the risk (the “what if I try this, and the effort is successful?”). Here, people can fall into two camps: those who more value gain, and those who more value the avoidance of loss. For example, would you wager $40 for a 50/50 chance (say a coin toss, heads you win, tails you lose) to make $100? If you say “yes,” then perhaps you more highly value gain, and if you say “no,” then you more highly value the avoidance of loss. But was this a fair question? Yes, because it serves to illustrate another important point, the distinction between expected utility and expected value. Expected utility is, again, often contextual and emotional, while expected value is mathematically calculable. In the above example, the expected value of the wager could be calculated by using the law of large numbers: over 1,000,000 tosses, we should begin to approach a distribution of 500,000 heads and 500,000 tails. For each head, the net gain is $60 and for each tail, the loss is $40. Multiply 60 by 500,000 and you get a total probable gain of $30,000,000. Multiply 40 by 500,000 and you get a total probable loss of $20,000,000. Divide by 1,000,000 (the total number of tosses) and the expected value of each toss is $30 minus $20, or a gain of $10 per toss. Did you decide to take the wager? Does your expected utility agree with the expected value? Should you have taken the wager? Let’s just say you won’t find this game in Vegas.
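
For the skeptical, the law-of-large-numbers claim is easy to check empirically. A quick simulation sketch in Python, using the $60/$40 payoffs of the wager above:

# Over many tosses, the average outcome converges on the calculated
# expected value of +$10 per toss.
import random

def average_outcome(tosses: int = 1_000_000) -> float:
    total = 0
    for _ in range(tosses):
        total += 60 if random.random() < 0.5 else -40  # heads: +$60, tails: -$40
    return total / tosses

print(average_outcome())  # prints a value very close to 10.0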

The recognized start of this sort of thinking was the 1730s, when a paper was presented to the Imperial Academy of Sciences in St. Petersburg which carried the following quotes:

  • The value of an item must not be based on its price, but rather on the utility that it yields.
  • The utility resulting from any small increase in wealth will be inversely proportionate to the quantity of goods previously possessed.

The author was Daniel Bernoulli, regarded by many as the father of risk analysis. He went further to describe that wealth is perceived more as a factor of productive capacity than of assets. From these principles, one might make the following extrapolations:

  • Reducing the value of utility reduces the amount of risk that will be taken in its pursuit.
  • Reduction of the value of utility can be achieved by increasing its availability (supply), independent of the context of that risk.

Mapping all these concepts to information technology:

  • The tendency for people to take risks with the security of their data must be at least partially motivated by their pursuit of the “wealth” of increased productivity.
  • We should then be able to decrease the tendency for individuals to take such risks by decreasing the value of productivity.
  • Therefore, it is not entirely unreasonable to suggest that organizations could reduce their overall IT risk posture by rewarding employee productivity less.

Decrease productivity? Crazy, sure… But less risk and an increased chance for employee life/work balance ought to be worth something in offset to the shareholders.


More $ = More Security?

Perhaps… if it’s the illusion of security rather than real security that’s being qualified (assuming the all-too-likely case that illusion and reality are, in fact, different, and that it’s illusion that is more popularly important). At least that’s a conclusion that one could easily arrive at by extension of this study on wines. Accusations of pretense and snobbery aside, it’s really no great surprise that the value of a thing is not based solely on its intrinsic properties, but that expectations of value are manipulable by changes to external properties (price for wines, loudness for music, contrast for video images, etc.). What is surprising, however, is that it’s not merely expectations but actual perceptions that are subject to manipulation.

But the wine study participants were not actually paying for the wine they were tasting, they were simply informed of the (alleged) prices. Which suggests that the mere knowledge of the usual price of a thing might be enough to create an exaggerated expectation/perception effect. What does this mean to security? For vendors selling products it could mean “charge more money and network/security admins will perceive your product as providing better security”, or it could mean “employ a deliberate strategy of giving your products a very high suggested retail price and then discount deeply to simultaneously create the illusion of superiority while remaining price competitive.” Sound like this might be describing anyone in the industry?

This sort of manipulation of perception, however, would seem to end with the “consumer” of the product – in the case of security products, that would be the network/security admin rather than the collection of end-users that are protected by the product. And since one might posit that an illusion of security can actually weaken the overall posture of real security (by emboldening people to behave more recklessly), it follows that its intentional intensification would be a bad thing. Weaving these pieces of data together, there’s at least potential for us to put it to good use: I’d be curious to see the effect of admins delivering a message to end-users along the lines of “we just installed a new network security appliance that cost 30% less than the competition, but it actually provides better security.” If such an attempt at a benevolent manipulation of expectations and perceptions alone could succeed at eliciting more responsible and careful IT behaviors among users, imagine how much more practically valuable the net effect would be if the product claims also happened to be true?


Tough Love

Techno-eschatologists rejoice! The first sign that the end of days is nigh has come to pass. Lo, we have suffered what the professional fomenter convocation has declared the first significant hypervisor-level virtual machine security exploit: a VMware Shared Folders directory traversal vulnerability. And with that they reveal that contrived validation is no less sweet than the real thing. Blasphemer! An allegation of exaggeration? Yup. First, because the event has been sensationalized, and second because it’s only a virtualization vulnerability by correlation, not causation.

First, to call this a hypervisor-level attack against VMware is not entirely accurate because it only affected the Workstation, Player, and ACE platforms. Even on affected platforms, it required that shared folders be enabled, which by default they are not on current versions. It did not affect Server, Fusion, or ESX Server, and since ESX Server is categorically the only true hypervisor (a type 1, “bare-metal” hypervisor; the rest are type 2 virtual machine monitors, living atop an existing OS), this was not a hypervisor vulnerability. Why am I mentioning this? Not to split hairs, but to underscore the fact that the vulnerability exists through an interface of convenience exposed by the VMM application running on the host’s general OS.

Second, to suggest that the augurs were correct and that we’ve seen a failing in virtualization security commits the all-too-common logical fallacy cum hoc ergo propter hoc. Although this occurred on the VMware platform, it was not caused by virtualization. Rather, it was caused by inadequate input validation – the same root cause of very nearly every exploit ever recorded. The fact that it struck VMware is coincidental, and the only commentary it makes on virtualization is that it’s been very widely adopted.

So how can we protect against these sorts of weaknesses? Code analysis and fuzzer testing have become table stakes. Any developer not employing such tools should be pilloried for criminal negligence and misfeasance. How about going further, like employing good design principles from the start rather than trying to catch the problems exposed by bad principles in analysis or QA? Anyone writing software should know Saltzer and Schroeder’s eight design principles, which summarily prescribe:

  1. Economy of Mechanism – “Perfection is achieved, not when there is nothing left to add, but when there is nothing left to remove.” – Antoine de Saint-Exupery.
  2. Fail-Safe Defaults – Access should be denied by default, and only allowed when an explicit condition is met.
  3. Complete Mediation – Every access to every object should be checked.
  4. Open Design – The strength of the mechanism should not come from its secrecy. Protection mechanisms should be separate from the protection keys. Don’t rely on security through obscurity.
  5. Separation of Privilege – When feasible, access should only be granted when multiple conditions are met.
  6. Least Privilege – Only the minimum necessary access privilege should be granted to users, programs, or any other entity.
  7. Least Common Mechanism – As few entities (functions, programs, users, etc.) as possible should share the protection mechanism to minimize the chance or extent of rejection or corruption.
  8. Psychological Acceptability – Mechanisms should be easy to use, to increase the chances of the protection mechanisms being used, and used correctly.
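
As a toy illustration of principles 2 and 3 (fail-safe defaults and complete mediation), here is the shape such an access check might take, sketched in Python with purely illustrative names:

# Access is denied unless an explicit allow rule matches, so errors and
# omissions fail closed rather than open.
ALLOW_RULES = {("alice", "report.pdf", "read")}  # hypothetical rule set

def is_allowed(user: str, obj: str, action: str) -> bool:
    # Complete mediation: every access to every object goes through here.
    return (user, obj, action) in ALLOW_RULES  # no matching rule means deny

assert is_allowed("alice", "report.pdf", "read")
assert not is_allowed("bob", "report.pdf", "read")  # no rule: denied by default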

It’s the first one, Economy of Mechanism, that I’d like to focus on here. Did VMware really need to provide the shared folders feature? Sure, it’s a convenience, but is mapping a drive using the Microsoft/Samba provided and maintained CIFS interfaces that much more difficult? Rather than caving to the demands of an increasingly impatient and entitled base of users (of which I, too, am one), maybe it would make sense for developers to resist the temptation to provide yet another feature, convenience, interface, exposure, and attack surface to have to protect?

Superfluous convenience breeds dependency and weakness. Maybe someday something will trigger a maturation whereby we become less inconvenience-averse, and we realize that removing functions, machinery, and complexity is a surer path to simplicity and security than irrationally continuing to layer them on.


Banners Are Poor Liars

In response to a risk assessment and security audit being performed by one of the proliferating lot of peddlers of such services, a friend recently asked my position on obscuring the banner on our application platforms. This “best-practice” of concealing the true identity of web and FTP servers, SMTP engines, et al, has been around for some time, built on the premise that the less an attacker knows about a target, the less specific and effective the resulting mounted assault.

This logic is certainly true in the event of targets with known, exploitable vulnerabilities. For example, advertising “220 mail.wherearemysocks.com ESMTP Sendmail 8.6.1/8.5.0” (unless it’s a honeypot) is probably inviting trouble. But if the server is well-maintained and fortified by both the producer (vendor, author) and employer (implementer, admin), then the attack surface is minimal. One might even go so far as to argue that advertising up-to-date versions of platforms with relatively unblemished reputations might even serve as a deterrent to less ambitious attackers.

Responsible and diligent maintenance and hardening of publicly accessible interfaces will always provide more legitimate security than dissemblance, both against large sweeping recon scans, and particularly against targeted probing. After all, criminals tend to be an untrusting lot, so why would they believe what a banner tells them, especially when fingerprinting tools like HTTPrint and SMTPScan are so readily available?

Rather than relying solely on the information provided in a banner, application fingerprinting tools try to induce predictable responses, behaviors and patterns in their targets so as to make an educated guess about the engine. At first this might seem a lot like the OS detection feature of NMAP, which relies on certain platform specific TCP behaviors. The difference, however, is that most TCP/IP stack authors got wise to the way this predictability was being exploited, and began to introduce elements of randomness within TCP’s tolerance, largely neutering the effectiveness of TCP-based OS detection. Unlike the sort of flexibility afforded to TCP stack authors to add randomness, application authors are generally strongly bound by parameters, syntax, and behaviors.
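
To see how little weight the banner itself carries, consider how easily claimed identity and observed behavior can be compared. A toy probe in Python (a real tool like HTTPrint uses far richer signature sets; the host and requests here are illustrative):

# Compare what a server claims with how it behaves when handed something
# unusual, such as an illegal HTTP method; different engines answer such
# probes in characteristically different ways.
import socket

def probe(host: str, request: bytes) -> bytes:
    with socket.create_connection((host, 80), timeout=5) as s:
        s.sendall(request)
        return s.recv(4096)

banner = probe("example.com", b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
quirk = probe("example.com", b"QUIRK / HTTP/1.0\r\nHost: example.com\r\n\r\n")
print(banner.split(b"\r\n")[0])  # the advertised status line
print(quirk.split(b"\r\n")[0])   # the behavioral tell: how the bad method is rejected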

A few examples of the ineffectiveness of obscuring HTTP banners:

The www.wellsfargo.com web server advertises itself as “KONICHIWA/1.0”, but HTTPrint rather confidently detects it as “Netscape-Enterprise/6.0”:

C:\temp\httprint_win32_301>httprint.exe -P0 -h www.wellsfargo.com -s signatures.txt
httprint v0.301 (beta) - web server fingerprinting tool
(c) 2003-2005 net-square solutions pvt. ltd. - see readme.txt
http://net-square.com/httprint/
httprint@net-square.com

Finger Printing on http://www.wellsfargo.com:80/
Host Redirected to https://www.wellsfargo.com:443/
Finger Printing Completed on https://www.wellsfargo.com:443/
--------------------------------------------------
Host: www.wellsfargo.com
Derived Signature:
KONICHIWA/1.0
9E431BC86ED3C295811C9DC5811C9DC5811C9DC594DF1BD04276E4BBC184CB92
7FC8D095AF7A648F2A200B4C811C9DC5811C9DC5811C9DC5811C9DC52655F350
FCCC535B811C9DC5FCCC535B811C9DC568D17AAE2576B7696ED3C2959E431BC8
6ED3C295E2CE6922811C9DC5811C9DC5811C9DC56ED3C2956ED3C295E2CE6923
E2CE6923FCCC535F811C9DC568D17AAEE2CE6920

Banner Reported: KONICHIWA/1.0
Banner Deduced: Netscape-Enterprise/6.0
Score: 105
Confidence: 63.25

My Qwest-provided DSL router advertises its web server as “-”, but HTTPrint claims it is “thttpd”, and telnetting into busybox on the device confirms that:

C:\httprint_win32_301>httprint.exe -h 192.168.0.1 -s signatures.txt
httprint v0.301 (beta) - web server fingerprinting tool
(c) 2003-2005 net-square solutions pvt. ltd. - see readme.txt
http://net-square.com/httprint/
httprint@net-square.com

Finger Printing on http://192.168.0.1:80/
Finger Printing Completed on http://192.168.0.1:80/
--------------------------------------------------
Host: 192.168.0.1
Derived Signature:
811C9DC5811C9DC5811C9DC5811C9DC5811C9DC594DF1BD04276E4BB811C9DC5
0D7645B5811C9DC5811C9DC5CD37187C811C9DC5811C9DC5811C9DC5811C9DC5
811C9DC5811C9DC56ED3C295811C9DC5E2CE6927811C9DC56ED3C295811C9DC5
811C9DC5811C9DC52A200B4CE2CE6923E2CE69236ED3C2956ED3C295E2CE6923
E2CE69236ED3C295811C9DC5E2CE6927811C9DC5

Banner Reported: -
Banner Deduced: thttpd
Score: 83
Confidence: 50.00

--------------------------------------------------

BusyBox on (none) login: admin
Password:
BusyBox v0.61.pre (2006.07.03-16:17+0000) Built-in shell (ash)

# ps waux

PID Uid VmSize Stat Command
1 admin 1320 S init
[snip]
44 admin 1228 S /usr/sbin/thttpd -d /usr/www -u root -p 80 -c /cgi-b…

So why don’t more application/server platform developers turn the tables on the scanners by looking for predictable scanner behaviors and modifying accordingly? There’s already evidence of this in some IPS platforms which can detect the fingerprints of well-known scanners (such as NMAP’s use of the WNMTE TCP options, or NetStumbler’s probing idiosyncrasies, etc.). But, alas, if this begins to happen widely, the better scanner-authors will just introduce their own trickery… typical arms-race stuff which, on a positive note, warrants a mention of ModSecurity for its ability (among many other things) to defend against HTTPrint.

I’ve managed to avoid the use of the “security through obscurity” catchphrase to this point, but I will close with it. One instance where obscuring a banner could be useful is during that period of time between when an exploit is discovered and when it is remedied. Again, if the platform’s producer and employer deserve their jobs, the window of exposure should be so small as to be virtually statistically unexploitable, particularly if additional defenses such as dynamically updated Unified Threat Management are in place. But it would still not hurt to conceal the vulnerability from the unlikely event of detection by superficial banner grabs during this short window (concealing entire login screens is a different matter). As such, we’ll provide banner configurability on our platforms in future firmware releases.

Security cannot be achieved through obscurity, but obscurity can be a piece of the bigger security picture.


NG-HD-FW-FUD

It must have been the striking dearth of jargon that led the security industry to introduce a new term for an existing technology: High-Definition (or Next-Generation) firewalls want you to ask yourself the question: “Is the traffic on your network a wolf in sheep’s clothing?” In other words, let’s say you’ve configured your firewall to allow inbound HTTP (TCP port 80) to your web-server, but how do you know it’s really an HTTP client that’s connecting to it? Without deep-packet inspection, or application-level validation (App Firewall or App Proxy), you can’t.

The typical threat that springs to mind is “bad-guy connecting to a web-server on port 80 (sheep’s clothing) and using that allowed connection to transport covert badness (wolf)”. It’s the traditional perspective of “we must build a strong perimeter to keep the bad guys out” that makes it simple to disparage the alleged disconnect between DPI and firewalls with the claim that “[deep packet inspection] has been limited to a small set of applications being run through an intrusion prevention system.” But what is the risk in this case? If it’s a real web-server listening on port 80, then it’s only going to be able to do web-server things (handle HTTP methods like GET, POST, HEAD, etc.). If the client tries sending something over the port 80 connection that is outside of the capabilities of the web-server (say an FTP put, or some sort of Remote Procedure Call), the attempt will fail because it will not be interpretable by the server. The exception would be if it were a specially crafted payload designed to exploit a vulnerability, but if it were a ‘detectable’ attack (i.e. through signature, behavior, format/structure/content validation, etc.), it would be preventable by any competent intrusion prevention system (IPS) or deep-packet inspection (DPI) engine; if it were undetectable, it would be, well… undetectable.

So consider this simplified flow of events:

[Figure: simplified flow of events]
Five decision points, two points of potential risk. Risk susceptivity: 40%

Bottom line in this case is “Does the detection engine work?” (i.e. “can it accurately detect intrusions without a material occurrence of false positives or negatives?”) If the answer is “yes”, then DPI works wherever it is. If the answer is “no”, then it doesn’t work regardless of where it is. For some, a question might linger: “Why do we even need port-based access rules to govern access to the server – why not just use a next-gen firewall to configure an ‘allow only valid HTTP’ rule?” While this would mean “drop any traffic that is not HTTP, or that can be classified as invalid”, it would also mean “allow enough traffic through to enable the classification of HTTP, and then to determine the absence of invalidity”. Since the question itself implies a transcendence beyond dependency on port-based classification of traffic, the following simplified sequence might then have to occur:

[Figure: simplified classification sequence]

Eight decision points, four points of potential risk. Risk susceptivity: 50%

While this is a dramatization, the point is not simply that there is greater risk for failure, it’s that the model is more complex (and thus more computationally costly) than it needs to be. But clearly, the web-server example isn’t the only use case. More interesting than the outside-in perimeter defense model is the inside-out or inside-inside models.

Inside-out would typically refer to hosts on a LAN accessing resources on the internet. In draconian firewall configurations, access might be limited to a couple of necessary ports like TCP 80, 443. Using port-based access controls without DPI, it would be simple to evade these controls through anonymizing proxies, tunneling, port-redirection, etc. Port-based access controls with DPI would allow for a rule that says “if you see traffic that is not HTTP/HTTPS on ports 80/443, then block it”, and this would catch traffic such as IM, P2P or unyielding Skype-like protocols running covertly over port 80/443 – but this still needs a port for classification. Without the port as a classifier, it would not be possible to determine whether traffic is masquerading, because there would be nothing defining what it shouldn’t be.
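
A crude sketch of what such a port-anchored rule reduces to, with deliberately toy classifiers standing in for a real DPI engine:

# Traffic on 80/443 that does not look like HTTP/TLS is dropped; the port
# still anchors the classification, as described above.
HTTP_METHODS = (b"GET ", b"POST ", b"HEAD ", b"PUT ", b"DELETE ", b"OPTIONS ")

def looks_like_http(payload: bytes) -> bool:
    return payload.startswith(HTTP_METHODS)

def looks_like_tls(payload: bytes) -> bool:
    # TLS records begin 0x16 (handshake), 0x03 (version major)
    return len(payload) >= 2 and payload[0] == 0x16 and payload[1] == 0x03

def verdict(dst_port: int, payload: bytes) -> str:
    if dst_port == 80:
        return "allow" if looks_like_http(payload) else "drop"  # non-HTTP on 80
    if dst_port == 443:
        return "allow" if looks_like_tls(payload) else "drop"   # non-TLS on 443
    return "drop"  # draconian config: nothing but 80/443 is permitted

print(verdict(80, b"GET / HTTP/1.1\r\nHost: x\r\n\r\n"))  # allow
print(verdict(80, b"\x16\x03\x01\x00\x2e"))               # drop: tunneling over 80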

Inside-inside is a relatively new concept, not for the IPS, but for DPI based validation of traffic. IDS/IPS have been employed within networks for quite some time, seeking out malicious agents in the realm of the trusted, such as a PC infected with some sort of contagious malware. Historically, there hasn’t been much of a need to validate the traffic beyond “it isn’t an intrusion, it isn’t a virus, etc.” while on the trusted network. As NAC emerged as a new trusted network security model, so emerged the limitations of its early incarnations. The pre-admission control model evaluated hosts to determine their security posture before allowing them access to the network. Once access was granted, however, they could do as they wished – so if they managed to contract some badness once on the network (e.g. through the internet, removable media, etc.) then it was up to the IPS to handle things since the NAC platform had already done its job. Not a problem if the IPS solution was properly deployed and up to the task.

The problem, however, became evident through NAC’s exception model: since NAC largely depends on software agents for posture assessment, network devices unable to run said agents (e.g. incompatible OSs, printers, VoIP devices, switches/routers/firewalls, etc.) were excluded (by MAC/IP address, VLAN, etc.) from NAC’s enforcement. While this was fine in cases of legitimate devices (after all, the current risk of a printer doing bad things is virtually non-existent), it was the illegitimate devices that were the problem. In a simple example:

1. A printer was granted an exclusion by its IP address and MAC address
2. Miscreant determined IP/MAC of the printer
3. Miscreant assumed IP/MAC of printer on laptop
4. Miscreant disconnected printer from network
5. Miscreant connected infected laptop to network
6. Miscreant played WoW while resident malware attempted 3,000,000 TCP 135 connections to random IP addresses, and sent 1 billion pieces of spam.
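
The weakness is easy to show in miniature: the exemption keys on attributes the endpoint itself presents, and can therefore forge. A toy sketch, with illustrative addresses:

# Exempt devices skip posture assessment entirely, so forged identifiers
# bypass NAC altogether.
NAC_EXEMPT = {("192.168.1.50", "00:1b:a9:aa:bb:cc")}  # "the printer"

def admitted(ip: str, mac: str, posture_ok: bool) -> bool:
    if (ip, mac) in NAC_EXEMPT:
        return True  # exempt: no agent, no posture check
    return posture_ok

# The infected laptop simply presents the printer's identifiers:
print(admitted("192.168.1.50", "00:1b:a9:aa:bb:cc", posture_ok=False))  # True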

Of all the use-cases, the inside-inside scenario stands out as the one seemingly most in need of DPI for the purposes of validation. The kind of validation, though, is not “this application must only run on this port” but rather “this device should only be communicating using these ports and/or applications.” If anything, this suggests that DPI needs to be more tightly coupled to NAC. Since direct integration is precluded both by the multi-gigabit speed requirements of LANs, as well as by the understandable desire to maintain solution/component flexibility, perhaps the best model is a correlation API that can enable cooperative event and network intelligence sharing between devices on the network, without a compulsory dependency on an additional external platform like a SEIM.

Just about any IPS today can detect and classify port-hopping, productivity-robbing applications such as IM, P2P, and streaming media. The good ones can even catch notoriously tricky applications such as Skype and Winny. UTM platforms integrate these classification capabilities into the policy capabilities for which the traditional firewall is best known. When a UTM solution is well-architected, the result is a security platform that can:

  • Detect malicious traffic such as application-based intrusions, viruses, and spyware
  • Accurately classify even obscured traffic application-by-application
  • Finely control traffic and applications with basic log/allow/deny rules, bandwidth management, or identity-based access methods.
  • Be managed centrally, and provide meaningful and actionable reporting on utilization

All while meeting the demanding performance and reliability requirements of today’s networks.


Domain Hijacking Made Easy

Apologies in advance for randomly picking on Yahoo…

The domain (illurity.com) on which this site lives was registered with Yahoo Small Business (YSB). In addition to registration, YSB also provides DNS services through a convenient web-interface. Logging in to the YSB admin portal uses the same credentials as other Yahoo services, such as Yahoo IM and Yahoo Mail. Simple. Convenient. Of course, this sort of service is commonly available through many other providers, so what is described here is in no way unique to YSB.

Looking at the whois information for illurity.com, you will see that my Yahoo email address appears under the “Admin Email” field. Under the “Tech Email” field, you will see the Yahoo account domain.tech@YAHOO-INC.COM. According to the available whois report, there are some 2.7 million records registered to that email address. Lots of folks seem to have registered their domains with YSB.

So let’s say someone procures a list of all the domains whose “Tech Email” field is domain.tech@YAHOO-INC.COM, effectively providing a list of all the domains registered through YSB. A simple whois on those domains would then provide the end-user/registrant email address via the “Admin Email” field. Simple address harvesting of a focused target.

Now assume that this someone then registers a similarly named phishing domain, something like yahoo-smallbusiness.com (and rather than registering it through YSB in a twisted recursive gesture, registers it for use in a fast-flux fashion). And then they start sending targeted form-driven emails to the harvested addresses, something like:

Dear [harvested real name],

As the registrant of the [harvested domain name] with Yahoo! Small Business Solutions, you are invited to enroll in our new Strong Authentication service at no charge, and under no obligation. Strong Authentication will help to protect you against identity theft by requiring a secondary proof-of-identity, beyond your Yahoo! ID and Password, in order to login to your Yahoo! services. This second-factor of authentication will help to protect the confidentiality of your account even in the event of credential theft.

Get started now!

To learn more about the service, click the “Get Started Now!” button above, or type the URL http://login.yahoo-smallbusiness.com/login.html into your web-browser if your mail reader does not support embedded links.

Thanks again for choosing Yahoo! Domains!

Best regards,
The Yahoo! Small Business team

So if an unrealistically modest 1% of 1% (0.01%) of the 2.7 million targets falls for that (where the form submissions are caught and redirected with something simple like this), it will net 270 people, or 270 hijackable domains. Of those domains, some set of them will likely provide services of a sort, which could then easily become targets (through redone DNS records and site replication) for further phishing attacks.
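
For the record, the arithmetic:

# 1% of 1% of the harvested registrant pool:
targets = 2_700_000
victims = int(targets * 0.01 * 0.01)
print(victims)  # 270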

Seems a pretty good reason for these sorts of services to really offer two-factor authentication.


The Lobby is Strong with this One

Imagine a botanicals company inventing a variety of eggplants whose seeds, when roasted and eaten, are not only tasty, but could also produce all of the desirable psychological and physiological effects experienced by smoking its nightshade sibling, tobacco. Imagine also that after years of FDA testing, it’s found to have none of tobacco’s undesirable effects – in fact, it’s found to have no deleterious effects at all.

But just as the seeds are brought to market, the FDA (in association with Homeland Security) puts an unusual condition on their continued sale: Any grower or merchant distributing or selling the seeds will face fines of up to $300,000 in the event that an unreported sale is made to a known or suspected terrorist, or to anyone linked with a known or suspected terrorist organization. The rationalization being that a high percentage of terrorists smoke, that authorities use cigarette sales to track terrorists, and that the unregulated or unreported sale of the seeds would disrupt that traceability.

Merchants, once excited about the prospect of selling the healthful alternative to tobacco products, begin to pull the seeds from their shelves for fear of exposure to penalties. After all, with the way terrorist is defined, how could they possibly know who is, or might someday be suspected as, or associated with one?

A few of the large national market chains persist in selling the product, but after a few suits monopolize the airtime on Fox for 96 straight hours, these markets, too, announce that they will be discontinuing sales. Soon, the few remaining, once undaunted, independent merchants pull the product, too.

Sound ridiculous?

Put on your conspiracy theory caps and replace “tobacco” with “telco”, “eggplant seeds” with “WiFi”, and “terrorist” with “illegal images”, then read the text of the SAFE Act of 2007.

There’s a good summary at News.com, as well as a response from the bill’s author.

Better update your CFS subscription.


2008 Predictions

Strange ritual this is…

1) Virtualization security will hybridize – As the virtualization juggernaut marches on, the question “what do we do about virtualization security?” is heard with increasing frequency. In many ways, virtualization security is no different from physical security: best practices still apply, and at some point, the data that we endeavor to secure typically make it to the physical world where our traditional security mechanisms (i.e. firewalls, IPS/IDS, UTM, NAC, DLP, SIM/SEIM, NBAD, etc.) can interact with them. There are, however, two noteworthy complications…

First, virtualization breaks the physical “data context” model, meaning that schemes which rely on one-to-one relationships between the logical and physical world must adapt; an obvious example of this is IP or MAC based access controls, or many of the port-based security schemes currently implemented on switches. Moreover, the ability to dynamically migrate virtual machines from one physical host to another, across data-centers or across continents makes the value of a physical context even less meaningful as a means of delivering security.

Second, what happens in the virtual world sometimes stays in the virtual world. As server hardware becomes more powerful, it becomes possible to load more virtual machine (guest) instances onto single physical hosts. Although network traffic from a virtual machine destined for some other external host will be subject to inspection by network security gear upon network traversal, traffic from one virtual guest to another on the same physical host will remain within the physical host, and will never make it onto the physical network, thereby precluding the traffic from inspection by physical network security gear. A number of companies including Blue Lane, Reflex, and Catbird address this with offerings that run a dedicated virtual machine linked to a promiscuous port on a virtual switch running on the physical host, allowing visibility into the traffic between commonly hosted virtual machines. While this solution is functional, it has its drawbacks, such as placing a heavy burden on the physical host, robbing it of resources that could otherwise go to running the virtual machines (content inspection is extremely computationally intensive and is generally best left to dedicated, purpose-built hardware), as well as necessitating the creation of a redundant security implementation, with all its associated procurement and operational costs, rather than using the existing security controls.

Rather than placing this undue burden on each physical host providing hypervisory services to a collection of guests, and rather than squandering existing physical security apparatus, we will soon see solutions enabling physical security models to be easily bridged into virtual environments. Simultaneously, as the physical crosses the boundary into the virtual, we will also see virtual security models move from the laboratory (i.e. honeypots and forensics) into the commercial mainstream.

And, of course, we will also continue to see more and more of the relatively latency-insensitive security appliances (shallow packet inspection, routing, anti-spam, backup, strong authentication and identity, pre-admission control NAC, remote access, proxies, web-application firewalls, vulnerability scanners, etc.) made increasingly available as downloadable virtual appliances, while the heavy-duty work (deep packet inspection, post-admission control NAC, data leakage protection, etc.) will continue, by necessity, to be done for the next few years by dedicated gear powered by ASICs or multi-core CPU/NPU platforms. Beyond the issue of performance, which will slowly abate as hardware speeds and capacities increase, another factor hindering the seemingly inevitable world-domination of virtualized appliances is the class of environments requiring levels of certification and assurance such as FIPS or Common Criteria, where current models have justifiably entangled physical dependencies such as cryptographic or Target of Evaluation boundaries.

It shouldn’t be an argument between physical vs. virtual security camps; it should evolve to a model of coexistence and sharing. Eventually, the physical and virtual security components will do what they do best (horsepower and ubiquity, respectively) and will share their knowledge with each other, enabling the whole to be greater than the sum of its parts.

2) Smartphone governance models emerge – Apple’s decision to make the iPhone a mostly closed-platform will not guarantee security. This has been evidenced by the number of exploits we’ve seen in the short time that the platform has been available. Conversely, the fact that Google’s Android platform is relatively open (it will require consent for execution) does not guarantee insecurity, but its lack of application lock-down does remove a single layer of defense.

Lost or stolen device recovery – With the amount of data (corporate emails, or mass-file storage) that can reside on smartphones today, and with the utility of these platforms approaching that of portable PCs, losing a smartphone can be just as bad as losing a laptop. And for every lost or stolen laptop that makes the news, consider how much easier it is for a mobile device to fall out of a pocket or holster, or to be lifted and concealed. Nearly every phone available today has a GPS in it, so why not allow enterprises to put it to use as a security control? The scope of Security Suites will expand just a little further as platforms such as Where Is My Phone (http://www.wimp-software.co.uk/) and Lock My Mobile (http://lockmymobile.com) are assimilated into their ever-growing bulk.

Remote data destruction – Blackberry can do it. Period. Other vendors will realize that this is a factor in the Blackberry’s enterprise adoption, and they will start to do it too. Within a year of introduction, there will be an incident of some management server being compromised, resulting in self-destruct sequences being sent to loads of hapless users. Cue lawsuits and new breeds of enterprise mobile computing Security Standards and Qualified Auditors bodies emerging from the woodwork.

Proximity-based encryption – It’s not just conspiracy theorists that bristle at the idea of sub-dermal RFID implants for the purpose of identification. Yet it would be so darned handy for well-intentioned security applications. So how about something slightly less apocalyptically sinister like a ring that can be worn (and removed) by the user for the purpose of proximity-based decryption of and access to content on a paired mass-storage device? With a dedicated scope of application and the user’s retention of full control of the identifying token’s operation, even CASPIAN might approve.

3) Defending against the incomprehensible – Just as users began to adopt terms like “firewall”, “spyware”, and “botnet” into their vernacular, along with a conceptual grasp and acceptance of these concepts, the threat landscape keeps shifting. While terms and concepts like “XSS”, “CSRF”, “DNS rebinding”, and “iframe / ad-delivered malware” are hardly new, they remain relatively foreign, despite the fact that we are seeing them employed with renewed vigor, and in vicious combination with one another as well as with increasingly sophisticated and persistently effective social ploys.

How does the security industry convincingly demonstrate the need to secure against threats that are becoming more complex and more difficult to describe to an ever-expanding base of information technology consumers? We appear to be on the verge of “you might not be able to appreciate this today, but this is for your own good” styles of security. A necessarily imposed model of authority that reminds us that information technology is just reaching adolescence.

4) Competitiveness turns destructive, so long as security is not a key factor in competitive evaluation – Globalization. Outsourcing. Operational Efficiencies. And other big business words, as well. Not so much a trend as an all-encompassing reality, the fact that all manner of competitive cost-reduction tactics pervade our modern economic existence guarantees an impact.

Consider it from the perspective of security technology producers:

Competitive pricing pressures force vendors to reduce costs of goods – There are different grades of components, there are different levels of skill in design and manufacture, and there are different classes of design and engineering verification testing. Although similar, glycerin is not the same as diethylene glycol. But testing and inspection don’t happen for free. The cost burden must be borne by someone. The vendor? Only if all the other vendors bear the same cost. The consumer? Only if they have no choice but to pay the premium. So caveat emptor will reign until there emerges regulation prescribing a standard. Not this year.

Competitive feature pressures compress development and quality assurance cycles – As competition intensifies, pressures to introduce more features more quickly follow commensurately. Along with the accelerated schedules comes an increasing intolerance among user and financial communities for missed deadlines, often demanding releases that, in a perfect world, might be considered premature. The result? Bugs. A defanged euphemism, often the subject of public ridicule when occurring on a platform like a Microsoft Windows desktop OS, but which becomes far more significant when running critical systems. Ever hear of the Therac-25? Is there a standard for QA? Is there any prescribed penalty for releasing software or firmware with bugs?

And consider it also from the perspective of the security technology consumers:

We need to stay within our procurement budget – While it’s romantic to think that there’s widespread employment of a model of risk assessment wherein there’s a quantified or qualified calculation of exposure, of the cost of a loss, and of tolerance to risk, it simply doesn’t occur in the majority of businesses, particularly in the small to medium enterprise. Therefore staying within budget often means buying what is affordable rather than buying what’s right for the job. Some vendors design with this in mind, and are able to provide solutions that can be both. But unfortunately, there are other vendors who design with nothing but their hegemony and profit maximization in mind.

We need to reduce operational complexity and cost – Security is hard. Sometimes doing something securely steps on the toes of ease-of-use, particularly when the standards for ease were built on a foundation of insecure principles, in a day when there was much less bad on the Internet. Rather than dealing with the burden and costs of trying to break bad habits, or trying to mend weak controls and systems, it can be tempting to give in and to simply defer security. Moreover, given our natural tendency to prefer that which is familiar, upgrades or replacements (to or with more suitable or capable technologies) might be stalled because of perceived adequacy of the current solution, training costs, or simple complacency. Recognizing this, service providers (mere ISPs today, Converged Data Services Providers tomorrow) will fill the gap left here by vendors, and will mandatorily begin to layer tight (read: restrictive) security controls into the services they provide, extending all the way to the desktops.

Sometimes it takes an external event or force to begin the process of change. Accepting that there is a natural resistance to change among both producers and consumers of information security technologies, perhaps rather than preparing for it, we should simply begin to expect some imminent external factor of influence. Since it seems economically infeasible for either the producers or consumers to materially raise the bar in our own defense, it might come to some unforeseeable event to necessitate the legislation or regulation that elevates our current standards (or robs us of our networking civil liberties, depending on your perspective). Politically motivated cyber-terrorism might not bring about infrastructural system collapse in 2008, but expect to hear about it just a bit more frequently as the year passes.


Securing SaaS

With last month’s news of a Salesforce.com employee falling prey to a phishing attack, resulting in the SFDC database being mined for subsequent targeted phishing attacks against some number of SFDC’s nearly 1,000,000 users, there’s been a lot of interest in securing Software as a Service platforms.

The first wave of solutions is the conventional lot: user education, anti-spam/anti-phishing technologies, and IP range limiting combined with VPN access for remote users. This is usually followed by the somewhat esoteric second wave: Pure VPN based access (where the SaaS provider offers premium secure access through a massive SSL-VPN platform with some set of its security features), pre-selected image-based site authentication (which has its own demonstrable vulnerabilities), or site-specific dissolvable security agents (e.g. while the user is in a SaaS session, an ephemeral anti-malware agent loads). All useful technologies, but their practicality in a million+ user SaaS environment is limited by scalability and cost.

Where is Two-Factor or Token-based Authentication? Right here and here. Totally available, and almost totally impracticable for users and businesses who were looking for the simplicity of a hosted service in the first place.

So token-based SSO (with or without 2FA) is ostensibly the most secure method for accessing SaaS platforms, but it is also generally prohibitively difficult for the typical SaaS subscriber to implement. How to solve the catch-22? An SSO appliance.

First to mind is the capable OneSign product from Imprivata. While not specifically built for SaaS SSO, its effortless application support model is perfectly suited to automating and securing SaaS sign-on, in addition to its broad support for strong authentication options and physical/logical security convergence.

Next is the challengingly named Sxip Identity, an appliancized uber password/form manager (a variant is available as a Firefox add-on). And they even had a dedicated For Salesforce page long before the SFDC breach. Bonus points for being foresightful rather than just shameless, reactive opportunists.


Ad Revenue Bubble

Madison Avenue used to mean something. Advertising once had a kind of exclusivity. A cold-blooded adaptation of Freudian psychology by Bernays and other demagogues for the purpose of controlling the uncontrollable. It had unique social, economic, and political value. The machines of advertising, along with its henchmen, marketing, PR, and consumerism, were long alone in their secular ability to reach, distract, and pacify the masses. And advertising’s effectiveness has continued to grow commensurate with the growth of populations and with our modern banking system’s iniquitous conjuring of money through credit.

Just when the age of consumerism thought it had been beaten by the emergence of the me-generation and the self-actualizing, SRI adapted Bernays’ Engineering of Consent principles to the individual’s need for self-expression with their VALS system. Almost simultaneously, advances in automation, and the tragic smothering of reason and responsible sustainability by personal and corporate greed, introduced abundantly willing and affordable off-shore labor, generating a glut of personalized products. And because it was all being driven by insatiable desires rather than practical needs, it ensured that supply could never catch up to demand.

So what’s the only thing that could top the perfectly orchestrated storm of predatory lending practices, the cheap manufacture of an endless stream of unessential artifacts, ever-escalating and news-worthily communicated global tensions and crises, and a geometrically growing population whose increasing unease could only be pacified and controlled through obedient gathering and consumption? Google.

Another product of Stanford, Google was an efficient collection of algorithms that found its calling in advertising. Prior to that, the search business, for all its practicality, was barely able to sustain itself, and saw its share of casualties and consolidation (Altavista, HotBot, Infoseek, Inktomi, DogPile, etc.). Now Adwords/Adsense, Yahoo’s Panama, Microsoft’s AdCenter, Amazon’s Content Links, Kontera, IntelliTXT, GoClick, LookSmart, Miva, Kanoodle, and an army of sub-syndicators, affiliates, and bottom-feeding get-rich-quick opportunists are amassing quite a bit of wealth. Google rose to dominance with their string of advertising-related acquisitions (Applied Semantics, dMarc, Adscape, Doubleclick), but even their other acquisitions (YouTube, Grandcentral, and Dodgeball, in particular) have become or likely will become yet another platform for delivering targeted advertising. And it’s only a matter of time before their mapping technology + GPS gives us Google location-based advertising. They just can’t stop. And you really know you’ve made it when your ecosystem evolves its own criminal element, clickfraud, estimated to be worth somewhere between $1.37 and a few billion dollars, depending on who you ask. True, online advertising pre-dated Google, and Google has a lot of aspiring company today from other media, games, services, and mobile offerings (News Corp, Microsoft’s Massive, Second Life, There.com, Apptera, EnPocket, AdMob, mFoundry, Jingle’s free411, and countless others irresistibly, if not occasionally reluctantly but necessarily, drawn to the current web-advertising model), but Google is its undeniable archetype.

So how might this Third Age of Advertising play out? Madison Avenue used to mean something, but it doesn’t anymore, not because they’ve lost their knack for getting people to buy tokens of meaning and self-worth, but because they’ve become a voice lost in the din of a crowd – a victim of a denial of service attack launched against their audience by the online model. Ad creation and placement used to be pricey, there was a barrier to entry, and relative scarcity. That is no longer the case. While it might be impossible for supply to outstrip desire-driven demand for goods, the same is not true for the infinite stream of online advertisements for those goods.

Online advertising is socially acceptable spam that also happens to be a magnet for get-rich-quick schemers and criminals reminiscent of the sub-prime mortgage industry. 99% of it is filtered out automatically by the lateral intraparietal area of the brain and the second-pass filter of conscious discretion, so to make the remaining 1% effective it must increasingly litter the landscape of the Internet, and it consumes vast amounts of its most precious resource, bandwidth. It is the environmental villain of the Internet whose profligate consumption is rivaled only by illicit email spammers and P2P sharers. And how have we responded to the latter? With a multi-billion dollar anti-spam and traffic-control industry. Although we today tolerate online advertising, it is by some measures only marginally less pernicious than these other offenders. At its rate of growth, it’s only a matter of time before it catches up to email spam on the annoying-scale, and we start seeing the emergence of anti-ad appliances. The whole model seems outrageously inflated, and is bound to burst, or at least go through a heavy correction.

Many argue that online advertising is what allows many free web-sites to stay free, but perhaps we could use a bit less “free” content and some more meaningful content. Despite its having been smothered to death by the ad revenue model, the subscription model is respectable. Things are often worth what they cost.


Syndicated Malware

It’s virtually impossible to browse to a web-page these days without embedded advertising. Most of this sort of content gets included through the use of javascript retrieved from the ad syndicator’s network (such as Google’s show_ads.js or Yahoo’s ypn.js). Similarly, most web-sites also employ some form of analytics, where the tracking is often achieved in a similar fashion (e.g. Google Analytics’ urchin.js).

The fact that these sorts of externally-hosted scripts are included in just about every web-page is what makes this event so alarming. And while it’s not highly likely that Google or Yahoo (or any of the hundreds of similar services) will have their content compromised the way 24/7 Media did, it’s still possible for an attacker to spoof DNS (particularly in public wireless environments), or use DNS Rebinding (AKA Anti-DNS Pinning) to cause clients to retrieve the “wrong” javascript.

One way for site operators to decrease the risk of compromised third-party javascript is to host it locally, as SonicWALL does for its Eloqua analytics. A DNS-based attack against a visitor would then have to target the site’s own domain, affecting the entire session (rather than just a single element), and would be considerably more difficult for the attacker to arrange or conceal.
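As a sketch of the difference (the syndicator domain and filename here are hypothetical):

    <!-- Remote include: trust rides on the third party's server and on DNS -->
    <script type="text/javascript"
            src="http://cdn.example-syndicator.com/tracker.js"></script>

    <!-- Local include: the same script copied to, and served from, your own
         server, so the element depends only on your own domain resolving
         correctly -->
    <script type="text/javascript" src="/js/tracker.js"></script>

The trade-off is maintenance: the local copy has to be refreshed whenever the vendor updates the script.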

Javascript pervades the web and web-based interfaces because of its boundless versatility, but it can do some scary stuff. Users can protect themselves against the potential evils of javascript-gone-bad with something like NoScript, but it’s unreasonable to expect adoption by the masses. To mitigate the bound-to-be-increasing risk of ad-based attacks specifically, it might be simpler (and more palatable overall) to use aggressive ad blocking.


Avoid High-Availability Hypocrisy

As an industry, Information Security concerns itself with the Confidentiality, Integrity, and Availability of assets. One manifestation of this is a series of technologies dedicated to providing high availability: some assurance of redundancy or scalability for critical components or services. The current trend is to employ a diverse layering of products and services designed to provide different service levels of fail-over, load-balancing, and capacity handling to nearly every aspect of our information systems. All except one: our people.

In fact, it seems that in response to ever-increasing competitive pressures, more and more operations, even those in the business of security, are “right-sizing”: not cutting to the bone (as they are often quick to assert), but operating as leanly and efficiently as proves endurable. This often means reducing headcount to a level of minimal sustainability, which is good for the bottom line but not so good for availability. The trend toward eliminating redundancy is creating work cultures where every individual role becomes increasingly critical to the operation of the organization (even those roles furthest removed from the executive ranks), yet the traditional attitude toward those ‘peripheral’ roles is slow to shift from one of relative dispensability. The result is an environment where it is disruptive for any member of the organization to be detached from work for any period of time, where it is difficult for workers to take any “real” (i.e. extended or fully disconnected) vacation, and where it is operationally threatening when a non-redundant employee leaves the company. It can also have the second-order effect of creating a general reluctance to dismiss or replace sub-optimal performers, because “something is better than nothing”, and because the costs (direct, indirect, and opportunity) are perceived as impractical within these self-perpetuating situations.

Smart operations won’t allow themselves to fall into this kind of trap. This is simple risk management. It’s better to make the necessary investments in an “organic security system”, comprising some reasonable level of people high-availability, than it is to skimp on the infrastructure and find yourself in a crippling network-down (or function-down) emergency. Security companies in particular ought to know and practice this essential principle of safeguarding assets. Does yours?


Sanctioning Services

(The following was prepared in response to a recent invitation to describe “two dangerous but common security mistakes companies make”)

Rather than looking at this in terms of mistakes, I’d rather take this as a chance to describe two simple things network administrators can do to have an immediate positive impact on data security:

1) Don’t get frustrated with the perceived ineffectiveness of training. Commit yourself to remain the tireless herald of best practices; every individual who adopts even a single good behavior or habit helps the fight, and can spread the knowledge.

2) Since we’re all well aware that malware today is economically motivated big business, we should be looking at simple, broadly implementable ways to incapacitate the deviant uses of technology that underlie its business model. A Fortune 500 company might be able to implement the latest multi-core deep-packet inspection and NAC technologies, but most home-office and SMB networks cannot. The irony is that because the latter group is typically unmanaged (i.e. lacking an IT staff or well-fortified layers of defense), they are the most susceptible to being ensnared by today’s biggest threat: the proliferation of botnets. And what, economically, drives botnets? Five major things:

a) the ability to capture credentials or other resalable confidential information
b) the ability to send spam to sell products, stocks, etc., or as a lure to sites with some kind of malicious payload
c) the ability to host DNS and HTTP sites (fast-flux networks) to serve up the content visited by victims of “b” above
d) the ability to launch DDoS attacks for extortion, or against an enemy (e.g. an anti-spam/antivirus company, or a political target)
e) the ability to propagate itself so that the army grows stronger and more able to do “a” through “d”

Even assuming that prevention has failed and a host is already infected, there is still the opportunity, even with unsophisticated networking equipment, to disable three of the five items above (b, c, and e). More importantly, it can almost always be done with no noticeable or deleterious effect on users. How? Sanctioned services.

The idea of sanctioned services is simple but effective: only known, approved hosts should be talking certain protocols; all other hosts using those protocols should be considered anomalous. For example, if your network has an SMTP server, only that server should be allowed to send SMTP through the gateway; any other ‘unsanctioned’ outbound SMTP activity should be dropped and logged, and should be an immediate red flag that the host may be infected with a spambot (see the sketch below). Other examples of easily detectable, often suspicious traffic that should similarly be sanctioned are inbound (Internet-to-internal-host) HTTP and DNS, and outbound (internal-host-to-Internet) NetBIOS, SMB, and RPC traffic. Who can effect this? Anyone who controls a gateway, business or residential.
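As a minimal sketch of the SMTP example, here is how it might look with Linux iptables on a gateway; the mail server address is hypothetical, and commercial gateways (SonicWALL’s included) express the same policy in their own rule syntax:

    # 192.0.2.25 is the network's one sanctioned mail server
    iptables -A FORWARD -s 192.0.2.25 -p tcp --dport 25 -j ACCEPT

    # Any other outbound SMTP is anomalous: log it (a red flag for a
    # possible spambot infection) and drop it
    iptables -A FORWARD -p tcp --dport 25 -j LOG --log-prefix "UNSANCTIONED-SMTP: "
    iptables -A FORWARD -p tcp --dport 25 -j DROP

    # Outbound NetBIOS/SMB/RPC has no business crossing the gateway at all
    iptables -A FORWARD -p tcp -m multiport --dports 135,139,445 -j DROP
    iptables -A FORWARD -p udp -m multiport --dports 137,138 -j DROP

The same pattern extends to the inbound cases: forward inbound HTTP and DNS only to the hosts that are supposed to be serving them, and drop and log the rest.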

In today’s age of UTM, NAC, deep-packet inspection, and next-generation firewalls this technique might seem antiquated – but that doesn’t make it any less effective.


The modulating value of Common Criteria

Whether this claim that Maxtor shipped drives with a pre-installed virus turns out to be true or not, this is precisely the sort of event that makes Common Criteria a valuable security certification.

Disparagers of CC would argue that you could probably get chocolate cookies EAL-4 certified if you spent enough time and money, but at least you’d know they were more secure than this box of cookies. Although if you or I really liked cookies (or the Internet) and we had a food allergy (or ran IE6), we would still probably rather risk anaphylaxis than pay a $145 amortization premium for a “secure” chocolate cookie. Not to mention the fact that as soon as you dipped the secure cookie in milk (or installed a service pack or a firmware update), it would lose its certification. And therein lies perhaps the strongest claim of those who assert that CC has nothing to do with security: who in their right mind would want to eat a cookie without milk?
