Content Filtering vs. Censorship

Google Alerts is a very handy tool. It allows you to specify a search term, and if the all-seeing Google eye encounters that term on the web, in newsgroups, news items, or blogs, it will send a scheduled alert to a defined email address. Like many at SonicWALL, I have a comprehensive alert set for the term “SonicWALL”.

Not a day passes without blog activity bemoaning that indecent, undemocratic, fascistic iniquity in the eyes of God: SonicWALL censorship. On one hand, I’m glad to see our services so widely deployed and recognized, and that we are becoming synonymous with effective content-filtering. On the other, I’m troubled to see an entire generation of bloggers embittered toward the name “SonicWALL” because of a misconception; the same sort of misconception that puzzlingly leads some to consider Samuel Colt a mass-murderer.

A content-filter is not a content censor. Content-filtering has two major components: a classification engine and a policy engine. Classification refers to the evaluation of content (e.g. a web page) by a human or by a machine, resulting in placement of the content into a category such as “politics”, “sports”, “news”, “adult”, or “shopping”. The SonicWALL content-filtering service currently has 56 categories into which evaluated content can be placed (see this link, or the more current “URL List” tab on this link).

Occasionally, faults occur resulting in the clear misclassification of content. This kind of clear-cut error is fairly rare, and is easily corrected by submitting the site for reclassification through the Rating Review Request page. More often, the classification discrepancy is interpretive, where content could subjectively fall into more than one category, for example, could be “Computers and Internet” just as easily as it could be “News and Media”. Any site that aggregates content, or allows for user-provided content (such as blogs) is particularly subject to allegations of misclassification because it comprises diverse and eclectic content.

But classification by itself does not determine whether or not a content-filter allows or disallows content. It is the second component, the policy engine, that governs this. While policy is enforced by the content-filter, it is not defined by the content-filter. Policy is defined by the network operator. So while SonicWALL content-raters might correctly, incorrectly, or arguably classify a certain web-site as “Violence, Hate and Racism”, SonicWALL does not decide to block content that falls into that category. The decision to block or censor content of a particular category is made by the network operator, and merely enforced by the SonicWALL.
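The separation described above can be sketched in a few lines of code. This is a minimal illustration of the two-component design, not SonicWALL's actual engine: the category names, example hosts, and function names are all hypothetical.

```python
# Component 1 -- classification engine: places content into a category.
# In a real product this is a large, continuously updated database;
# here it is a hypothetical lookup table for illustration only.
CLASSIFICATIONS = {
    "news.example.com": "News and Media",
    "videos.example.com": "Multimedia",
    "shop.example.com": "Shopping",
}

def classify(host: str) -> str:
    """Return the category this host has been placed into."""
    return CLASSIFICATIONS.get(host, "Uncategorized")

# Component 2 -- policy engine: the *network operator*, not the
# filter vendor, decides which categories to block. The filter
# merely enforces that operator-defined choice.
OPERATOR_BLOCKED_CATEGORIES = {"Multimedia", "Shopping"}

def is_allowed(host: str) -> bool:
    """Enforce the operator's policy against the classification."""
    return classify(host) not in OPERATOR_BLOCKED_CATEGORIES

print(is_allowed("news.example.com"))    # allowed: category not in policy
print(is_allowed("videos.example.com"))  # blocked: operator chose to block "Multimedia"
```

Note that changing what is blocked requires editing only `OPERATOR_BLOCKED_CATEGORIES`, which belongs to the operator; the classification data never decides anything by itself.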

Steering clear of the debate over content-filtering on “public” networks (schools and libraries), where battles rage between the EFF/ACLU and the FCC over the latter’s CIPA, it remains difficult to dispute that when network access is provided by a private interest (such as your local coffee house or airport), it is the prerogative of that private network operator to define how those services can be used. If they choose to limit bandwidth, types of traffic (e.g. P2P network usage), or categories of content, so be it. As private entities providing network access (for fee or free), it is their right, just as it is your right to define what you do and don’t allow on your network.

And if your content-filter isn’t blocking YouTube



Comment from Wrieck
Time: 2007-09-08, 12:20

I don’t know if I am taking a huge departure from the topic or not. If I understand what you have written, you are suggesting that enabling someone to do something wrong, either to themselves or to others, by providing them the means is acceptable?
