Website firewall

Free firewall for your .NET website.


The first firewall for your .NET website that actually watches your traffic. Now you can easily manage your traffic without writing complicated logic. By mixing various rules, you can maximize security and control your traffic with little intervention. Unlike a server firewall, our solution works at the application level, which lets website owners on a shared server control access much more finely. We also let you respond with custom pages, redirects, delays, and more, and we actively build a banned list from rule violations. This identifies bad traffic and prevents it from harming your resources.

WSfirewall constantly monitors for threats and responds to them immediately. We can't remove the need for human intervention, but we make your job much easier. We watch for bad behavior and automatically ban IP addresses that exhibit it. What is "bad behavior?" Poking around for files that are known exploits. Sending an overly long request string. Making too many requests. Lying about which robot it claims to be. Following a "nofollow" link. Reading robots.txt and then downloading a restricted file planted there.
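Each of these behaviors maps to a rule type described later in this document. As a sketch, a firewall.config fragment covering three of them might look like this (the file name and threshold are illustrative, not defaults):

```xml
<!-- ban any IP probing for a known exploit file (name is illustrative) -->
<add type="url" action="ban" value="exploit.js"/>
<!-- ban any IP exceeding 1000 requests per minute (threshold is illustrative) -->
<add type="rate" action="ban" value="1000" unit="min"/>
<!-- ban any IP that sends no user-agent string -->
<add type="agent" action="ban" value=""/>
```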

When you look in your server logs, you will find traffic probing your site for exploits. Your logs will be full of 404 errors; these entries are robots looking for exploitable files. Once a robot finds something, it will look for other files. Our solution identifies these robots through rules that list known exploits. Once an IP address is seen looking for any of these exploits, it is immediately banned from your server. We can't stop robots from making requests, but we can watch for them and block them from ever finding anything on your site. We can even punish bad traffic by forcing it to wait as long as we want.
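The rule reference below documents the pieces involved; a sketch of this probe-then-punish combination looks like this (the exploit file name is illustrative):

```xml
<!-- ban any IP requesting a known exploit file -->
<add type="url" action="ban" value="exploit.js"/>
<!-- make every subsequent request from a banned IP hang for 2 seconds -->
<add type="banned" action="delay" value="2000"/>
```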

Proxy servers won't stop us from banning bad robots. A proxy server can hide a robot's real IP address, but once a robot starts poking around your site through a proxy, that proxy's IP address will be banned; it is certain the robot is hunting for exploits. This will also prevent users in banned countries from using that proxy server address.

You can easily control access on a country-by-country basis. If you only want traffic from select countries, you can eliminate traffic originating from any given country. Or you can block access from all countries and selectively allow specific ones. We use a web service to look up the country hosting each IP address.

Our use will not overload their web service, since we build a local database of IP addresses and countries. We only query the service once per IP address; after that, we use the local database. This means you don't incur lookup overhead on every request and won't have to worry about being banned by the service (unless you have millions of unique users per day).

Hot linking is when other sites link directly to your resources. This consumes your bandwidth to serve resources that others consume without ever visiting your site. Now it's simple to block. You can also block traffic originating from specific sites just as easily.
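Using the hotlink rule described later, blocking hotlinked images takes one line per file type:

```xml
<add type="hotlink" action="block" value=".jpg"/>
<add type="hotlink" action="block" value=".png"/>
```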

Limit access to specific resources. We can't automatically stop screen scrapers from harvesting your data, but we can limit them by imposing reasonable access limits on your resources. And if they keep trying to scrape past your limit, we ban their IP address.

All of this is done at page request time. Our overhead adds an average of 20 ms to perform all the required checks; your users will not notice any performance impact. We cache all the data for the full application lifecycle, so checking is very quick. File updates are done when IIS recycles the application, so performance is never an issue.

We've turned robots.txt management into an easily understandable process: we eliminated it! Seriously, there is no more robots.txt file. It is just another rule, and we create the file on the fly when a crawler requests it. It never exists on your server. It's also global to your site: whenever a robot asks for robots.txt, it gets one in every folder.
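For example, a single robots rule (shown again in the rule reference below) is all it takes:

```xml
<add type="robots" action="block" value="/private" />
```

A crawler requesting robots.txt anywhere on the site would then receive a generated file; assuming the standard robots.txt format, it would contain a `User-agent: *` line followed by `Disallow: /private`.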

Honey pots can easily be set up by adding a target file to robots.txt. Good robots will request robots.txt and not fetch the file. Bad robots will look for the honey pot file, which immediately bans the robot's IP address.

Two simple rules will create a honey pot that bans bad crawlers (see the Honey pots section below).
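The two rules, repeated from the Honey pots how-to later in this document:

```xml
<add type="robots" action="block" value="donotread.htm"/>
<add type="url" action="ban" value="donotread.htm"/>
```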


Adding rules simply means adding an XML node in the following pattern (optional attributes such as code, text, file, redir, host, max, and unit are described under "Other attributes" below):
<add type="rule type" action="action" value="value" text="Access denied" />


agent - <add type="agent" action="abort|allow|ban|block" value="user-agent string" />
Using the browser's user-agent header, you can ban robots and browsers based on its value.
<add type="agent" action="ban" value="" /> <!-- ban any IP address with no agent string -->
<add type="agent" action="allow" value="MSIE 10"/>
<add type="agent" action="allow" value="MSIE 9"/>
<add type="agent" action="block" value="MSIE" text="We don't support your version of Internet Explorer"/>

banned - <add type="banned" action="delay|abort" value="000" />
While you can't stop bad traffic from making requests, you can make it wait a long time before a response comes back, which can lighten the load on your server. If an IP address is in the banned list and keeps coming back, you can make it wait a few seconds or abort the connection immediately.
<add type="banned" action="delay" value="2000"/>
The line above causes any banned or blocked IP address to hang for 2 seconds before a response is returned. One note: using this feature can use up threads, because each delayed request holds its thread for the timeout period. If a robot uses multiple threads, it could hit your server with 20 or more requests at once, removing that many threads from your thread pool while they are all busy holding delays. Just a consideration for you. Of course, once each request completes, its thread goes back to the pool.

country - <add type="country" action="abort|allow|ban|block" value="all|aa" />
The country rule controls access by country of origin. Country codes are 2-character ISO codes. To allow access from only a single country, add a block-all rule and an allow rule for that country. To block a specific country, simply add a block rule with that country code. This rule initially does a country lookup of the IP address and stores the result in a cache; the next time the IP address comes in, it uses the cache to determine the country.
<add type="country" action="block" value="all" code="403" text="Access denied"/>
<add type="country" action="allow" value="zz" />
<add type="country" action="block" value="zx" redir="http://google.zx" />

domain - <add type="domain" action="abort|ban|block" value="domain" />
Does a reverse DNS lookup of the IP address and applies the rule if the domain matches.
<add type="domain" action="ban" value="" />
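For illustration, a hypothetical rule banning traffic whose IP address reverse-resolves to a given domain (example.com is a placeholder, not a real target):

```xml
<add type="domain" action="ban" value="example.com"/>
```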

hotlink - <add type="hotlink" action="abort|allow|ban|block" value=".aspx" host=""/>
This rule triggers when a browser requests a resource on your server on behalf of a host that is not allowed.
The host attribute specifies the host where your resource is hotlinked. If it is specified, you can allow or block individual hosts.
<add type="hotlink" action="block" value=".jpg" file="nohotlinking.jpg"/>
<add type="hotlink" action="block" value=".jpg" file="nohotlinking.jpg" host=""/>
<add type="hotlink" action="allow" value=".jpg" file="nohotlinking.jpg" host=""/>

ip - <add type="ip" action="abort|allow|ban|block" value="all|" />
Control access by the incoming IP address. The rule matches any IP address that starts with the value.
<add type="ip" action="allow" value="" />
<add type="ip" action="allow" value="0.1.2" />
<add type="ip" action="block" value="all" code="403" text="Access denied."/>

rate - <add type="rate" action="abort|ban|block" value="10" unit="ms" />
If the request rate of the IP address exceeds the threshold you set, you can either ban or block the IP address.
<add type="rate" action="block" value="1000" unit="min" />
<add type="rate" action="ban" value="5000" unit="day" />

robots - <add type="robots" action="allow|ban|block|delay|host|verify" value="file/folder to not follow" domain="robot to verify" robot="robot"/>
The robots rule manages robots or crawler access by creating a robots.txt file on the fly.
The robot attribute specifies which robots to control; the default is *.
The verify action does a reverse DNS lookup of the robot's IP address and bans it if the domain does not match.
The host action sets the main host you want indexed. Use this when you have mirrors and you don't want all your mirrors indexed.
The delay action tells the crawler how many seconds to delay between each query.
<add type="robots" action="block" value="abc.file" robot="*"/>
<add type="robots" action="block" value="/private" />
<add type="robots" action="verify" value="google" domain=""/>
<add type="robots" action="delay" value="2" />
<add type="robots" action="host" value="" />

url - <add type="url" action="abort|ban|block" value="protected.aspx" max="0" unit="ms|sec|hour|day"/>
Control access to your resources by specifying a resource in the value attribute.
The max attribute specifies how many times an IP address can access the resource before the rule triggers; 0, the default, triggers on the first access.
<add type="url" action="ban" value="exploit.js" max="0" unit="day"/> <!-- ban any IP that tries to access exploit.js -->
<add type="url" action="block" value="protected.aspx" max="10" unit="day" text="Too many requests. Try again tomorrow."/> <!-- block after 10 access attempts -->
<add type="url" action="ban" value="protected.aspx" max="100" unit="day" /> <!-- ban the IP after 100 attempts -->

verb - <add type="verb" action="abort|ban|block" value="get|put|post|delete|header" />
Most websites only respond to the GET and POST verbs. You can ban or block anyone trying to use verbs you don't support.
<add type="verb" action="block" value="delete" text="Method not supported"/>


abort - bans the IP address and force-closes the low-level socket connection immediately.
allow - grants access to a resource when action="ban" value="all" was used previously.
ban - adds the IP address to the ban list and blocks access to the resource.
block - blocks access to the resource.
delay - causes a banned IP address to hang for a while.
verify - does a reverse DNS lookup of the IP address and bans it if there is no match.

Other attributes

code is the HTTP response code the rule sets. Use the standard codes defined by the HTTP RFC.
<add type="ip" action="block" value="" code="403"/>
file includes a specific file as the response to a block or ban. This can be any resource: a graphic, an HTML file, a text file, etc.
<add type="hotlink" action="block" file="myad.jpg"/>
redir redirects the browser to another resource when the rule triggers.
<add type="refer" action="block" value="" redir=""/>
text writes the given text to the response for the rule.
<add type="ip" action="block" value="" code="403" text="Access forbidden"/>
unit is used by the rate and url rules and specifies ms|sec|min|hour|day.
max specifies the maximum number of requests for a url.

Quick start

  1. Download the software.
  2. Open the zip file and drop wsfirewall.dll into your website's bin folder.
  3. Drop global.asax into the root of your website.
  4. Drop firewall.config into your app_data folder.
  5. Run website to make sure everything still runs fine.
  6. Edit the rules as you like.

Debugging rules

Your primary tool should be the Fiddler proxy debugger. It lets you see the requests and responses of your web server on your build machine.

Gradually add rules one at a time to make sure you don't break your site with overly aggressive rules.

Add a text="xxx rule" attribute to each rule so the response tells you which rule triggered on your page.
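For example (the text value here is purely illustrative):

```xml
<add type="country" action="block" value="all" code="403" text="country block-all rule triggered"/>
```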

Look at the log file to see which rules were triggered, and adjust the parameters as necessary to make them work the way you intended.
Look at your auto-banned file to see whether an IP address was banned.
If your site breaks, narrow down the offending rule:
  1. Rename firewall.config to turn off the firewall. Then test.
  2. Rename autobanned.txt to clear the ban list. Then test.
  3. Comment out all the rules. Then test.
  4. Turn on 1 rule at a time. Then test.
Eventually, a specific rule will break your site when you activate it. Now you know the bad rule.

How to

Honey pots

Honey pots are used to discover bad robots. The method below uses robots.txt to set the trap: good robots will respect the request not to read the file, while bad robots will use that very request to go read the file. Thus the trap.

These two lines will create a honey pot that will ban bad robots using robots.txt:
<add type="robots" action="block" value="donotread.htm"/>
<add type="url" action="ban" value="donotread.htm"/>

Another honey pot you can create uses the "nofollow" attribute of an anchor. The line below traps any crawler that follows the banned page and bans its IP address.
<add type="url" action="ban" value="donotfollow.htm" />
Add the HTML markup to your main page. Well-behaved crawlers should not follow the link:
<a id="hp1" href="donotfollow.htm" rel="nofollow">Do not click</a><script>document.getElementById('hp1').innerHTML=""</script>

Your users will not see the link; the JavaScript removes the link text.


Banning robots

<add type="agent" action="ban" value="badrobot"/> <!-- Bans the robot "badrobot" using the agent string -->
<add type="robots" action="block" value="/" robot="badrobot" /> <!-- Asks the robot "badrobot" to not index your site using robots.txt -->
<add type="robots" action="block" value="/" /> <!-- Asks all robots to not index your site using robots.txt -->
<add type="robots" action="verify" value="google" domain="google"/> <!-- Verifies the robot calling itself google is in fact google. -->

The first three entries seem to say the same thing, yet are quite different. The first tries to prevent the robot called badrobot from reading anything on your site, and works as long as the robot sends a proper agent string. The second and third serve a robots.txt file and rely on the robot's good manners to comply. The fourth entry does a reverse DNS test, looking up the domain name of the IP address; if the test fails, the IP address spoofing google is banned. When in doubt, use all three methods.

Bad robots poke around your site looking for exploits they can use to attack it. This is pretty easy to detect and deal with. Download your server logs and look for 404 errors. These are either missing images, robots.txt, and favicon.ico requests, or robots looking for exploits. You will see the same IP address over and over, looking for files you don't have on your server. Look through the list of files, pick a few of them, and add them to firewall.config:
<add type="url" action="ban" value="exploit.js"/>
The line above bans any IP address looking for the file "exploit.js".
Even better is to simply ban requests for file types not on your site. Since this is a .NET site, you probably don't serve Perl scripts. Simply add the lines below to ban all requests for Perl or CGI script extensions. No need to specify each specific exploit.
<add type="url" action="ban" value=".pl"/><!-- Bans anyone looking for perl scripts -->
<add type="url" action="ban" value=".cgi"/><!-- Bans anyone looking for cgi scripts -->

Screen scrapers

Unfortunately, there is no way to totally stop screen scrapers, but you can make life very difficult for them. It's beyond the scope of this document to go into the details of how you might implement something like that. If automated processes go after your data many times per day, you can limit them using this:
<add type="url" action="block" value="protected.aspx" max="10" unit="day" text="Too many requests. Try again tomorrow."/>
<add type="url" action="ban" value="protected.aspx" max="100" unit="day" /> <!-- ban the IP after 100 attempts -->

GEO blocking

WSfirewall can easily set up rules controlling access country by country.
<add type="country" action="ban" value="all" /> <!-- ban all countries -->
<add type="country" action="allow" value="us" /> <!-- allow us -->
<add type="country" action="allow" value="ca" /> <!-- allow canada -->


Browser control

The agent rule lets you control access by specific browsers.
The lines below allow only iPads to access your site.
<add type="agent" action="block" value="all" text="We only allow iPads on this site"/>
<add type="agent" action="allow" value="ipad"/>

The lines below block all versions of Internet Explorer other than 9 and 10.
<add type="agent" action="allow" value="MSIE 10"/>
<add type="agent" action="allow" value="MSIE 9"/>
<add type="agent" action="block" value="MSIE" text="We don't support your version of Internet Explorer"/>

The line below redirects iPads to a specific site.
<add type="agent" action="block" value="ipad" redir=""/>

IP access

<add type="ip" action="block" value="all"/><!-- Block everyone from accessing my site except -->
<add type="ip" action="allow" value=""/><!-- Allow only this IP address -->
<add type="ip" action="block" value=""/><!-- Block the entire class of IP addresses -->


Hot linking

The lines below prevent anyone from hot linking my images.
<add type="hotlink" action="block" value=".jpg" /><!-- Prevent hotlinking of my images -->
<add type="hotlink" action="block" value=".gif" />
<add type="hotlink" action="block" value=".png" />


Referrer blocking

The line below sends users back to the originating server if they were sent here from it.
<add type="refer" action="block" value="" redir=""/> <!-- Prevent linking to my site from -->

The line below will prevent users from seeing your site when linked from Google.
<add type="refer" action="block" value=""/> <!-- Prevent linking to my site from -->

Error pages

All of the rules allow the use of custom error pages. There are essentially two types:
text and file. Think of text as a very simple error page containing only a short message; the file attribute may point to a complete HTML page.
The rule below displays a simple page telling people we don't support their version of Internet Explorer.
<add type="agent" action="block" value="MSIE" text="We don't support your version of Internet Explorer"/>
We can also serve a complete web page to do the same thing.
<add type="agent" action="block" value="MSIE" file="noie.htm"/>


Redirects

All of the rules allow redirects to an external website.
The rule below blocks all US IP addresses and redirects the browser.
<add type="country" action="block" value="us" redir=""/> <!-- block us. Redirect to -->
The rule below blocks all Italian IP addresses and redirects the browser.
<add type="country" action="block" value="it" redir=""/> <!-- block IT. Redirect to -->


Abort

All rules support the abort action. It closes the connection at the socket level and makes it look like the server is not there. That's fine for a robot, but human users will think your server is simply down, which may be exactly what you want them to think. The abort action also carries an implicit ban: the idea is that if you're aborting the connection, you don't want that browser or robot to learn anything from your denial of service.
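As a sketch, abort pairs naturally with the exploit-probe rules from the How to section (the extension is illustrative):

```xml
<!-- silently drop the connection and ban anyone probing for perl scripts -->
<add type="url" action="abort" value=".pl"/>
```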