Thursday, December 20, 2012

The Single Most Important Step to Securing a Web Site

I've worked, in a technical capacity, as an employee and consultant for companies ranging from tiny startups to multi-billion dollar corporations. With the exception of the companies where I've been a founder, I've noticed that the technical teams at these companies would have a hard time detecting an attack or abuse promptly unless it caused obvious damage to the functioning of their website.

For small to medium size websites (< 5-10 requests/second/web server), the single most important defense against attacks is to continuously monitor logs. The simplest way to do this is to "tail" the logs of every web server and app server instance, along with security-related logs (SSH, database, etc.), 24/7 on either a system admin's or a developer's computer. It would be great if this could be automated, but, many times, it takes a human to notice something abnormal.
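
If you want a starting point you can later extend with filters, here is a minimal Python sketch of the same idea: follow a single log file and print new entries as they arrive, like "tail -f". The log path is an assumption; point it at your own server's logs.

    # A minimal "tail -f" in Python: print new lines as they are appended.
    # The log path is hypothetical; substitute your own.
    import time

    LOG_PATH = "/var/log/nginx/access.log"  # assumption: nginx access log

    with open(LOG_PATH) as log:
        log.seek(0, 2)  # 2 = os.SEEK_END; start at the end of the file
        while True:
            line = log.readline()
            if line:
                print(line, end="")
            else:
                time.sleep(0.5)  # no new entries yet; wait briefly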

Most abuse/attacks on a website begin with probing the website itself (port 80/443) or SSH (port 22). Once abuse or an attack from a single source is noticed, the first step should be to block that IP address and then contact the attacker/abuser's ISP, ideally with a phone call to their network operations center or via their abuse e-mail address (e.g., abuse@example.com), which is usually listed in a whois lookup. In either case, the ISP will ask for a copy of your server logs, which should be cleansed of any third-party data. In other words, send the ISP only the log entries that specifically pertain to the attacker/abuser.
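
As a sketch of that cleansing step, the following Python snippet keeps only the lines whose client address matches the offender. The IP and file names are hypothetical (203.0.113.42 is from a reserved documentation range), and it assumes a common/combined log format where the client host/IP is the first field.

    # Extract only the log entries pertaining to the attacker/abuser,
    # so no third-party data is sent to the ISP. Names are examples.
    OFFENDER = "203.0.113.42"  # hypothetical attacker IP

    with open("access.log") as src, open("evidence.log", "w") as dst:
        for line in src:
            # Common/combined log formats put the client host/IP first.
            if line.split(" ", 1)[0] == OFFENDER:
                dst.write(line)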

Typical web server logs should contain at least the following info:
Requester's Host Name/IP: ec2-75-101-189-255.compute-1.amazonaws.com
Timestamp with time zone offset: [20/Dec/2012:13:39:40 -0800]
Request: "GET /index.html HTTP/1.1"
HTTP status code: 200
Response size (bytes): 16844
Referrer: "http://google.com/"
Time taken to serve the request (seconds): 1
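
For reference, here is a sketch of parsing one such line in Python, assuming an Apache/Nginx combined-style format with the request time appended as the last field. The sample line and regex are illustrations only and would need adjusting for other formats.

    # Parse a combined-style access log line into named fields.
    import re

    LINE = ('ec2-75-101-189-255.compute-1.amazonaws.com - - '
            '[20/Dec/2012:13:39:40 -0800] "GET /index.html HTTP/1.1" '
            '200 16844 "http://google.com/" 1')

    PATTERN = re.compile(
        r'(?P<host>\S+) \S+ \S+ '       # client host/IP, ident, user
        r'\[(?P<time>[^\]]+)\] '        # timestamp with offset
        r'"(?P<request>[^"]*)" '        # request line
        r'(?P<status>\d{3}) (?P<bytes>\d+|-) '  # status, response size
        r'"(?P<referrer>[^"]*)" '       # referrer
        r'(?P<seconds>\d+)'             # time taken to serve
    )

    m = PATTERN.match(LINE)
    if m:
        print(m.group("host"), m.group("status"), m.group("seconds"))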

Also, don't include too many log parameters on web requests lest you suffer from scrolling blindness. The logs have to be readable, in real time, by a human. During unusual peaks in traffic, it helps to tail and grep the logs in real time to focus on specific entries, as in the sketch below.
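
One sketch of that, extending the follow loop shown earlier to print only matching lines, is the equivalent of "tail -f access.log | grep ...". The path is hypothetical and the pattern (4xx/5xx status codes) is just an example of what you might focus on.

    # Follow the log but show only matching entries, similar to
    # "tail -f access.log | grep ...". Path and pattern are examples.
    import re
    import time

    PATTERN = re.compile(r'" [45]\d\d ')  # e.g., 4xx/5xx status codes

    with open("/var/log/nginx/access.log") as log:  # hypothetical path
        log.seek(0, 2)  # start at the end of the file
        while True:
            line = log.readline()
            if not line:
                time.sleep(0.5)
                continue
            if PATTERN.search(line):
                print(line, end="")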

You'll be amazed at how much you'll learn by simply monitoring these server logs daily for just a couple of weeks; it will give you a sense of what normal traffic looks like. Errors, unusual requests, and long response times should be investigated. Some of these issues will turn out to be web app bugs or areas needing optimization that might never have been detected otherwise (such as an internal report taking minutes to respond when a typical request/response takes less than a second).
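
If you'd rather scan a day's log after the fact, here is a minimal sketch along those lines, assuming the combined-style format shown above and taking one second as the assumed "normal" response time.

    # Flag error statuses and slow responses in a day's access log.
    # Assumes the combined-style format above; threshold is an example.
    import re

    SLOW_SECONDS = 1  # assumption: a typical response takes < 1 second
    TAIL = re.compile(r'" (?P<status>\d{3}) \S+ "[^"]*" (?P<seconds>\d+)$')

    with open("access.log") as log:
        for line in log:
            m = TAIL.search(line.rstrip())
            if not m:
                continue
            if (int(m.group("status")) >= 400
                    or int(m.group("seconds")) > SLOW_SECONDS):
                print(line, end="")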

Attacks frequently go unnoticed until there is some ramification. If you don't know what normal behavior is, you won't be able to detect abnormal behavior.
 
