NGinx: Accidentally DoS'ing yourself

It turned out to be entirely self-inflicted, but I had a minor security panic recently. Whilst checking access logs I noticed (a lot of) entries similar to this

127.0.0.1 [01/Jun/2014:13:04:12 +0100] "GET /myadmin/scripts/setup.php HTTP/1.0" 500 193 "-" "ZmEu" "-" "127.0.0.1"

There were roughly 50 requests in the same second, although there were many more in later instances.

Generally an entry like that wouldn't be too big a concern - automated scans aren't exactly a rare occurrence - but note the source IP: 127.0.0.1. The requests were originating from my own server!

I noticed the entries as a result of having received an HTTP 500 from my site (so I looked at the logs to try to find the cause). There were also (again, a lot of) corresponding entries in the error log

2014/06/01 13:04:08 [alert] 19693#0: accept4() failed (24: Too many open files)
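
That error is NGinx hitting the per-process limit on open file descriptors rather than anything more exotic. If you want to see how close a worker is to its limit, something along these lines will do it (a rough sketch - the pgrep pattern assumes the stock "nginx: worker process" naming):

# Compare the worker's open-file limit with the number of descriptors it
# actually has open (worker PID lookup is an assumption - adjust to suit)
WORKER=$(pgrep -f 'nginx: worker' | head -n1)
grep 'open files' /proc/$WORKER/limits
ls /proc/$WORKER/fd | wc -l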

After investigation, it turned out not to be a compromise. This post details the cause of these entries.

 

The Requests I was seeing

The requests I could see in the log were textbook for an automated scan; some of the URLs being requested were

  • /phpmyadmin/scripts/setup.php
  • /phpMyAdmin/scripts/setup.php
  • /phpTest/zologize/axa.php
  • /pma/scripts/setup.php

All requests seemed to be originating from 127.0.0.1, suggesting that the server had been compromised and was scanning itself (a bit odd...)
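
For what it's worth, a quick tally of the source addresses for those requests is an easy sanity check that it isn't just a one-off mis-logged line; something like the below does the job (the log path is an assumption, and ZmEu is simply the user agent string from the entries above):

# Count requests per source address for the scanner's user agent
# (access log location is an assumption)
grep 'ZmEu' /var/log/nginx/access.log | awk '{print $1}' | sort | uniq -c | sort -rn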

 

Run of the Mill Checks

Whilst I was digging through the logs, I also set some security checks running in the background to verify the integrity of the server. All came back OK

I also placed some test requests to see if there was something odd going on with the logging (i.e. whether the 127.0.0.1 source IP was accurate)

It was placing a test request from another machine that pointed towards the cause of the issue

GET -Ssed http://46.32.254.153/myadmin/scripts/setup.php?bentest=true
GET http://46.32.254.153/myadmin/scripts/setup.php?bentest=true --> 500 Internal Server Error
Connection: close
Date: Wed, 11 Jun 2014 14:02:21 GMT
Server: nginx/1.0.15
Content-Length: 3695
Content-Type: text/html
Client-Date: 01 Jun 2014 13:32:21 GMT
Client-Peer: 46.32.254.153:80
Client-Response-Num: 1
Title: The page is temporarily unavailable

One request, but countless entries in the logs. Of those, one had my true client IP; the others were all 127.0.0.1
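
A couple of quick greps against that test request make the ratio easy to see, and pick out the single entry carrying the real client address (the bentest query string makes the entries easy to find; the log path is an assumption):

# How many log entries did the single test request generate?
grep -c 'bentest=true' /var/log/nginx/access.log

# Which of them carries the real client address rather than 127.0.0.1?
grep 'bentest=true' /var/log/nginx/access.log | grep -v '^127.0.0.1'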

 

Amplification Attack

So, by placing a single request to my server, I could force it to then place hundreds of requests to itself. Not particularly great, though at least those entries in the error log (and the 500 error I'd received whilst on my site) started to make sense.

Whenever a request without a Host: header was received, NGinx was opening a socket to 127.0.0.1 port 80 - itself. That request was then proxied onto 127.0.0.1 port 80, which was then proxied on....

Basically, the requests put NGinx into a proxy loop until it had opened so many sockets that it exhausted its file descriptor pool (the limit set by the OS was also a little lower than it should have been), leading to genuine connections receiving a 500 error because NGinx couldn't open a socket to proxy the request to the back-end server.
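
For reference, the sort of request that sets the loop off is trivial to produce by hand - a bare HTTP/1.0 request with no Host header (the IP and path below are just the ones from the log entries above):

# Hand-crafted HTTP/1.0 request with no Host header
printf 'GET /myadmin/scripts/setup.php HTTP/1.0\r\n\r\n' | nc 46.32.254.153 80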

 

Why had I commented that out?

The cause was simple: at some point (I'm not sure why) I'd commented out the default server block in my NGinx configuration. So requests made without a Host header - or, if you were doing it in a browser, requests made to the server's IP rather than a domain name - were being proxied on.

The fix was simple: get the default server block back in place

server {
    listen       80;
    server_name  46.32.254.153;
    root         /var/nginx/html;
    index        index.html;

    try_files $uri @missing;

    location @missing {
        rewrite ^ $scheme://$host/index.html permanent;
    }
}
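
With that back in, requests that don't match a configured server_name get answered locally instead of being proxied back round. Checking the syntax and reloading is just the usual (this assumes NGinx is installed as a system service):

# Validate the configuration, then reload the running instance
nginx -t && service nginx reload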

 

Conclusion

There will have been a reason why I commented out the default server block (perhaps I was testing something and got interrupted), but in doing so I opened a request amplification attack vector against myself. Thankfully, there'd have been no positive feedback to an attacker, and there was no way to divert/reflect the requests elsewhere (even if it were possible, the mechanics of the issue would have meant the target only received one request anyway).

Because the logs are written after the request has completed, the first request (i.e. the one from the true client) was written at the end of the (long) block of entries from 127.0.0.1 - once the file descriptor limit was hit, the relevant request failed (and was logged), and that behaviour cascaded back down the chain until it reached the original request.

So aside from a few 500 errors, some fairly large log files and a bit of time spent checking over the system, there was no harm done. It just goes to show, though, the effect that a simple misconfiguration can have.

I've also adjusted the file descriptor limits to a more realistic amount.
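
For completeness, the limit is also worth setting explicitly on the NGinx side rather than relying purely on the OS defaults; something along these lines in nginx.conf (the numbers are illustrative, not a recommendation):

# Main context of nginx.conf - illustrative values only
worker_rlimit_nofile 4096;    # per-worker cap on open file descriptors

events {
    worker_connections 1024;  # keep comfortably below the descriptor cap
}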