Showing posts with label apache. Show all posts

Wednesday, January 20, 2016

Microsoft search indexing can be so aggressive that it resembles DoS traffic

As part of my consulting business I take care of a number of web servers. This morning I woke up to a particularly crappy message about one of them:

possible DoS attack

Awesome, right? Ever notice how you never get these sorts of messages between the hours of 9 AM and 5 PM, Monday through Friday?

So I tried to SSH into the target server, and was pleased to find I was able to connect. Relieved that this was likely a false alarm, I found this in the Apache logs:

- - [19/Jan/2016:19:43:15 -0500] "GET /robots.txt HTTP/1.1" 200 146
- - [19/Jan/2016:19:43:15 -0500] "GET /robots.txt HTTP/1.1" 200 146
- - [19/Jan/2016:19:43:15 -0500] "GET /robots.txt HTTP/1.1" 200 146
- - [19/Jan/2016:19:43:15 -0500] "GET /robots.txt HTTP/1.1" 403 5
- - [19/Jan/2016:19:43:15 -0500] "GET /robots.txt HTTP/1.1" 403 5
- - [19/Jan/2016:19:43:15 -0500] "GET /css/main.css HTTP/1.1" 403 5

Take note of the timeframe on these connections: six connections from the same IP address within one second, five of which were to the same file. Also note that the initial connections were successful; errors only began occurring because my Apache config blocks suspicious traffic.
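Bursts like this are easy to spot mechanically, too. Here is a quick sketch using awk against the standard combined log format - the sample file path, the documentation IP address, and the threshold of five requests per second are all illustrative:

```shell
# Recreate the suspicious burst as a sample log (IP is a documentation address;
# real logs would live somewhere like /var/log/apache2/access.log)
cat > /tmp/sample_access.log <<'EOF'
203.0.113.9 - - [19/Jan/2016:19:43:15 -0500] "GET /robots.txt HTTP/1.1" 200 146
203.0.113.9 - - [19/Jan/2016:19:43:15 -0500] "GET /robots.txt HTTP/1.1" 200 146
203.0.113.9 - - [19/Jan/2016:19:43:15 -0500] "GET /robots.txt HTTP/1.1" 200 146
203.0.113.9 - - [19/Jan/2016:19:43:15 -0500] "GET /robots.txt HTTP/1.1" 403 5
203.0.113.9 - - [19/Jan/2016:19:43:15 -0500] "GET /robots.txt HTTP/1.1" 403 5
203.0.113.9 - - [19/Jan/2016:19:43:15 -0500] "GET /css/main.css HTTP/1.1" 403 5
EOF

# Count requests per (client IP, second); print any combination over 5/sec.
# $1 is the client address, $4 is the "[19/Jan/2016:19:43:15" timestamp field.
awk '{ n[$1" "$4]++ } END { for (k in n) if (n[k] > 5) print n[k], k }' /tmp/sample_access.log
```

Run against a real access log, this prints one line per (IP, second) pair that crossed the threshold - here, the single offending second from the excerpt above.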

You've probably guessed who this IP address belongs to if you read the headline to this article:

NetRange: -
NetName: MSFT
Organization: Microsoft Corporation (MSFT)

At first I thought this IP might be part of Microsoft's cloud platform, Azure, or some other product that might be operated by customers. That seemed unlikely, however, as this host was requesting the robots.txt file and nothing else apart from CSS - which is exactly what search engine spiders do. And this IP very much looks like part of Microsoft's search infrastructure:

# host domain name pointer
The day after these weird connections, the same Microsoft IP came back with a more normal traffic pattern:

- - [20/Jan/2016:06:53:35 -0500] "GET /robots.txt HTTP/1.1" 200 237
- - [20/Jan/2016:06:53:36 -0500] "GET /index.html HTTP/1.1" 301 245

A standard installation of mod_evasive would result in a temporary blacklist for this kind of traffic. It is unclear whether this behavior was intentional on Microsoft's part, or whether more rapid requests for files can be expected in the future. The people who make their bread and butter spreading SEO gossip seem to agree that connectivity failures and web server 50x errors can have an impact on search engine rankings. However, such reports should be taken as just that - gossip.
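For context, here is a minimal mod_evasive sketch that would have temporarily blacklisted this traffic pattern. The thresholds are illustrative, not tuning advice, and the module filename is the one shipped with Apache 2.x-era builds:

```apache
<IfModule mod_evasive20.c>
    DOSHashTableSize    3097
    # More than 5 requests for the same URI within 1 second...
    DOSPageCount        5
    DOSPageInterval     1
    # ...or more than 50 requests site-wide within 1 second...
    DOSSiteCount        50
    DOSSiteInterval     1
    # ...earns the client 403s for the next 10 seconds.
    DOSBlockingPeriod   10
</IfModule>
```

With settings like these, Bing's six requests in one second would have tripped DOSPageCount on the fifth robots.txt fetch.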

Both Google and Bing report errors encountered during site indexing, through Google Search Console and Bing Webmaster Tools respectively, but I wasn't able to find anything published by either company about how such errors impact search engine placement, even in vague terms. Hopefully this was a one-time error on Microsoft's part and not part of a new approach to indexing (fingers crossed).

Wednesday, November 4, 2015

An explanation of webserver logs that contain requests such as "\x16\x03\x01"

Recently I have started coming across somewhat unusual entries in the access and error logs for a few of the Apache web servers that I am responsible for maintaining. The entries look like this: - - [03/Nov/2015:13:56:23 -0500] "\x16\x03\x02\x01o\x01" 400 226 "-" "-"

Here is another example: - - [03/Nov/2015:23:48:54 -0500] "\x16\x03\x01" 400 226 "-" "-"

These errors will be generated on a website configured to use SSL - and in fact, similar error messages can be generated by misconfiguring SSL for your website. The following message, for instance, can indicate an attempt to access Apache over SSL while the OpenSSL engine is either disabled or misconfigured:

Invalid method in request \x80g\x01\x03

Connections that generate that error would not be successful. This post, however, assumes that your website is working normally when used normally. So what gives?

The error indicates an attempt to scan OpenSSL for the SSLv3 POODLE vulnerability. No need to panic - getting scanned is an everyday occurrence for web server administrators, and hopefully your server has long since been patched for POODLE and has SSLv3 disabled entirely. Furthermore, many of the servers making these connections belong to researchers - the example I provided above is one such case, coming from an IP belonging to a group called "The Shadowserver Foundation" that scans the internet for vulnerabilities and publishes trends in its findings.
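Incidentally, the escaped bytes themselves aren't mysterious: they are the first bytes of a TLS record header, sent raw to a port where Apache expected an HTTP request line. \x16 is the handshake content type, and \x03\x01 is the record version for TLS 1.0 (\x03\x02, as in the first example, is TLS 1.1). A quick sketch of the decode, using octal escapes for portability:

```shell
# \026 \003 \001 is octal for 0x16 0x03 0x01 - the bytes Apache logged.
# 0x16 = TLS "handshake" content type; 0x03 0x01 = record version TLS 1.0.
printf '\026\003\001' | od -An -tx1
```

Since Apache can't parse those bytes as an HTTP method, it answers 400 Bad Request and logs them escaped, exactly as in the entries above.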

Still, even the connections made by researchers are done without the consent of those being scanned - and some admins might not like that. Furthermore, there are plenty of people doing this sort of scanning who don't have the best interests of the internet community at heart. Blacklisting IP addresses that make these sorts of connections is possible, but I recommend against blacklisting based on generalized matching of OpenSSL errors: such a policy runs the risk of banning legitimate users, or even yourself during troubleshooting or maintenance.

Sunday, May 24, 2015

Secure your Apache server against LOGJAM

Some time ago I wrote a post about the dismaying history of US government attempts to regulate encryption out of existence. I had to omit quite a bit; it was a post and not a book, after all. One of the details left out of the story was the DHE_EXPORT cipher suites. During the '90s, developers were forced by the US government to use deliberately insecure ciphers when communicating with entities in foreign countries (readers will remember from the last post that lawmakers were convinced that encryption should fall under the same rules as weapons technology, and thus could not be shared with anyone outside the Father Land). These insecure ciphers became DHE_EXPORT. The DH stands for Diffie-Hellman; the key exchange system that bears that name was first published in 1976.

Along with the cipher suites came a mechanism to force a normal encrypted transaction to downshift to a lower-bit DHE_EXPORT cipher. As so many short-sighted technology regulations have before it, this silly bit of Washington DC-brand programming has come back to haunt us in the form of the LOGJAM vulnerability. Until just a few days ago, all major browsers continued to support these deprecated DHE_EXPORT ciphers, as did a variety of applications as fundamental to web infrastructure as OpenSSL.

The exploit is described in detail on a website hosted by the researchers responsible for its discovery - which also hosts their paper on the same subject (PDF).

Meanwhile, patching your Apache server is simple.

Apache HTTP Server (mod_ssl)

SSL parameters can be set globally in httpd.conf or within specific virtual hosts.

Cipher Suites

Disable support for SSLv2 and SSLv3 and enable support for TLS; explicitly allow/disallow specific ciphers in the given order:

SSLProtocol             all -SSLv2 -SSLv3

SSLHonorCipherOrder     on

DH Parameters

In newer versions of Apache (2.4.8 and newer) with OpenSSL 1.0.2 or later, you can directly specify your DH params file as follows:

SSLOpenSSLConfCmd DHParameters "{path to dhparams.pem}"

If you are using Apache with LibreSSL, or Apache 2.4.7 and OpenSSL 0.9.8a or later, you can append the DH params you generated earlier to the end of your certificate file.

Reload the configuration:

sudo service apache2 reload
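As an aside, the "DH params you generated earlier" step looks like this - a sketch, with the 2048-bit size and output filename as illustrative choices:

```shell
# Generate a fresh, non-export-grade 2048-bit Diffie-Hellman group.
# This can take a few seconds (longer on slow hardware).
openssl dhparam -out dhparams.pem 2048
```

Generating your own group matters here because LOGJAM-style precomputation attacks target the small set of widely shared default groups.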

Tuesday, December 23, 2014

Apache VirtualHost Proxy Configuration - Helpful for Tomcat, Node.js and similar frameworks

I recently came across this question on ServerFault:
" I've subsonic application running of [sic] tomcat. Everything else works on apache. I don't want to write port number everytime [sic] so I'd like to set-up [sic] a simple directory where subsonic will be accessible. So, I'm trying to make virtualhost file [sic] inside apache dir. [...] I tried many variations, but cannot make anything work. "
The poor chap than [sic - ha!] provided an example of his latest go at the problem, an excerpt from his httpd.conf file:

<VirtualHost *:80>
     DocumentRoot /var/www/streamer
     ProxyPass               /       http://mini.local:4040/
     ProxyPassReverse        /       http://mini.local:4040/
Not a bad go of it, all things considered. Still, it wasn't providing him with the sort of results he was looking for. Naturally, I had encountered similar issues not long ago myself while implementing the Ghost blogging platform, which runs on node.js. It's been my first serious effort with node, other than the occasional one-off, one-page type of script for existing web sites.

So, I felt like I might be able to help the gentleman. Now, let's bear in mind his question: "I don't want to write port number everytime". He does not say "I never want to write the port number", just that it should be possible to render the page with a URL that does not append the port number. Ensuring that the port number is never used would require the solution below in addition to another step - there are a few ways to handle that, mod_rewrite being the most obvious and well known. Some of the other solutions depend on which version of Apache is being used, like the <If> directive, for example.

In any event, here is what I provided to the questioner, which is quite similar to what I have implemented previously and serves my purposes quite well (notice I am using his somewhat strange hostname nomenclature):

<VirtualHost *:80>
    ServerName mini.local
    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy *>
            AddDefaultCharset Off
            Order deny,allow
            Allow from all
    </Proxy>
    ProxyPass / http://mini.local:4040/
    ProxyPassReverse / http://mini.local:4040/
</VirtualHost>

Friday, December 5, 2014

Apache Log Pong

Looking for an Apache log visualization program recently, I came across logstalgia. It turns your log files into a game of Pong between your web server and the internet, with each new request as a ball batted back and forth!

So cool.

Friday, March 15, 2013

Apache Startup Failures and Hostname Resolution

Upon restarting Apache, you may receive errors like this:

# service httpd restart
Stopping httpd: [FAILED]
Starting httpd: httpd: apr_sockaddr_info_get() failed for webserver-sb-1
httpd: Could not reliably determine the server's fully qualified domain name, using for ServerName

In order to resolve this issue and successfully start Apache, you will need to ensure that there is a resolvable hostname assigned to your server. The hostname does not need to be a fully qualified domain name (FQDN); it just needs to resolve. Here is how to get around it:

# echo yourhostname.extension > /etc/hostname
# /bin/hostname -F /etc/hostname

Finally, confirm that the assigned domain name is resolvable using the host command:

# host yourhostname.extension localhost

If not, check the following settings:

- Does /etc/resolv.conf have the correct DNS servers listed to allow for resolution?
- If your hostname is not an FQDN, list the hostname in /etc/hosts
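For that last case, the /etc/hosts entry is a one-liner - here using the hostname from the error message above; the 127.0.1.1 address is the Debian-style convention for a host's own name, so adjust to your distribution:

```text
127.0.1.1    webserver-sb-1
```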
