
Wednesday, November 4, 2015

An explanation of webserver logs that contain requests such as "\x16\x03\x01"

Recently I have started coming across somewhat unusual entries in the access and error logs for a few of the Apache web servers that I am responsible for maintaining. The entries look like this:

95.156.251.10 - - [03/Nov/2015:13:56:23 -0500] "\x16\x03\x02\x01o\x01" 400 226 "-" "-"

Here is another example:

184.105.139.68 - - [03/Nov/2015:23:48:54 -0500] "\x16\x03\x01" 400 226 "-" "-"
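
The hex escapes are not line noise; they are the opening bytes of a raw TLS handshake. 0x16 is the TLS handshake record type, and the two bytes that follow are the protocol version (0x03 0x01 is TLS 1.0; the 0x03 0x02 in the first entry is TLS 1.1). If you want to reproduce such an entry yourself, point a TLS client at a port where Apache expects plaintext HTTP (the hostname below is a placeholder):

# Sends a TLS ClientHello to the plain-HTTP port; Apache answers with a 400
# and logs the request line as "\x16\x03\x01..."
openssl s_client -connect www.example.com:80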

Entries like these show up for websites configured to use SSL - and in fact, similar messages can be generated by misconfiguring SSL on your own site. The following error, for instance, can indicate an attempt to access Apache over SSL while the OpenSSL engine is disabled or misconfigured:

Invalid method in request \x80g\x01\x03

Connections that generate that error would not be successful. This post, however, assumes that your website is working normally when used normally. So what gives?

The error indicates an attempt to scan OpenSSL for the SSLv3 POODLE vulnerability. No need to panic - getting scanned is an everyday occurrence for web server administrators, and hopefully your server has long since been patched for POODLE and has SSLv3 disabled entirely. Furthermore, many of the servers scanning the internet and making these connections belong to researchers - the example above referencing the IP address 184.105.139.68 is one such case, and belongs to a group called "The Shadowserver Foundation" that scans the internet for vulnerabilities and publishes trends in its findings.
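
If you want to verify that your own server no longer accepts SSLv3, here is a quick check (this assumes your local openssl build still includes SSLv3 client support; many newer builds have dropped the -ssl3 flag entirely):

# Attempt an SSLv3-only handshake; a patched server should refuse it
# with a handshake failure rather than completing the connection
openssl s_client -connect www.example.com:443 -ssl3 < /dev/null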

Still, even the connections made by researchers are done without the consent of those being scanned - and some admins might not like that. Furthermore, plenty of people doing this sort of scanning do not have the best interests of the internet community at heart. Blacklisting IP addresses that make these sorts of connections is possible - but I recommend against blacklisting based on generalized matching of OpenSSL error strings, as such a policy risks banning legitimate users, or even yourself during troubleshooting or maintenance.

Sunday, May 24, 2015

Secure your Apache server against LOGJAM

Some time ago I wrote a post about the dismaying history of US government attempts to regulate encryption out of existence. I had to omit quite a bit; it was a post and not a book, after all. One of the details left out of the story was the DHE_EXPORT cipher suites. During the 90s, developers were forced by the US government to use deliberately insecure ciphers when communicating with entities in foreign countries (readers will remember from the last post that lawmakers were convinced that encryption should fall under the same rules as weapons technology, and thus could not be shared with anyone outside the fatherland). These insecure ciphers became DHE_EXPORT. The DH stands for Diffie-Hellman; the key exchange system that bears their names was first published in 1976.

Along with the cipher suites came a mechanism to force a normal encrypted transaction to downshift to a lower-bit DHE_EXPORT cipher. As so many short-sighted technology regulations have done in the past, this silly bit of Washington DC-brand programming has come back to haunt us in the form of the LOGJAM vulnerability. Until just a few days ago, all major browsers continued to support these deprecated DHE_EXPORT ciphers, as did a variety of applications as fundamental to web infrastructure as OpenSSL.

The exploit is described in detail on a website hosted by the researchers responsible for its discovery - weakdh.org - which also hosts their paper on the same subject (PDF).

Meanwhile, patching your Apache server is simple. For Apache HTTP Server (mod_ssl), SSL parameters can be set globally in httpd.conf or within specific virtual hosts.

Cipher suites: disable support for SSLv2 and SSLv3, enable support for TLS, and explicitly allow/disallow specific ciphers in the given order:
SSLProtocol             all -SSLv2 -SSLv3

SSLCipherSuite          ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA

SSLHonorCipherOrder     on

DH parameters: in newer versions of Apache (2.4.8 and newer) with OpenSSL 1.0.2 or later, you can directly specify your DH params file as follows:

SSLOpenSSLConfCmd DHParameters "{path to dhparams.pem}"
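
Note that the dhparams.pem file referenced above has to be generated first; here is a minimal sketch with OpenSSL (the 2048-bit size and path are assumptions - pick what suits your environment):

# Generate a unique 2048-bit Diffie-Hellman group (this can take a few minutes)
openssl dhparam -out /etc/apache2/dhparams.pem 2048
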
If you are using Apache with LibreSSL, or Apache 2.4.7 with OpenSSL 0.9.8a or later, you can instead append the DH params you generated to the end of your certificate file.

Finally, reload the configuration:

sudo service apache2 reload

Thursday, May 7, 2015

Amazon Finally Ditches SSLv3

Amazon S3 subscribers recently received a form letter like this one:

Dear AWS Customer,

This message explains some security improvements in our services. Your security is important to us. Please review the entire message carefully to determine whether your use of the services will be affected, and if so what you need to do.

As of 12:00 AM PDT May 20, 2015, AWS will discontinue support of SSLv3 for securing connections to S3 buckets. Security research published late last year demonstrated that SSLv3 contained weaknesses in its ability to protect and secure communications. These weaknesses have been addressed in Transport Layer Security (TLS), which is the replacement for SSL. Consistent with our top priority to protect AWS customers, AWS will only support versions of the more modern TLS rather than SSLv3.

You are receiving this email because some of your users are accessing Amazon S3 using a browser configured to use SSLv3, or some of your existing applications that use Amazon S3 are configured to use SSLv3. These requests will fail once AWS disables support for SSLv3 for the Amazon S3 service.

The following bucket(s) are currently accepting requests from clients (e.g. mobile devices, browsers, and applications) that specify SSLv3 to connect to Amazon S3 HTTPS endpoints.

XXXXXXXX.XXXXXXX.XXXXXXX : XXXXXX-XXXXXX-XXXXX

For your applications to continue running on Amazon S3, your end users need to access S3 from clients configured to use TLS. As any necessary changes would need to be made in your application, we recommend that you review your applications that are accessing the specified S3 buckets to determine what changes may be required. If you need assistance (e.g. to help identify clients connecting to S3 using SSLv3), please contact our AWS Technical Support or AWS Customer Service.

For further reading on SSLv3 security concerns and why it is important to disable support for this nearly 18 year old protocol, we suggest the following articles:

https://www.us-cert.gov/ncas/alerts/TA14-290A
https://blog.mozilla.org/security/2014/10/14/the-poodle-attack-and-the-end-of-ssl-3-0/
http://disablessl3.com/#why

Thank you for your prompt attention.

Sincerely,
The Amazon Web Services Team

Amazon Web Services, Inc. is a subsidiary of Amazon.com, Inc. Amazon.com is a registered trademark of Amazon.com, Inc. This message was produced and distributed by Amazon Web Services Inc., 410 Terry Ave. North, Seattle, WA 98109-5210
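
If you would like to verify for yourself which protocols an HTTPS endpoint still negotiates, one approach is nmap's ssl-enum-ciphers script (this assumes a reasonably recent nmap with its standard script library installed):

# Enumerates the SSL/TLS versions and cipher suites the endpoint accepts;
# after the May 20 cutoff, SSLv3 should no longer appear for S3
nmap --script ssl-enum-ciphers -p 443 s3.amazonaws.com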

Wednesday, January 7, 2015

Gogo Inflight Internet Using SSL Exploit for Customer Surveillance

For many years in the IT community, it was assumed that time spent travelling on an airplane was wasted. At best, you could make do with expensive and often-unreliable cell network coverage for connectivity. Even that was an issue, though, because of the airlines' histrionic and decades-out-of-date concern that electronic devices interfere with flight navigation equipment. On top of paying a premium for unreliable service, you had to be sneaky about it as well.

[Image: headline about Alec Baldwin's cell phone use on an airplane. Caption: Some of us handled the situation better than others]
So when in-flight internet services first started to become integrated into major airline fleets en masse, many tech people applauded. Those of us who had to attend trade shows, travel to meet customers or maintain multiple data center locations could get things done as we bounced back and forth across the country. The bandwidth was every bit as expensive as roaming cell network charges - regularly more expensive - but the planes were being equipped with some basic antennae to improve reception, and you didn't need to hide your computer from overzealous flight attendants.

One of the services that made this possible was Gogo Inflight Internet. And the whole deal seemed pretty reasonable. Sure, it was expensive and the service was unreliable at best, but there were serious financial, technical and regulatory obstacles to overcome in making airplanes into giant wireless antennae. It wasn't perfect, but it wasn't a scam, either - and it was getting better.

But then one savvy Gogo Inflight Internet user noticed something troubling. That customer was Adrienne Porter Felt, a Google engineer. As Ms Felt attempted to access Youtube, she noticed that the SSL certificate being served for Youtube was forged.


To help illustrate what's going on, I've included some more detailed images below. Note that the interfaces are a bit different because the first image was taken on a computer running Windows and the second on a Mac; the aesthetic differences aren't relevant.

In the first image's SSL certificate, we see the certificate is signed by Google Inc. and that the Common Name is listed as *.google.com (in the Subject line, the first item is the Common Name or CN).

In the second image, the Organization is listed as "Gogo" and the Common Name is a private IP address, 10.240.31.12.

This behavior is consistent with a man-in-the-middle exploit. Requests for Youtube are being re-routed to 10.240.31.12, which is serving a forged SSL certificate for Youtube.

[Image: This is what a Youtube SSL certificate normally looks like]

[Image: This is what the Youtube SSL certificate looked like as provided to Ms Felt by Gogo Internet]
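
Outside of a browser, the same check can be run from a shell. This one-liner prints the subject and issuer of whatever certificate the network actually hands you; run it from the connection you want to test (e.g. the in-flight wifi):

# Fetch the served certificate and print who it claims to be and who signed it;
# on an intercepted connection the issuer will not be Google's CA
echo | openssl s_client -servername www.youtube.com -connect www.youtube.com:443 2>/dev/null | openssl x509 -noout -subject -issuer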

Internet Service Providers are required by awful pieces of legislation like the Telecommunications Act of 1996 to provide law enforcement with what are referred to in the Telco industry as "lawful intercepts" at the expense of the ISP. However, what is occurring here appears to be far above and beyond the normal exercise of a lawful intercept.

For one thing, lawful intercepts are targeted at specific customers. There is no indication that the man-in-the-middle exploit being used here is executed in a targeted fashion; if targeted traffic interception were the goal, such an exploit would be a bizarre way to go about it, because all traffic would be collected regardless. Targeting with such an exploit would involve discarding traffic from non-targeted customers, as the NSA claims it does in the company of the particularly credulous.

There is another reason to believe that something untoward is afoot here: a recent FCC filing in which the nudity-obsessed Federal agency blatantly declared that Gogo Inflight Internet was cooperating with law enforcement in ways not required by law. You can review that filing here:


In its own defense, Gogo has claimed that the SSL forging - and the traffic interception it is designed to cover up - has nothing to do with surveillance at all. Gogo's CTO Anand Chari had this to say:
Whatever technique we use to shape bandwidth, it impacts only some secure video streaming sites and does not affect general secure internet traffic. These techniques are used to assure that everyone who wants to access the Internet on a Gogo equipped plane will have a consistent browsing experience… We can assure customers that no user information is being collected when any of these techniques are being used.
Chari's excuse sounds quite reasonable to those with no experience in networking and systems administration. To those familiar with solving bandwidth restriction dilemmas, Chari's explanation is, at best, the ramblings of a man who is completely incompetent and, at worst, an outright lie.

Over the course of my career, I have had to address exactly the sort of problem that Chari claims this matter is a response to. Before I explain why Chari's response is preposterous, I should start by phrasing the problem in a way that is more understandable.

Most companies have a limited amount of bandwidth. Bandwidth, after all, is expensive. For a small business of just a few people, it's not so hard to tell that one of your workers is downloading from Pirate Bay instead of attending to his work, and in the process ensuring that no one else can so much as check their email. But what if there are 500 workers? And what if the bandwidth use isn't intentional - what if it's being caused by malware? That's when a more technical response is called for.

This is a problem that has existed in commercial IT for decades; it's a problem that predates streaming media - it predates the world wide web, for that matter. Because the problem is so old, there are dozens of different approaches to resolving it, depending upon what kinds of resources are available and the overall structure of the network in which the problem is being addressed.

One of the many solutions to this kind of issue is to implement a technology called Quality of Service (QoS). In a nutshell (this is a very simplified explanation), Quality of Service enables network administrators to give certain types of traffic priority over others. A moment's thought shows how useful this is. When you send and receive email, it's not such a big deal if the transfer takes a few extra seconds. An extraordinarily long delay of many minutes can become annoying, but a delay of seconds is not noticeable to a user, and email applications are designed (when correctly configured) to handle delays so that they aren't a problem. Now take streaming video. A delay of even a few seconds while a user is watching a video completely spoils the experience; a long enough delay will even crash the video player software. So we have established that delays matter more to video than to email.

So let's imagine another circumstance. We are in a real world environment - an office, with a limited amount of bandwidth. One employee is playing a video, and another employee attempts to send an email with a large attachment. There is enough bandwidth for only one of these operations, but not both. What do we do?

By implementing QoS, we can give the video a higher priority than the email, letting the stream play smoothly while the attachment transfers in the background. This ensures that both users have a good network experience, and no errors are introduced into the application layer. Crucially, we can introduce QoS in such a way that we do not have to break encrypted services, as Gogo has done. Certain protocols can be prioritized, but we can also prioritize users and connections, accounting for a limited amount of bandwidth.
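
To make that concrete, here is a rough sketch of protocol-agnostic prioritization using the Linux traffic control tool tc (the interface name, rates, and classes are assumptions for illustration - no claim is made about what hardware Gogo actually runs):

# Root HTB queueing discipline on eth0; unclassified traffic defaults to 1:20
tc qdisc add dev eth0 root handle 1: htb default 20

# Class for latency-sensitive traffic (e.g. streaming video): higher guarantee
tc class add dev eth0 parent 1: classid 1:10 htb rate 8mbit ceil 10mbit prio 0

# Class for delay-tolerant bulk traffic (e.g. mail): lower guarantee, may borrow
tc class add dev eth0 parent 1: classid 1:20 htb rate 1mbit ceil 10mbit prio 1

# Steer SMTP into the bulk class by port number alone - the payload is never
# inspected, let alone decrypted, so SSL sessions pass through untouched
tc filter add dev eth0 parent 1: protocol ip u32 match ip dport 25 0xffff flowid 1:20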

Not only would such a solution ensure the privacy of users, it also tends to be faster and more reliable when scaling large amounts of traffic than what Gogo claims to be doing - which involves more than just routing and switching network "packets". Information sent over a network is divided into small packets that share certain standardized properties. This standardization allows packets to be handled consistently and reliably, even when the information inside them is unique. Handling packets as they travel is, in most circumstances, less resource-intensive than opening the packets up and dealing with the stuff inside of them. Consider the difference between your home wireless router, which handles the standardized packets in transit, and your home computer, which deals with the unique information inside of them.

The gist of the story is this: information you send while using Gogo Inflight Internet is almost certainly being snooped on; it's also possible, though not yet proven, that other similar services are snooping as well. Do not trust SSL connections provided to you by Gogo. A VPN could help you avoid the snooping, but further research is needed to determine which VPN solutions can be compromised by Gogo's setup.

(h/t read/write)

Sunday, October 12, 2014

NSA Targets Systems Administrators with No Ties to Extremism

The Details

This is a bit of an old story, but I've found to my unpleasant surprise that the issues surrounding it are not widely understood or known. Here's the gist: leaks from the US intelligence services have explicitly confirmed that the NSA targets systems administrators who have no ties to terrorism or extremist politics. If you are responsible for building and maintaining networks, the NSA will place you under surveillance both personally and professionally; they will hack your email, social network accounts and cell phone. The thinking behind this alarming strategy is that compromising a sysadmin provides root-level access to systems that enable further surveillance: hack an extremist's computer, and you track just that extremist; hack a sysadmin's computer, and you can track thousands of users who may include extremists among them (it's a strategy remarkably similar to the targeting of doctors in war zones).

Five years ago such a lead paragraph would have been among the most wild-eyed of conspiracy theories. Now, after the Snowden leaks and the work of other sources within the US intelligence community, the sysadmin targeting scheme has been proven conclusively through supporting documents circulated through a "wiki"-style system within the NSA, explained and reported by Ryan Gallagher and Peter Maass of The Intercept. The name of the scheme is I hunt sys admins. The entire document outlining the goals and methods of the I hunt sys admins scheme is available on The Intercept. (While I typically publish source documents directly on this website for ease of use, publishing these documents presents unique legal concerns that The Intercept is better equipped to handle - I apologize for the inconvenience of having to visit a second site to confirm sources, but I assure you it is well worth the effort.)

There are a few excerpts worth noting explicitly. First and foremost, the document describes that the surveillance typically begins by acquiring the administrator's webmail or Facebook account username. The NSA agent then uses an Agency tool called QUANTUM to inject malware into the admin's account pages. The Intercept has put together a video outlining QUANTUM's capabilities that is worth watching; the existence and capabilities of the tool are also confirmed through extensive NSA documentation. QUANTUM uses a man-on-the-side attack to hijack user sessions and redirect traffic to one of the NSA's Tailored Access Operations (TAO) servers - in this case, an application server called FOXACID. The same application is used to compromise Firefox and Tor users (a related program at Britain's GCHQ, called FLYING PIG, offers similar functionality even against SSL-protected traffic).

QUANTUM has a variety of different uses besides the one outlined above. It includes a series of plugins that allow NSA agents to take control of IRC networks, compromise DNS queries, run denial of service attacks, corrupt file downloads and replace legitimate file downloads with malware payloads.

The methodology matters because it demonstrates the importance of maintaining operational security even during personal time. These are not attacks that target political or military organizations; they do not even target corporations. They explicitly target individual systems administrators.

And there's more.

NSA agents use a tool called Discoroute to retrieve router configurations from passively intercepted telnet sessions. NSA documents outline how, rather than using sysadmins to target the corporations they work for, the NSA is interested in doing the reverse - using corporate router configurations to target individual sysadmins. For example, using Discoroute, a surveillance agent retrieves the access-list ruleset associated with a router. Reviewing that access-list can reveal the home IP addresses admins use to log in to systems remotely. While this may seem an egregious security oversight, the access-lists in question are not necessarily for core routers; the list could just as easily be retrieved from a PIX firewall, where an entry might be an IP address allowed to reach an intranet website.

The I hunt sys admins documents continue by outlining methods to identify and surveil malicious users. The author references the NSA's access to massive untargeted recordings of SSH sessions. Perhaps we can take some comfort in the fact that the author apparently does not take it for granted that the NSA can easily decrypt SSH session data. Still, quite a bit can be accomplished by analyzing encrypted data: in this instance, I hunt sys admins recommends reviewing the size of SSH login attempts to determine which were successful and which failed. IP addresses recorded failing multiple attempts against large numbers of IPs can safely be identified as belonging to brute-force attackers.

Why You Should Care About NSA Surveillance Even if You Do Not Care About NSA Surveillance

This is a website about technology, not politics. Whatever your opinion of the legitimacy of warrantless surveillance, the actions of the NSA and the other Five Eyes surveillance agencies are having a significant and deleterious impact on the internet and those who build and support it. Additional leaks have demonstrated that the NSA provided security firm RSA with $10 million to use the flawed Dual_EC_DRBG random number generator in its unfortunately-named BSAFE cryptographic library, providing a back door to all applications relying on BSAFE. Even more disturbing are confirmations that the NSA has obtained copies of root CA certificates and used them to compromise SSL as implemented by major internet services.

But why should we care? "I'm not guilty and so I have nothing to hide," as the oft-used rationalization goes. Warrantless surveillance by governments is only one consequence of the actions outlined above. Even admins unconcerned with government surveillance should note that actors other than the Five Eyes nations can easily engage in the same practices explained in the I hunt sys admins documents; frankly, few if any of the I hunt sys admins techniques were actually invented by the NSA. These are techniques designed by criminals, and criminals have massive incentives to continue innovating them. To protect our privacy from criminals we must follow security best practices, and by following best practices we necessarily protect ourselves against government surveillance as well.

The fact remains that sysadmins will remain a desirable target for those seeking to break into protected systems. Protecting those systems and the users who depend on them is part of our mandate as administrators. Now that we know the extent to which the security environment has changed, the question becomes whether we continue to adapt to the new environment to best protect our applications and users, or whether we disregard our mandate.

Wednesday, September 19, 2012

More Fun With PCI

I received a notification from a large security auditing firm that, of the ciphers currently available, only RC4 ciphers will be considered PCI compliant.

My assumption based on the notification is that this move is intended as a rejection of CBC (Cipher Block Chaining). Well, that's fine as far as I am concerned. CBC has some serious issues as implemented in SSLv3 / TLS v1.0. In a nutshell, an attacker can time responses from applications using the block cipher to narrow down ranges of possible data in SSLv3 and achieve partial payload decryption in TLS. So-called "stream" ciphers like RC4 are immune to this particular attack vector. The attack does not recover private keys, it is by no means fast (a minimum of three hours), and it requires access to monitor the session. Further, patches exist to mitigate the CBC timing exploit (for example, the NSS libraries used by Mozilla have been patched).


I will save debunking the man-in-the-middle hysteria for a later post. What frustrates me about the requirement of RC4 stream ciphers for PCI compliance is not that CBC ciphers are no good - they are weak - it is the notion that RC4 is somehow sufficient. Some points to consider:


- RC4 exploit using SSH with null password prevention enabled

- RC4 is frequently implemented poorly in applications other than sshd, for example by using poor or no random number generation

- Successful attack vectors exist, but they have yet to be put into a helpful graphical interface for use by your neighborhood teenager (as the BEAST framework did for CBC). Paul and Maitra published RC4 key reconstruction techniques in 2007 (Permutation after RC4 Key Scheduling Reveals the Secret Key, SAC 2007), based on keystream byte assignment biases first published by Roos in 1995. This means that, unlike CBC, there is a published algorithmic approach to full key recovery for RC4.
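
For those who have to comply anyway, it is easy to see exactly which RC4 suites a server would end up offering - the list below comes from whatever OpenSSL build is installed locally, so output will vary:

# Show the RC4 cipher suites the local OpenSSL build supports, with
# protocol version, key exchange, authentication and digest details
openssl ciphers -v 'RC4'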


It can't be stressed enough that all of these vectors assume an attacker on your wireless or local network. If that is the case, then SSL is the very least of your problems. While theoretical dismissal of this or that cipher based on penetration ability is sound and valuable, the PCI standard suggests a holistic approach to security. The various levels of PCI-DSS compliance amount to an admission that the goals of securing a system differ based on the purpose of that system. Banning the use of CBC will not get fewer sites hacked, but it will keep administrators preoccupied with yet more busy work, switching from one cipher with published flaws to another cipher with published flaws.

Thursday, April 12, 2012

Same Domain, Multiple Machines, SSL?

I saw a lot of misinformation about this on the inter-tubes recently - some of it intentionally misleading customers, some of it unintentional. This may be remedial for a lot of readers, but a clarification is worth posting to help clear up the confusion. Here are some facts that should help people first making the leap to securing multiple-server environments:

Certificates are domain and private key specific. They are not machine specific. You are welcome to generate multiple SSL certificates for the same domain to host on separate servers. Think about it for a bit - this *has* to be true. When everyone goes to https://google.com, are they all hitting the same web server or SSL caching server? Of course not.*

The most common scenario where this is valuable is a load-balanced web cluster, but I recently came across it in a deployment with web and mail components, where the mail admin neglected to give their MTA a unique FQDN *and* the organization uses SSL/TLS for mail retrieval *and* the organization does not wish to use a self-signed certificate for this purpose.

You don't need to purchase multiple certificates for this. Just export the certificate to a PFX and import it on the next server. In IIS 6, this process is outlined here: http://support.microsoft.com/kb/313299. To use OpenSSL in Linux, here is a good guide: http://www.madboa.com/geek/openssl/#cert-pkcs12
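
For the OpenSSL route, the round trip looks roughly like this (the file names are assumptions, and you will be prompted for an export password):

# Bundle the existing certificate and private key into a PFX (PKCS#12) file
openssl pkcs12 -export -in example.com.crt -inkey example.com.key -out example.com.pfx

# On the second server, unpack the private key and certificate again
openssl pkcs12 -in example.com.pfx -nocerts -nodes -out example.com.key
openssl pkcs12 -in example.com.pfx -clcerts -nokeys -out example.com.crt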

(*Yeah, I know they are using hardware acceleration, smarty pants. The same argument applies, plus the complexity of dealing with hardware tokens.)
