
Wednesday, September 19, 2012

More Fun With PCI

I received a notification from a large security auditing firm that, of the ciphers currently available, only RC4 ciphers will be considered PCI compliant.

My assumption based on the notification is that this move is intended as a rejection of CBC (Cipher Block Chaining). Well, that's fine as far as I am concerned. CBC has some serious issues as implemented in SSL v3 / TLS v1.0. In a nutshell, an attacker can time responses from applications using the block cipher to narrow down ranges of possible data in SSLv3, and to achieve partial payload decryption in TLS. So-called "stream" ciphers like RC4 are immune to this particular attack vector. You don't get private keys from the attack, it's by no means a fast attack (a minimum of three hours), and you need access to monitor the session. Further, patches exist to override the CBC timing exploit (for example, the NSS libraries used by Mozilla have been patched).
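For concreteness, here is a minimal sketch of what the auditor's requirement translates to on a web server, assuming Apache with mod_ssl (the directives are standard mod_ssl; the exact cipher string an assessor will accept is my assumption):

    # Hypothetical mod_ssl configuration limiting negotiation to RC4
    # suites, per the auditor's notice. RC4-SHA and RC4-MD5 are standard
    # OpenSSL cipher names; !aNULL and !eNULL exclude unauthenticated
    # and unencrypted suites.
    SSLProtocol all -SSLv2
    SSLCipherSuite RC4-SHA:RC4-MD5:!aNULL:!eNULL
    SSLHonorCipherOrder on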


I will save debunking the man-in-the-middle hysteria for a later post. What frustrates me about the requirement of RC4 stream ciphers for PCI compliance is not the implication that CBC ciphers are no good - they are weak - it is the notion that RC4 is somehow sufficient. Some points to consider:


- An RC4 exploit has been demonstrated against SSH with null password prevention enabled

- RC4 is frequently implemented poorly in applications other than sshd, for example by using weak or no random number generation

- Successful attack vectors exist, but they have yet to be packaged into a helpful graphical interface for use by your neighborhood teenager (as the BEAST framework did for CBC). Paul and Maitra published RC4 key reconstruction techniques in 2007 ("Permutation after RC4 Key Scheduling Reveals the Secret Key," SAC 2007), building on key-to-permutation biases first published by Roos in 1995. This means that, unlike CBC, there is a published algorithmic approach to full recovery of an RC4 secret key; the short simulation below illustrates the bias these techniques build on.
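For the curious, here is a minimal simulation (my own sketch, not code from either paper) of the bias Roos identified: run the standard RC4 key scheduling algorithm over random keys and count how often the early permutation bytes land on Roos' predicted value of y(y+1)/2 + K[0] + ... + K[y] mod 256. A truly random permutation would match only about 0.4% of the time.

    import random

    def rc4_ksa(key):
        # Standard RC4 Key Scheduling Algorithm (KSA).
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        return S

    def roos_prediction(key, y):
        # Roos (1995): the most likely value of S[y] after the KSA.
        return (y * (y + 1) // 2
                + sum(key[i % len(key)] for i in range(y + 1))) % 256

    TRIALS, KEYLEN = 20000, 16
    hits = [0] * 8
    for _ in range(TRIALS):
        key = [random.randrange(256) for _ in range(KEYLEN)]
        S = rc4_ksa(key)
        for y in range(8):
            if S[y] == roos_prediction(key, y):
                hits[y] += 1

    for y, h in enumerate(hits):
        # Expect roughly 37% at y=0, tapering off; chance alone is ~0.4%.
        print("S[%d] matches prediction %.1f%% of the time" % (y, 100.0 * h / TRIALS))

That visible structure in the very first bytes of the permutation is exactly what the key reconstruction work exploits.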


It can't be stressed enough that all of these vectors assume an attacker on your wireless or local network. If that is the case, then SSL is the very least of your problems. While theoretical dismissal of this or that cipher based on penetration ability is sound and valuable, the PCI standard suggests a holistic approach to security. The various levels of PCI-DSS compliance amount to an admission that the goals of securing a system will differ based on the purpose of that system. Banning the use of CBC will not get fewer sites hacked, but it will keep administrators preoccupied with yet more busy work, switching from one cipher with published flaws to another cipher with published flaws.

Wednesday, July 11, 2012

PCI Compliance Scans and Scams

HIPAA, SOX, SAS-70 - those whose business relies on hosting a website are no strangers to the regulatory schemes of trade organizations and their acronyms.

The PCI Data Security Standard is perhaps the most well known and widely adopted. PCI DSS is a set of very general outlines of security best practices for those who process and/or store credit cards using computers. Compliance is certified by a third-party corporation (a Qualified Security Assessor, or QSA), and demand is created by offering lower credit card transaction fees to websites that are certified as compliant. On the whole, the initiative has had some big successes. Credit card companies win by reducing incidents of fraud as more sites adopt standard security features, merchants win through reduced transaction costs and by being able to advertise a third-party certification of secure site design, and the companies responsible for certification get to exist and create new jobs in the process. The standards have gone a long way toward getting merchants to comply with security initiatives that had little traction before - the adoption of secure encryption keys is one such example.

As with any plan, there are shortcomings. The most commonly cited shortcoming is endemic to any security certification: PCI is only a very general review of a very specific set of security vulnerabilities; typically these include widely published operating system vulnerabilities, as well as web server overflow, injection, XSS and enumeration exploits. Many users of PCI-compliant sites are lulled into a false sense of protection, not understanding that certification covers only a single point in time and that changes to the site following certification could expose the site to attack. Site owners may also feel that PCI compliance has a one-to-one relationship with site security, seeing compliance as a replacement for ongoing review and planning. Security is best handled as part of the initial design of the code base and server architecture, with regular updates and testing to address the latest mischief.

This leads us to the issue I want to discuss in this post: the role of the third-party certification companies / Qualified Security Assessors. In theory, market forces should make such organizations reliable and trustworthy. After all, a publicly hacked site that had been certified as PCI compliant would be quite an embarrassment for the corporation that provided the certification. Further, there must be some point at which the credit card companies would revoke their relationship with a QSA. In reality, the incentives work out in a way that is a bit more counterintuitive. I have yet to read of a public credit card data breach that discusses PCI compliance liability in any capacity - the embarrassment without fail falls on the agency that was retaining the information, not the companies retained to secure the data. Further, credit card companies reserve the right to charge fines to companies that are hacked and whose PCI compliance reviews failed to identify the source of intrusion. Paradoxically, if the PCI compliance reviews illustrate the flaw but it is ignored, fines are not levied. This produces an incentive for auditors to *not* find security flaws, and for credit card companies to certify organizations that are less than thorough in their reviews. There are many auditing companies out there that do a great job, but if there is one principle of human behavior worth its weight in ink, it is that human behavior follows incentives. If it doesn't seem to at first, wait.


As an engineer, I often play a role in correcting issues caught by PCI compliance firms. For the average merchant, the compliance firm provides a questionnaire confirming logistical security procedures, and then performs a scan of the website under audit. The scanning technology used is neither novel nor designed by the auditing firms themselves. Tenable's Nessus scanner is in wide use for this purpose (tip to IT companies: if you are using an unmodified third-party application as the foundation of your business while marketing yourself as having innovated that application, at least modify the reports provided by that application ... slap a logo on it, drop it into a template, maybe even add some line breaks).


My complaint is not with Tenable or those using it - Nessus is an excellent product, and if you do not use it regularly within your own organization, I recommend you download it immediately. The issue I have is with auditing firms who say they are scanning when they are not.


This is a somewhat recent phenomenon as far as I can tell (I've worked with quite a few firms over the years and have only encountered this specific issue with a handful of them within the last 6 months). Essentially, the scan is performed and enumerates data that would be revealed by any website - language framework version, operating system, web server version, etc. The scan then compiles a list of security vulnerabilities based on the enumerated application versions, without checking to see whether those vulnerabilities are valid. Suppose, just as an example, that IIS 7.5 can be taken over by a specific overflow vulnerability. Then let's say that Microsoft identified the issue and released a Security Bulletin / emergency patch for it. Then let's say that you patch your system, ensuring that you are no longer vulnerable. A compliance firm of the kind I am describing would still provide you with a compliance report listing your site as vulnerable to that overflow.
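In effect, the methodology amounts to nothing more than the following sketch (the banner strings, advisory descriptions, and function names are hypothetical, mine for illustration): grab the Server header, then report every advisory ever published for that version string, with no probe of whether the flaw is actually present or patched.

    import http.client

    # Hypothetical lookup table pairing banner strings with every advisory
    # ever published for that version - patched or not.
    VULNS_BY_BANNER = {
        "Microsoft-IIS/7.5": ["example overflow (fixed by a later security bulletin)"],
        "Apache/2.2.22": ["example module flaw (may not even be compiled in)"],
    }

    def banner_only_scan(host):
        # Grab the Server header and report by version string alone.
        # Nothing here verifies that a flaw is actually exploitable.
        conn = http.client.HTTPConnection(host, timeout=5)
        conn.request("HEAD", "/")
        banner = conn.getresponse().getheader("Server", "")
        conn.close()
        return banner, VULNS_BY_BANNER.get(banner, [])

    if __name__ == "__main__":
        banner, findings = banner_only_scan("example.com")
        for f in findings:
            print("'%s' flagged: %s" % (banner, f))  # flagged whether patched or not

An honest scan follows the banner grab with a check for the vulnerability itself - which is exactly the step these reports skip.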


What this means is that the auditing firm is not reviewing your site for vulnerabilities; it is gathering basic information about the software running on your server and compiling an almost random list of vulnerabilities that may or may not apply to it.


This might sound like a trivial distinction at first. After all, vulnerabilities are vulnerabilities, right? If some of them are already patched, just move on to the ones that are not. 


The issue is with the efficacy and value of these scans. The scans are helpful when they illustrate issues that are not already apparent to the administrator, or when they confirm compliance. False positives are the worst possible outcome because they waste time while providing no value whatsoever. They are worse than not scanning at all: nearly all QSAs request some sort of confirmation that a false positive does not apply to the scanned server. This creates pointless busy work for administrators as they rush to provide documentation (which is not as easy as it sounds in the absence of a specific update - how do you prove a negative?). The time spent wading through bureaucracy could be spent identifying and resolving actual problems. Because the scans and auditing services provided by QSAs are not free, injury is added to insult when merchants are forced to absorb the costs of time spent attempting to keep the scan focused on actual security issues.
