Showing posts with label windows server 2008. Show all posts

Sunday, January 13, 2013

File Defragmentation Tools for Windows 2003/2008, Redhat/CentOS and Ubuntu

For managing fragmentation of NTFS (Windows Server 2003/2008, XP, Vista, and Windows 7):

For general disk defragmentation, the following utilities offer a substantial improvement in overall performance and efficacy over native operating system tools:
Auslogics Disk Defrag or Raxco PerfectDisk

For disks unsupported by the above tools, for frequently executed and/or locked files, or when you simply need a straightforward command-line utility that can be used as part of a shell script:
Contig from the Sysinternals Suite
Contig has been of particular value when managing backup servers - servers storing huge files with substantial writes on a regular basis. Being able to target the backup files directly allows defragmentation to be scheduled around each backup job, eliminating the need for downtime on these systems as part of this kind of disk maintenance. It can also be used for per-file fragmentation analysis and reporting.
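
To make that concrete, here is a minimal sketch (in Python, for portability) of the kind of wrapper I schedule after backup jobs. It only builds the Contig command lines rather than running them, and the `.bak` glob, the size threshold, and the assumption that contig.exe is on PATH are all illustrative:

```python
# Hypothetical sketch: build one Contig command line per large backup file.
# contig.exe (Sysinternals) is assumed to be on PATH; the *.bak glob and
# the default 1 GB threshold are illustrative, not prescriptive.
from pathlib import Path

def contig_commands(backup_dir, min_bytes=1 << 30):
    """Return a contig command line for each sufficiently large backup file."""
    cmds = []
    for f in sorted(Path(backup_dir).glob("*.bak")):
        if f.stat().st_size >= min_bytes:
            # contig -v <file>: verbose defrag of a single file;
            # swap in "-a" for analysis/reporting without defragmenting.
            cmds.append(["contig.exe", "-v", str(f)])
    return cmds
```

Feed the resulting commands to Task Scheduler (or run them via subprocess) once each backup job completes, so the defrag never overlaps the write.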

For managing fragmentation of ext4 file systems (newer versions of Redhat/CentOS, Ubuntu, Debian, etc.):

e4defrag - Linux users (or at least the Linux users I know) have been waiting a long time for an online defragmentation utility. We've all ignored the problem, pretending fragmentation didn't happen on our Linux machines, until a reboot after 2-3 years of uptime and read/writes forced an fsck at the worst possible time.

e2freefrag - Provides online free-space fragmentation analysis and reporting for an ext2/ext3/ext4 device.

For managing fragmentation of ext3 file systems (older versions of Redhat/CentOS, Ubuntu, Debian, etc.):

Good luck! Your options are unfortunately a bit limited.

Many readers may ask: ext3 is a journalled filesystem, so why even bother? Primarily, to increase IOPS (currently the primary performance bottleneck in terms of price per unit of performance). Journalled filesystems have seek times just as NTFS does, and reducing those seek times improves performance. Further, unexpected system events can force the operating system to replay the journal; regular maintenance helps ensure this process is timely and that downtime is minimized as a result.

I have often heard it said that journal replay "often takes only a second" and as a result can be safely disregarded. While I respect everyone's opinion, I have to very urgently disagree. Most of my experience has been in commercial data center environments with several thousand servers, and at scale, the statistically insignificant becomes a regular headache. Day-to-day operations are only part of my concern as an administrator; disaster recovery is just as important in my opinion - safeguarding against improbable catastrophic scenarios and reducing their impact has always been part of my agenda.

That said, let's continue: ext3 requires you to unmount your partition to defragment it. IMO, ext3 is still the most widely used Linux filesystem. I highly recommend the e2fsprogs suite, which includes the following tools:

e2fsck - it's just fsck for ext2/ext3/ext4 (not a vulgar typo); performs a filesystem integrity check
mke2fs - creates filesystems
resize2fs - expands and contracts ext2, ext3 and ext4 file systems
tune2fs - modifies file system parameters
dumpe2fs - prints superblock and block group info to standard output or pipe destination of choice
debugfs - an interactive debugger for examining and changing the state of ext2/ext3/ext4 file systems; useful for emergency troubleshooting

For defragmentation, you will typically be using the following:

mount / umount - used to mount and unmount filesystems (also widely known as featuring one of the more chuckle-inducing Linux commands when in need of command syntax assistance: # man mount)
fsck - File System ChecK. Checks the specified file system for errors. 
[note: modifying /etc/fstab allows you to specify which devices are mounted]
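
As a small illustration of the fstab point, picking out which ext3 entries would need unmounting before offline maintenance is easy to script. This is a hypothetical sketch in Python with invented sample fstab content:

```python
# Hypothetical sketch: list ext3 entries from fstab-formatted text so you
# know which devices and mount points must be unmounted before an offline
# defragmentation pass.
def ext3_mounts(fstab_text):
    mounts = []
    for line in fstab_text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        fields = line.split()
        if len(fields) >= 3 and fields[2] == "ext3":
            mounts.append((fields[0], fields[1]))  # (device, mount point)
    return mounts

sample = """
# /etc/fstab
/dev/sda1  /      ext3  defaults  1 1
/dev/sda2  /home  ext3  defaults  1 2
/dev/sda3  swap   swap  defaults  0 0
"""
print(ext3_mounts(sample))
```

On a live box you would read `/etc/fstab` instead of the sample string, of course.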

Wednesday, January 2, 2013

Fixing Event ID 10154 - The WinRM service failed to create the following SPN

The Problem

The configuration of the system when this error was encountered is as follows:

A. Windows Server 2008 R2 redundant domain controllers
B. Windows Server 2003 web server with Windows Remote Management enabled, part of the Active Directory deployment - we will call this WEB
C. For the sake of our example, let's say I have configured an OU named "Web Servers" on those domain controllers

Whenever the Windows 2003 web server reboots, or the WinRM service on it restarts, the following error is logged in the Event Viewer:

Event ID: 10154
Source: Microsoft-Windows-WinRM
Version: 6.1
Message: The WinRM service failed to create the following SPN: %1.
Additional Data
The error received was 8344: Insufficient access rights to perform the operation.
User Action
The SPN can be created by an administrator using setspn.exe utility.

***NOTE: This issue has also been well documented as occurring while using Windows Small Business Server (SBS) 2003

The Explanation

First, it's important to understand what all of this means and why we should care. This error and its fix are documented on a number of websites elsewhere; however, those documents lack any explanation to help us better understand what is occurring here.

SPN stands for Service Principal Name. SPNs exist on the domain controller to indicate which service applications are assigned to which computers within the Active Directory forest. WSMAN means Web Services Management (commonly notated as WS-Management), a Microsoft protocol used to acquire information related to services and applications hosted on a remote server, and to manage those applications and services. WSMAN differs significantly from SNMP by allowing administrators to perform a more comprehensive array of tasks. Whereas SNMP would simply get information, WSMAN gets information and allows an admin to remotely install and modify applications based on that information (SNMP has SetRequest, but it is limited to a narrow set of predefined variables).

The WinRM service (Windows Remote Management) is what is installed and runs on servers to listen for WSMAN commands. WinRS (Windows Remote Shell) is the client-side application of the protocol, and sends the WSMAN commands to the remote host.

Now that we understand the context, we can return to our specific error with a better grasp of the situation. It's important to note that I was able to verify that the WSMAN SPN does in fact exist on both of my domain controllers, so using setspn.exe to create the SPN wasn't going to help me much. I verified this by logging into the domain controllers and running the following command:

setspn -L WEB
(remember, we are assuming that my web server is named WEB)

The output contained a number of items, including the two I was looking for:

This lets me know that the SPNs do in fact exist. Knowing that the WinRM service will try to rewrite the SPN every time it starts, and combining that with the Additional Data field of the error message, we now have a confirmed diagnosis and prognosis: the web server has insufficient permissions to write to the SPN, so the forced rewrite of the SPN at service start generates the error. While there may be no immediate server-side issues because the SPN already exists, that could change at any time.
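
As a side note, checking for the WSMAN entries in `setspn -L` output is easy to script across a fleet. This is a hypothetical Python sketch; the sample output format is an assumption based on my servers, so adjust it to whatever yours prints:

```python
# Hypothetical sketch: pull the WSMAN/* entries out of captured
# "setspn -L WEB" output. The sample text below is illustrative.
def wsman_spns(setspn_output):
    return [line.strip()
            for line in setspn_output.splitlines()
            if line.strip().upper().startswith("WSMAN/")]

sample = """Registered ServicePrincipalNames for CN=WEB,OU=Web Servers,DC=ai-host,DC=com:
    WSMAN/WEB
    WSMAN/web.ai-host.com
    HOST/WEB
"""
print(wsman_spns(sample))
```

An empty result from a real capture would tell you the SPN genuinely needs to be created with setspn.exe, as the event message suggests.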

The Solution

First, it is necessary to confirm that the WinRM service is properly patched and updated. For Windows 2003 servers, the subject of our discussion here, this means updating to version 2.0 provided via KB968930. 2003 does not include WinRM by default, and older 2003 servers that you have inherited may still be running the antediluvian version 1.1. Windows 2008 servers now include version 3. 

Supposing the service is fully updated, there are two ways to go about doing this. Both should accomplish the same thing, but if you have issues with one method try the other. 

The first is the easiest to perform for those more comfortable with a GUI. From your domain controller, launch ADSIEDIT.MSC. Connect to the relevant Active Directory instance (typically just the default local connection is fine), then navigate through the domain to the server we are experiencing this issue with. The order of navigation is:
OU=Variable Organizational Unit
CN=Machine Name
Using our example, I would navigate to:
OU=Web Servers
CN=WEB
Right-click on CN=WEB and select Properties. Select the Security tab, click Add, and add "NETWORK SERVICE". (This assumes that you run the WinRM service using the default identity settings - select the account that is relevant for your configuration.) Click Advanced, go to the Effective Permissions tab, and select "Validated write to service principal name". Then click OK to save your changes. Reboot the domain controller and restart the WinRM service.

Once completed, use setspn -L and the Event Viewer to confirm whether the change was successful. If not, you can use the command line option as an alternative: 

dsacls "CN=WEB,OU=Web Servers,DC=ai-host,DC=com" /G "S-1-5-20:WS;Validated write to service principal name"

Same end result here as with the GUI - reboot the DC, restart the WinRM service, and check the logs or setspn -L. Either method accomplishes the same thing, though there are a host of reasons why a GUI can be problematic. I have yet to encounter a set of circumstances where one of these two methods does not resolve the issue. If this does not resolve your trouble, please email me or comment.
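
For scripting that grant across many machines, the dsacls command line can be assembled programmatically. A hypothetical Python sketch follows; as I understand it, S-1-5-20 is the well-known SID for NETWORK SERVICE, and WS is the dsacls "write self" (validated write) right:

```python
# Hypothetical sketch: build the dsacls grant used above for any
# machine/OU/domain combination. Only constructs the command line.
def dsacls_grant(machine, ou, domain):
    # Distinguished names are leaf-first: CN, then OU, then the DC parts.
    dn = "CN={},OU={},{}".format(
        machine, ou, ",".join("DC=" + part for part in domain.split(".")))
    return ["dsacls", dn,
            "/G", "S-1-5-20:WS;Validated write to service principal name"]

print(dsacls_grant("WEB", "Web Servers", "ai-host.com"))
```

If your WinRM service runs under a different identity, substitute that account's SID for S-1-5-20.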

Extra Credit

Planning on using the WinRM IIS Extension? Launch Server Manager and select Add Features to provision the needed packages. Reboot your server and launch a command prompt, then use winrm qc to complete the configuration.

Tuesday, January 1, 2013

Display Classic ASP Errors in the Browser Using IIS7

Classic ASP works a bit differently than the .NET framework. To many administrators (like me), the subtle differences can be a bit of a nuisance. Tasks that are performed daily with .NET are just different enough with Classic ASP to force an admin to consult the Google Brain. If that's why you're here - don't feel bad and don't worry. We have all been there, and we have a solution for you.

This post will focus on error display using IIS7 and Classic ASP. In .NET, this is a simple matter of configuring the customErrors element in the web.config file. With Classic ASP, a bit of work needs fixin' within IIS Manager. Launch IIS Manager (Start --> Run --> inetmgr). Highlight your server name on the left-hand menu. Then on the right side, look under the IIS heading. Select ASP, expand Debugging Properties, and set "Send Errors to Browser" to True. Back under the IIS heading, select Error Pages and then the 500 error page. Select Edit Feature Settings and select "Detailed Errors". That should be all you need.
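
If you prefer editing configuration directly, the same two settings can, to the best of my knowledge, be expressed as a web.config fragment at the site level. Treat this as a sketch and verify against your IIS version - note that the asp section is often locked at the server level by default, in which case it must be set in applicationHost.config instead:

```xml
<!-- Sketch: send detailed Classic ASP errors to the browser in IIS7 -->
<system.webServer>
  <asp scriptErrorSentToBrowser="true" />
  <httpErrors errorMode="Detailed" />
</system.webServer>
```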

It is possible that you continue to experience issues - if that is the case, either there are problems with IIS or your ASP install is broken. Feel free to email me or comment if you still have issues and we can figure it out.

Thursday, November 29, 2012

Scheduling Application Pool Recycles in Windows Server 2008 and 2012

TimeSpan[]Array and the TimeSpan Collection Editor

The process for scheduling an application pool to recycle at specific times in Windows Server 2008 and 2012 is a bit different than in previous versions. Launch IIS Manager, expand Application Pools and highlight the application pool to modify. Under the Actions menu on the right-hand side, select Advanced Settings.

Scroll down to the Recycling section and expand it. You are looking for the TimeSpan[]Array entry in the Specific Times section. Click the three dots to the right of this entry.

Click the Add button under the Members window on the left-hand side. This will produce a new value in the Properties window. Click the new value and modify it using a 24-hour / military clock standard. Select OK and you're all set!
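
The same schedule can be scripted with appcmd when you need to apply it across many servers. Here is a hypothetical Python sketch that only assembles the command lines; the pool name and times are placeholders, and the stock appcmd path is an assumption:

```python
# Hypothetical sketch: build appcmd commands that add specific recycle
# times to an application pool's schedule. Only constructs the commands.
APPCMD = r"C:\Windows\System32\inetsrv\appcmd.exe"  # stock location

def recycle_schedule_cmds(pool, times):
    """times are 24-hour hh:mm:ss strings, matching the GUI's clock."""
    return [[APPCMD, "set", "apppool", pool,
             "/+recycling.periodicRestart.schedule.[value='{}']".format(t)]
            for t in times]

for cmd in recycle_schedule_cmds("MyAppPool", ["03:00:00", "15:00:00"]):
    print(" ".join(cmd))
```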

Sunday, November 25, 2012

List of Windows Activation Keys for KMS

Includes Keys for Windows Server 2012, Windows Server 2008, Windows 8, Windows 7 and Vista

This list of keys for KMS can be a real hassle to find in Microsoft's online documentation, so it is provided here in the hopes of saving you some time. Please note that these are not stolen product keys; publishing them is a time-saver for administrators managing large deployments of fully licensed Microsoft products - so if you are a thief or an Internet police person, sorry to disappoint, but you've made it to the wrong site.


Windows Server 2012 Core

Windows Server 2012 Core N

Windows Server 2012 Core Single Language

Windows Server 2012 Core Country Specific

Windows Server 2012 Server Standard

Windows Server 2012 Standard Core

Windows Server 2012 MultiPoint Standard

Windows Server 2012 MultiPoint Premium

Windows Server 2012 Datacenter

Windows Server 2012 Datacenter Core


Windows 8 Professional

Windows 8 Professional N

Windows 8 Enterprise

Windows 8 Enterprise N


Windows Server 2008 R2 HPC Edition

Windows Server 2008 R2 Datacenter

Windows Server 2008 R2 Enterprise

Windows Server 2008 R2 for Itanium-Based Systems

Windows Server 2008 R2 Standard

Windows Web Server 2008 R2

Windows Server 2008 Datacenter

Windows Server 2008 Datacenter without Hyper-V

Windows Server 2008 for Itanium-Based Systems

Windows Server 2008 Enterprise

Windows Server 2008 Enterprise without Hyper-V

Windows Server 2008 Standard

Windows Server 2008 Standard without Hyper-V

Windows Web Server 2008


Windows 7 Professional

Windows 7 Professional N

Windows 7 Enterprise

Windows 7 Enterprise N

Windows 7 Enterprise E


Windows Vista Business

Windows Vista Business N

Windows Vista Enterprise

Windows Vista Enterprise N

Wednesday, October 31, 2012

FastCGI and Application Pool CPU Limiting in IIS7

Or, How To Fix the "Unable to place a FastCGI process in a JobObject" / 0x80070005 Error When Applying a CPU Limit to an IIS7 Application Pool

Here is our example - you have a website that uses several different programming languages running on an IIS7 server. Perhaps your main site is running .NET, and you are using PHP for the website's blog, or Python for a mailing script. You have installed the FastCGI module to speed things up and have it configured successfully.

Unfortunately, CPU utilization is overall fairly high for this site. You want to make sure that it doesn't get *too* high and crash the server, or overwhelm other applications and services you have running on the same server. This article assumes that you already have configured a dedicated application pool for your site, and per best practices you are running the application pool under a unique application pool identity user, and not the Network Service. It also assumes that you only have one application pool configured for the site, handling both your FastCGI and non-CGI applications.

When you open task manager, quite a bit of the CPU utilization is being used by w3wp.exe processes - configuring FastCGI has the php-cgi.exe processes under control.
You decide to configure CPU limits for the application pool. This can be accomplished by opening IIS Manager, selecting Application Pools from the left-hand side, selecting your application pool, clicking Advanced Settings and reconfiguring the values under the CPU header. At a minimum, you will need to set Action to KillW3wp, set a limit (the values are assigned in 1/1000ths of one percent, so don't forget to carry your decimal point!) and assign a reset interval to ensure the application pool is reset and not left in an off state during the few hours a week that you as a server administrator are allowed to sleep.
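
Since those limit units trip people up, here is a tiny hypothetical helper showing the conversion from a percentage to the value IIS7 expects:

```python
# Hypothetical helper: IIS7's CPU "Limit" field is expressed in
# 1/1000ths of one percent, so a 50% cap is entered as 50000.
def iis_cpu_limit(percent):
    return int(percent * 1000)

print(iis_cpu_limit(50))   # a 50% cap
print(iis_cpu_limit(7.5))  # a 7.5% cap
```
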
Normally, this would work just fine. But with FastCGI applied to your site, PHP will become unresponsive, and provide the following error:

* Unable to place a FastCGI process in a JobObject. Try disable the Application Pool CPU Limit feature * Error Number: 5 (0x80070005). * Error Description: Access is denied. 

In a nutshell, FastCGI places php-cgi.exe processes inside of job objects. The Windows Process Activation Service does the same thing when CPU limits have been applied. Having both active means that Windows will try to put one job object inside of the other, which is verboten. 

Fortunately, there is a hotfix available from Microsoft (KB970208) that prevents this nesting behavior from occurring. After installing it, restart the server and the error should be resolved.

Another alternative is implementing Windows Server Resource Manager. Arguably WSRM is the preferred solution, however it deserves its own (forthcoming) post, as WSRM capabilities extend way beyond a FastCGI band-aid.

What about Windows Server 2003? Unfortunately, you are out of luck in that scenario in terms of an easy hotfix. For Windows Server 2003 users, it is necessary to segment FastCGI and non-CGI applications into different folders and create distinct application pools for both. Then you can manage CPU limiting features separately without issue.

Sunday, October 28, 2012

Changes to Windows Server 2012 Media Handling Reduce Bandwidth Requirements for Remote Desktop (RDP) and Terminal Services

RemoteFX Media Streaming Introduced

Over the years I have worked at both Internet Service Providers and server hosting companies. In both environments, customers have found thin client deployment and virtual desktop provisioning stymied by the bandwidth needs of remote desktop when used for day-to-day desktop computing style tasks. I can't remember how many times I have worked with a company whose entire network has failed or flapped because of employees downloading torrents or watching Youtube videos from a remote server. Other times, I have worked on Terminal Services capacity planning projects, and found myself impressed by the difficulty of giving reliable estimates even where good data is available.

Many companies have been completely unable to reap the rewards of hosted desktops (fast provisioning and restoring, centralized management, easy hardware replacement) because of the costs of reliable high-throughput internet connections to their office. Data center bandwidth isn't cheap, either. A number of companies have been founded (and a few, like Citrix, have flourished) around introducing appliances and applications to further compress the data on both ends of a remote desktop connection.

The rewards to the end user, then, of improving multimedia performance over RDP are huge. Microsoft is claiming to have done just that with Windows Server 2012.

Changes From Windows Server 2008 / Windows 7

Windows Multimedia Redirection (WMR) was the name for special multimedia handling in the last version of Windows. WMR had some positive innovations of its own - rendering takes place on the client side, so CPU load on the server is decreased, and under normal circumstances this is accomplished without a significant reduction in quality. But there were a number of problems with the implementation: WMA, WMV, MP3 and DivX are handled, while unsupported formats get no special rendering (unsupported includes Flash, Silverlight and Quicktime - basically almost all video on the web). The client requires RDP 7.0 to take advantage of any of this. Bandwidth consumption is wholly dependent upon the bit rate of the original video. And the frame rate sucks, becoming worse with scale.

Windows Server 2012 addresses the issues differently - WMR is replaced by RemoteFX. Through some secret mojo that has yet to be fully explained by Microsoft, RemoteFX identifies the regions of the screen that are rendering video. The video content is encoded using the H.264 codec and the RemoteFX Progressive Codec, and audio is encoded using the AAC codec. This is accomplished regardless of how the video is displayed - Silverlight, Flash - every format is supported. Because video behavior is consistent, capacity planning should become a more straightforward task, as the biggest variable for client resources finds a reduced range of possible values.

Microsoft is publishing some big claims on performance improvement. 90% bandwidth reduction claims should be greeted with skepticism, but other claims of frame rates over the WAN staying around 20 fps look promising. Testing demonstrates (I am working on embedding the video, should have it up shortly) that in a side-by-side comparison of Windows 7 and Windows 8 remote desktops using the same uplink - 2 Mbps throughput, 250ms round-trip latency, and 0.5% random loss - Windows 8 shows significant and noticeable graphical improvement, performing almost indistinguishably from a local display while playing the same Youtube video. Windows 7 struggles - several times a second, the video pauses to re-render a new image, making the display irritating and unwatchable. Keep in mind I have yet to test or see test results with multiple concurrent RDP connections, so at this point I would not recommend capacity planning using those numbers.

More testing is needed - what will be valuable is a greater understanding of the amount of resources (especially throughput) needed per RDP client, reliable maximum client per server numbers, and any additional provisos for virtual environments. If your projects are graphically intensive or involve unique image, audio or video handling, then running a few of your own stress tests is highly recommended.

When performing your own tests, note that WMR is still used for LAN connections in Windows Server 2012. Whether you are on a LAN or WAN is determined by latency - if your connection is under 30ms of latency, WMR is used; over 30ms, RemoteFX is used. There are a lot of ways to control latency for testing - I am partial to NIST Net, Cisco's recommended WAN emulation software. Although NIST Net is Linux based, full installation media with detailed instructions is available (so you don't need to be an expert Linux administrator to get it working). That said, there are Windows-based WAN emulators too - WANem should do the trick as well, and iperf (or JPerf, its Java frontend) is handy for verifying throughput. Be sure to publish your results! If after testing you would like to share your data with the community, I am happy to publish your results here, or link to findings on your blog or website.
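
The WMR-versus-RemoteFX selection described above reduces to a simple latency check. A trivial sketch of the decision rule as I understand it (the behavior at exactly 30ms is not documented, so the cutoff here is an assumption):

```python
# Sketch of the selection rule described above: connections under 30 ms
# of latency are treated as LAN (WMR); higher-latency links get RemoteFX.
# The behavior at exactly 30 ms is an assumption.
def rdp_media_mode(latency_ms):
    return "WMR" if latency_ms < 30 else "RemoteFX"

print(rdp_media_mode(5))    # LAN-class latency
print(rdp_media_mode(250))  # the WAN test case above
```

Worth keeping in mind when a test rig's emulated latency hovers right around the threshold - results can flip between the two code paths.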

The tests so far I have seen look very promising - hopefully these changes continue to encourage the implementation of virtual desktops, as well as the adoption of Windows 8/2012 itself.
