
Friday, January 8, 2016

Setting a hostname for your Amazon AWS EC2 server running RHEL or CentOS 7

So it turns out that setting your AWS EC2 server's hostname to be persistent across reboots is a surprising pain in the ass, at least with my usual OS of choice - RedHat/CentOS Linux.

If you're like me, setting a hostname is the sort of trivial non-task that at this point you really feel like you don't need to RTFM to figure out. You know about `hostnamectl set-hostname`. You've tried `nmcli general hostname`. You've manually set /etc/hostname. None of it persists past a reboot, which can make life very difficult for those planning to use EC2 for email or dozens of other tasks.

Here's how to do it the right way, the first time. I'll also describe some circumstances in which setting your own hostname will break things, and why it's such a hassle to get this done in AWS in the first place.

Amazon relies on cloud-init to manage a variety of initialization tasks for its cloud servers; cloud-init was originally built to support Ubuntu images, but it is now used across a variety of Amazon-provided distros, including RHEL, CentOS and Amazon Linux. cloud-init is managed through a series of configuration files and modules; you can use them to add SSH keys, set up Chef and Puppet recipes, install SSL certificates, and all sorts of stuff. Think of it as a very fancy kickstart script.
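To give a feel for it, here's a minimal sketch of a cloud-config user-data file (the directives are standard cloud-init keys; the package choice and the SSH key are hypothetical placeholders):

    #cloud-config
    # Run at first boot: refresh package metadata, install Apache,
    # authorize an SSH key, then enable and start the service.
    package_update: true
    packages:
      - httpd
    ssh_authorized_keys:
      - ssh-rsa AAAA...your-public-key... user@example.com
    runcmd:
      - systemctl enable httpd
      - systemctl start httpd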

By default, Amazon resets your server's hostname to the Public DNS entry for the IP address assigned to your server. These default hostnames look something like ec2-111-222-333-444.compute-1.amazonaws.com for an IP address of 111.222.333.444. If you have an Elastic IP address, this hostname can be viewed in your EC2 Console by navigating to Network & Security -> Elastic IPs and checking the "Public DNS" column. Because of this behavior, all of the default methods for assigning a hostname to your server are overridden on reboot, and there is no way to change the hostname through the EC2 Console after your server has been built.
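As an aside, you can see the hostname Amazon has assigned from inside the instance itself by querying the EC2 metadata service:

    [centos@... ~]$ curl http://169.254.169.254/latest/meta-data/public-hostname
    ec2-111-222-333-444.compute-1.amazonaws.com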

Here's the part of the walkthrough where I describe some circumstances where messing with your hostname can break stuff. If you have not assigned at least one Elastic IP address (EIP) to your server, I strongly advise against messing with your server's hostname. Without an EIP, Amazon changes your server's public IP, private IP and hostname to whatever is available at the moment in your region. I haven't tried it, but I strongly suspect that making the changes in this walkthrough without an EIP will either just not work or will break something. There may be circumstances where you'd want to do it anyway; hacks probably exist, but this walkthrough ain't it.

Here's what to do:


Update the /etc/hostname file with your new hostname:
    [centos@... ~]$ sudo vi /etc/hostname
Initially, this file will contain the hostname assigned by Amazon. Delete that value and replace it with your preferred hostname. In vi, you must enter INSERT mode by pressing the i key before you can make changes.
NOTE: the official Amazon walkthrough tells you to add your hostname like this: HOSTNAME=persistent_host_name - that is incorrect. The correct way is to put just the hostname in the file; if you want your hostname to be www.example.com, then the entire contents of /etc/hostname should be www.example.com. The official walkthrough also tells readers to launch the editor using the syntax #vim <filename>; although vim is what ships with RHEL 7 and CentOS 7, the binary is invoked as #vi <filename>.
Save and exit the vi editor: after you've made your changes, press ESCAPE to leave INSERT mode, then press : (colon) to issue a command, type wq, and press Enter to save changes and return to the command prompt.
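If you'd rather skip the editor entirely, the same edit can be done with a one-liner (using www.example.com as the example hostname):
    [centos@... ~]$ echo "www.example.com" | sudo tee /etc/hostname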

Update the /etc/hosts file with the new hostname.
    [centos@... ~]$ sudo vi /etc/hosts
Change the entry beginning with 127.0.0.1 to read as follows:
127.0.0.1 www.example.com localhost.localdomain localhost
Save and exit the vi editor.

Update the /etc/sysconfig/network file.
    [centos@... ~]$ sudo vi /etc/sysconfig/network
Update the /etc/sysconfig/network file with the following values:
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=www.example.com
Save and exit the vi editor.
Change your server's primary cloud-init configuration file
    [centos@... ~]$ sudo vi /etc/cloud/cloud.cfg
Add the following line at the bottom of the file to ensure that the hostname change persists after a reboot.
    preserve_hostname: true
NOTE: At the bottom of /etc/cloud/cloud.cfg, you may find a line that appears to be commented out, like this: # vim:syntax=yaml - the preserve_hostname line must go at the very bottom of the file, even beneath the commented out line, or else it won't work.
Save and exit the vi editor.
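As with /etc/hostname, this can also be done with a single command; note the -a flag so tee appends instead of overwriting the file:
    [centos@... ~]$ echo "preserve_hostname: true" | sudo tee -a /etc/cloud/cloud.cfg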
Run the following command to reboot the instance to pick up the new hostname:
    [centos@... ~]$ sudo reboot 

After you reboot your server, execute the hostname command to check that your changes have stayed put.
    [centos@... ~]$ hostname
The command should return the new hostname:
    [centos@... ~]$ hostname
    www.example.com
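Since we also edited /etc/hosts, it's worth confirming the name resolves locally as well; hostname -f should return the fully qualified name, and getent shows the /etc/hosts mapping:
    [centos@... ~]$ hostname -f
    www.example.com
    [centos@... ~]$ getent hosts www.example.com
    127.0.0.1       www.example.com localhost.localdomain localhost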

And that's about it, sports fans. I ripped off most of this from an Amazon KB article on the topic, with a few updates where the KB had some mistakes. This has been an issue with AWS for a while, and there appears to be a lot of confusion on the internet about how to get this done, so I hope that by making this available, more people will be able to get it resolved without wasting time.

Tuesday, November 25, 2014

How To Find Files Over a Certain Size Using Redhat/CentOS/Fedora Linux

Here is a quick tip for all of those Redhat/CentOS/Fedora users out there. Do you need to find all files over a certain size, either in a specific directory, your current directory, or in your entire computer/server?

No problem, just execute the following:

find / -type f -size +500000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

In the example above, I am looking for all files over 500MB in size (500000k, where k = kilobytes). The "/" in the command indicates the path to search in; by specifying "/" I am searching the entire filesystem. I could easily indicate a specific directory by changing the command as follows:

find /path/to/my/directory -type f -size +500000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'

Alternatively, I could search in my current directory by replacing "/" with "." like so:

find . -type f -size +500000k -exec ls -lh {} \; | awk '{ print $9 ": " $5 }'
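As a side note, the GNU find that ships with these distros also accepts human-friendly size suffixes and has a built-in -printf, which avoids spawning ls for every match (%p is the file path, %s its size in bytes):

find . -type f -size +500M -printf '%p: %s bytes\n'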

Easy!

Friday, September 26, 2014

Patching Your Redhat Server for the Shellshock Vulnerability

Introduction

Alright guys, this is a biggie. Shellshock allows remote code execution and file creation on any server relying on GNU bash, versions 1.14 through 4.3. If you are using Redhat or CentOS and the default shell, your server is vulnerable.

The patching history was sketchy, as well. If you patched immediately when the bug came out with the fix for CVE-2014-6271, you are still likely vulnerable (as of right now, 9/26/2014 12:50PM EST). Run the following to apply the patch:

#yum update bash

You need the fix for CVE-2014-7169 if you are using Red Hat Enterprise Linux 5, 6, or 7. Note that the 2014-7169 errata DOES NOT cover the following products, which as of right now are still not fully patched: Shift_JIS, Red Hat Enterprise Linux 4 Extended Life Cycle Support, Red Hat Enterprise Linux 5.6 Long Life, Red Hat Enterprise Linux 5.9 Extended Update Support, Red Hat Enterprise Linux 6.2 Advanced Update Support, and Red Hat Enterprise Linux 6.4 Extended Update Support.

If you applied the CVE-2014-6271 fix and need the rest of the patch, reference RHSA-2014:1306.

Diagnosis / Am I Vulnerable?

Copy, paste and run the following command from your shell prompt:

env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"

If the output of the above command contains a line with only the word "vulnerable" you are still vulnerable. Depending on what version you are using and what patches you have applied, the command output will be different.

A completely vulnerable system will do this:

$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test"
vulnerable
bash: BASH_FUNC_x(): line 0: syntax error near unexpected token `)' 
bash: BASH_FUNC_x(): line 0: `BASH_FUNC_x() () { :;}; echo vulnerable' 
bash: error importing function definition for `BASH_FUNC_x' 
test

Systems patched with CVE-2014-6271 but not CVE-2014-7169 will do this:

$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test" 
bash: warning: x: ignoring function definition attempt 
bash: error importing function definition for `x' 
bash: error importing function definition for `BASH_FUNC_x()' 
test

Systems that used the RHSA-2014:1306 patch do this:

$ env 'x=() { :;}; echo vulnerable' 'BASH_FUNC_x()=() { :;}; echo vulnerable' bash -c "echo test" 
bash: warning: x: ignoring function definition attempt 
bash: error importing function definition for `BASH_FUNC_x' 
test

Next we have to test the file creation aspect of the Shellshock vulnerability. Execute the following command, in its entirety, from your shell:

cd /tmp; rm -f /tmp/echo; env 'x=() { (a)=>\' bash -c "echo date"; cat /tmp/echo

This is what a non-vulnerable system will provide:

$ cd /tmp; rm -f /tmp/echo; env 'x=() { (a)=>\' bash -c "echo date"; cat /tmp/echo 
date 
cat: /tmp/echo: No such file or directory

If you're extra paranoid like me, you may want to double-check that there is no file named "echo" in your /tmp directory. A system that is still vulnerable will respond to the command by printing the date and time according to your system clock and creating the file. The initial output will look similar to this:

$ cd /tmp; rm -f /tmp/echo; env 'x=() { (a)=>\' bash -c "echo date"; cat /tmp/echo 
bash: x: line 1: syntax error near unexpected token `=' 
bash: x: line 1: `' 
bash: error importing function definition for `x' 
Fri Sep 26 11:49:58 GMT 2014
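If you have more than a couple of servers to check, here's a minimal script that wraps both tests above into a pass/fail summary (a sketch; it assumes /tmp is writable and that briefly creating and removing /tmp/echo is acceptable on your system):

#!/bin/bash
# Test 1: function-import code execution (CVE-2014-6271).
# On a vulnerable bash, importing the crafted env var prints "vulnerable" to stdout.
if env 'x=() { :;}; echo vulnerable' bash -c "true" 2>/dev/null | grep -q vulnerable; then
    echo "CVE-2014-6271: VULNERABLE"
else
    echo "CVE-2014-6271: patched"
fi

# Test 2: file creation through the parser bug (CVE-2014-7169).
# On a vulnerable bash, the crafted env var causes a file named "echo"
# to be created in the current directory.
rm -f /tmp/echo
(cd /tmp && env 'x=() { (a)=>\' bash -c "echo date" >/dev/null 2>&1)
if [ -e /tmp/echo ]; then
    echo "CVE-2014-7169: VULNERABLE (/tmp/echo was created)"
    rm -f /tmp/echo
else
    echo "CVE-2014-7169: patched"
fi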

Please guys, check your servers and get this wrapped up as quickly as possible. I can't stress enough how dangerous this vulnerability is, particularly given how many administrators allow direct access to their servers through one port or another. Feel free to contact me if you have any additional questions or concerns. I am happy to help.

Sunday, January 13, 2013

File Defragmentation Tools for Windows 2003/2008, Redhat/CentOS and Ubuntu

For managing fragmentation of NTFS (Windows Server 2003/2008, XP, Vista, and Windows 7):

For general disk defragmentation, the following utilities offer a substantial improvement in overall performance and efficacy over native operating system tools:
Auslogics Disk Defrag or Raxco PerfectDisk

For disks unsupported by the above tools, for frequently executed and/or locked files, or when you just need a straightforward command-line utility that can easily be used as part of a shell script:
Contig from the Sysinternals Suite
Contig has been of particular value when managing backup servers - servers storing huge files with substantial writes on a regular basis. Being able to target the backup files directly allows defragmentation to be scheduled per backup job, which eliminates the need for downtime on these systems as part of this sort of disk maintenance. It can also be used for per-file fragmentation analysis and reporting.

For managing fragmentation of ext4 file systems (newer versions of Redhat/CentOS, Ubuntu, Debian, etc):

e4defrag - Linux users (or at least the Linux users I know) have been waiting a long time for an online defragmentation utility. We've all ignored the issue, pretending fragmentation didn't happen on our Linux machines, until a reboot after 2-3 years of uptime and read/writes forced an fsck at the worst possible time.

e2freefrag - Provides online free-space fragmentation analysis and reporting for a given device.
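Typical usage for both looks like this (a sketch assuming /home lives on an ext4 filesystem backed by /dev/sda1):

sudo e4defrag -c /home        # -c only scores current fragmentation; nothing is changed
sudo e4defrag /home           # defragment the files under /home
sudo e2freefrag /dev/sda1     # report free-space fragmentation for the device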

For managing fragmentation of ext3 file systems (older versions of Redhat/CentOS, Ubuntu, Debian, etc):

Good luck! Your options are unfortunately a bit limited.

Many readers may ask: ext3 is a journalled filesystem, why even bother? Primarily, to increase IOPS, which are currently the primary performance bottleneck in terms of price per unit of throughput. Journalled filesystems have seek times just as NTFS does, and reducing those seek times improves performance. Further, unexpected system events can force the operating system to replay the journal; regular maintenance helps ensure that process is timely and that any resulting downtime is minimized. I have often heard it said that this process "often takes only a second" and can therefore be safely disregarded. While I respect everyone's opinion, I have to very urgently disagree. Most of my experience has been in commercial data center environments with several thousand servers, and at scale, the statistically insignificant becomes a regular headache. What happens in the common case is only part of my concern as an administrator; disaster recovery is just as important in my opinion - safeguarding against improbable catastrophic scenarios and reducing their impact has always been part of my agenda.

That said, let's continue: ext3 requires you to unmount your partition to defragment it. IMO, ext3 is still the most widely used Linux filesystem. I highly recommend the e2fsprogs suite, which includes the following tools:

e2fsck - it's just fsck for ext filesystems, not a vulgar typo; performs a filesystem integrity check
mke2fs - creates filesystems
resize2fs - expands and contracts ext2, ext3 and ext4 file systems
tune2fs - modifies file system parameters
dumpe2fs - prints superblock and block group info to standard output or the pipe destination of your choice
debugfs - an interactive ext2/ext3/ext4 filesystem debugger, used to examine and repair filesystem structures directly

For defragmentation, you will typically be using the following:

mount - used to mount and unmount filesystems (also widely known for featuring one of the more chuckle-inducing Linux command invocations when in need of syntax assistance, #man mount)
fsck - File System ChecK. Checks the specified file system for errors. 
[note: modifying /etc/fstab allows you to specify which devices are mounted]
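Putting that together, an offline maintenance pass over an ext3 volume looks something like this (a sketch assuming /dev/sdb1 is an ext3 filesystem normally mounted at /data):

sudo umount /data             # take the filesystem offline
sudo e2fsck -f -D /dev/sdb1   # force a full check; -D reindexes and compacts directories
sudo mount /data              # remount using its /etc/fstab entry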

Some solid tools not included with the OS are:

Shake
defrag

Wednesday, November 28, 2012

Thank You!

When I started this site 6 months ago, I expected it to be a sort of notebook for quick fixes for server and router problems. I had hoped to use this as a place where the solutions to IT brain ticklers could be posted in case I forgot them and needed them later. Because almost all of my friends work in the same industry I do, it quickly became a place where I could refer people I knew who were in a bind, and in return I post fixes they come across.

Other than letting a few close friends and family know about the site, bugging them for feedback and ideas every now and again, and sending out the occasional Twitter/LinkedIn post, I haven't really told anyone about it. I certainly haven't marketed the site or engaged in any sort of 'search engine optimization'.

That's why I don't quite know what to make of the sheer number of visitors to the site. At this point, the count is in the tens of thousands, with traffic doubling every single month. Here is what the traffic looks like in a graph:
              [Graph of monthly visitor traffic] Behold!

Most visitors to the site stick around for a few minutes (long enough to find the fix to their problem) and quite a few are now regular returning readers. I'm really thrilled to know that so many of you are finding value in the information contained on the website. If I can help just a few people fix a problem that has been keeping their servers down or unstable, then this site has been a complete success. Please feel free to repost any of the information you see here in whatever format you like. The point is to make the Internet a better place - safer, easier to use and more reliable - which is an ongoing collaborative effort between millions of people. As much as I appreciate all of the links that readers have posted, getting the data out there and learning new things about the tools available to us is most important.

So a big Thank You to everyone who has visited the site or posted links or commented or emailed. I will do my best to keep providing new content and solutions that continue to be helpful.

In the spirit of posting content here that will help readers the most, I would like to make a small request. Is there a particular problem or error you are encountering but can't figure out? Have you read an article here but wished that more information was included, or had some ideas on how to expand the article to make it more useful? Please feel free to contact me directly, and so long as the request fits into our focus here - errors related to server, network and database administration - I would be happy to provide that fix as a new article. One thing I should point out - I have been working on a ton of Microsoft projects lately, but by no means is the focus limited to Microsoft. Need help with CentOS, Debian, BSD, Cisco IOS or Juniper JunOS? You're covered here - we just haven't had time to publish on those topics in as much depth as they deserve. Your request would be a great excuse to get it done!

My contact info is available on the other side of this link. Unless you request otherwise, your name and contact information will be kept strictly confidential and it should go without saying that there are no stupid questions - you won't get treated like the Saturday Night Live IT guy here.
