Showing posts with label Amazon.

Thursday, January 14, 2016

Bash script to email new S3 bucket files as compressed attachments (UPDATED)

I've written a simple bash script that checks for new files in an AWS S3 bucket and emails any that it finds as a compressed (tar.gz) attachment - you can find it at my GitHub account under the name "S3-Filer-Mailer". I built it as a supplement for a contact form that relies on S3 as a back-end, rather than a PHP mailer or database. Using S3 for contact forms is attractive because it is so unattractive to spammers. There is no way to corrupt this sort of setup for spamming or to get at a database through the form, because it isn't connected to one.
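The core of the approach is simple enough to sketch in a few lines of bash. This is a minimal illustration of the idea, not the repo code - the bucket, recipient and paths below are placeholders:

#!/bin/bash
# Minimal sketch: find today's files in a bucket, bundle them, mail them
BUCKET="s3://your-bucket"       # placeholder
RECIPIENT="you@example.com"     # placeholder
TODAY=$(date +%Y%m%d)
WORKDIR=$(mktemp -d)

# List the bucket recursively and keep only files with today's date in the name
aws s3 ls ${BUCKET} --recursive | grep ${TODAY} | awk '{print $4}' | while read -r key; do
    aws s3 cp "${BUCKET}/${key}" "${WORKDIR}/"
done

# Bundle and mail anything we found as a tar.gz attachment
if [ -n "$(ls -A ${WORKDIR})" ]; then
    tar -czf /tmp/s3-new-files.tar.gz -C "${WORKDIR}" .
    echo "New S3 files attached." | mailx -s "New S3 files ${TODAY}" -a /tmp/s3-new-files.tar.gz "${RECIPIENT}"
fi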

Why not use Amazon's Simple Notification Service (SNS)? For one, AWS charges more for SNS than it does for S3 queries and downloads. For another, if this sort of functionality is available through SNS, it is not clearly documented.

Getting back to the topic of security, the script establishes two network connections - one to S3 to retrieve the files, the other to send the email. The S3 connection is encrypted using TLS. As time permits, I'm going to add an extra pipe through gpg2 to encrypt the attachments themselves and close the loop - or you can do it yourself by adding a line like gpg -e -r Name foo.txt, where Name is the name you used while generating the public key you wish to encrypt the file with. Adding encryption support as a command line option is easy, but I want to add it as part of more general sanity-checking of input.
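For the do-it-yourself route, the change amounts to a couple of lines. A sketch, assuming the script has already produced an archive and with "Name" standing in for your key's owner:

# Encrypt the archive to the "Name" public key before attaching it
gpg -e -r Name s3-new-files.tar.gz
# This produces s3-new-files.tar.gz.gpg; attach that instead of the plaintext archive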

The script was built and tested on RHEL, but it should work in any Linux that supports bash. This is a pre-pre-alpha version, so no complaining. The obvious and immediate functional problem ATM is that the script assumes that files created today are exactly those with today's date in their filename (and the date string has to be in YYYYMMDD format). When my copious spare time allows, I will add an option to filter results via regex; for now, users can do this fairly simply by piping an additional grep command between grep ${TODAY} and > ${FILE} on line 16 of S3-Filer-Mailer.sh.
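Something along these lines - the listing command and the regex here are placeholders; only the grep ${TODAY} and > ${FILE} pieces come from the script:

# Original shape of line 16:  ... | grep ${TODAY} > ${FILE}
# With an extra filter spliced in:
aws s3 ls ${BUCKET} --recursive | grep ${TODAY} | grep -E 'your-pattern' > ${FILE}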

The project consists of two files, an executable (S3-Filer-Mailer.sh) and a configuration file (S3-Filer-Mailer.conf). To get things working, move both files to a computer running Linux and modify the S3-Filer-Mailer.conf settings; that is where you will specify your email address and your S3 bucket. You can also limit the script to a subdirectory of your bucket in the conf file. The script is recursive, so if you specify the root directory of your bucket it will check every subdirectory. For the time being, that is the only way to specify multiple subdirectories; similarly, disabling recursion requires modifying the executable.
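For illustration, the conf file boils down to a handful of shell variables along these lines (the variable names here are hypothetical - check S3-Filer-Mailer.conf itself for the real ones):

# S3-Filer-Mailer.conf - illustrative variable names, not necessarily the real ones
RECIPIENT="you@example.com"     # where new files get mailed
BUCKET="s3://your-bucket"       # the bucket to watch
SUBDIR="forms/"                 # optional: limit the check to one subdirectory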

Also, dependencies. There are some. Only one of them should take more than five seconds to install: the AWS Command Line Interface. You will need Python for that if you don't already have it. On the bright side, if you want to do cool stuff with AWS and you are using Linux, you should be happy to drag more crap to a CLI, right? The only other dependency is mailx.
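On RHEL/CentOS the whole dependency dance looks something like this, assuming pip is already available:

# Install the AWS CLI and give it your keys
sudo pip install awscli
aws configure
# mailx ships in the base repos
sudo yum install mailx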

UPDATE: I've moved this from a gist to a full-fledged Github repo, and I've made a few updates that make this script significantly less lame.

The earliest version of this required sharutils to uuencode attachments, but that is no longer necessary. Relying entirely on mailx encoding also resolved an ongoing issue in which Mozilla Thunderbird did not properly recognize attachments.

Variables that need to be changed in order for the script to function have been placed into a separate .conf file.

Friday, January 8, 2016

Setting a hostname for your Amazon AWS EC2 server running RHEL or CentOS 7

So it turns out that setting your AWS EC2 server's hostname to be persistent across reboots is a surprising pain in the ass, at least with my usual OS of choice - RedHat/CentOS Linux.

If you're like me, setting a hostname is the sort of trivial non-task that at this point you really feel like you don't need to RTFM to figure out. You know about `hostnamectl set-hostname`. You've tried `nmcli general hostname`. You've manually set /etc/hostname. None of it persists past a reboot. Which can make life very difficult for those planning to use EC2 for email or dozens of other tasks.

Here's how to do it the right way, the first time. I'll also describe some circumstances in which setting your own hostname will break things, and why it's such a hassle to get this done in AWS in the first place.

Amazon relies on cloud-init to manage a variety of initialization tasks for its cloud servers; cloud-init was originally built to support Ubuntu images, but it is now used across Amazon's distros, including RHEL, CentOS and "Amazon Linux". cloud-init is managed through a series of configuration files and modules; you can use them to add SSH keys, set up Chef & Puppet recipes, install SSL certificates, and all sorts of stuff. Think of it as a very fancy kickstart script.

By default, Amazon resets your server's hostname to the Public DNS entry for the IP address assigned to your server. These default hostnames look something like this: ec2-203-0-113-25.compute-1.amazonaws.com for the IP address 203.0.113.25. If you have an Elastic IP Address, this hostname can be viewed in your EC2 Console by navigating to Network & Security -> Elastic IPs; the hostname appears in the "Public DNS" column. Because of this behavior, all of the default methods for assigning a hostname to your server are overridden on reboot. There is no way to change the hostname through the EC2 Console after your server has been built.

Here's the part of the walkthrough where I describe some circumstances where messing with your hostname can break stuff. If you have not assigned at least one Elastic IP Address (EIP) to your server, I strongly advise against messing with your server's hostname. Without an EIP, Amazon changes your server's public IP, private IP and hostname to whatever is available at the moment in your region. I haven't tried it, but I strongly suspect that making the changes in this walkthrough without an EIP will either just not work or will break something. There may be circumstances where you would want to do it anyway; hacks probably exist, but this walkthrough ain't it.

Here's what to do:


Update the /etc/hostname file with your new hostname:
    [centos@... ~]$ sudo vi /etc/hostname
Initially, this file will contain the hostname assigned by Amazon. Delete this value and replace it with your preferred hostname. With vi, you must enter "INSERT MODE" to make changes to a document by pressing the i key.
NOTE: the official Amazon walkthrough tells you to add your hostname like this: HOSTNAME=persistent_host_name - that is incorrect. The correct way is to just put the hostname itself in there; if you want your hostname to be www.example.com, then the contents of /etc/hostname should be www.example.com. The official walkthrough also tells readers to open the file with #vim <filename>. Although a minimal build of vim is installed by default with RHEL 7 & CentOS 7, it has to be launched as #vi <filename>.
Save and exit the vi editor. After you've made your changes, press ESCAPE to exit INSERT MODE, then press SHIFT and : [colon] simultaneously to issue a command to the vi editor. Type wq, then press Enter to save changes and exit back to the command prompt.

Update the /etc/hosts file with the new hostname.
    [centos@... ~]$ sudo vi /etc/hosts
Change the entry beginning with 127.0.0.1 to read as follows:
127.0.0.1 www.example.com localhost.localdomain localhost
Save and exit the vi editor.

Update the /etc/sysconfig/network file.
    [centos@... ~]$ sudo vi /etc/sysconfig/network
Update the /etc/sysconfig/network file with the following values:
NETWORKING=yes
NETWORKING_IPV6=no
HOSTNAME=www.example.com
Save and exit the vi editor.
Change your server's primary cloud-init configuration file:
    [centos@... ~]$ sudo vi /etc/cloud/cloud.cfg
Add the following string at the bottom of the file to ensure that the hostname change stays after a reboot.
    preserve_hostname: true
NOTE: At the bottom of /etc/cloud/cloud.cfg, you may find a line that appears to be commented out, like this: # vim:syntax=yaml - the preserve_hostname line must go at the very bottom of the file, even beneath the commented out line, or else it won't work.
Save and exit the vi editor.
Run the following command to reboot the instance to pick up the new hostname:
    [centos@... ~]$ sudo reboot 

After you reboot your server, execute the hostname command to check that your changes have stayed put.
    [centos@... ~]$ hostname
The command should return the new hostname:
    [centos@... ~]$ hostname
    www.example.com
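If you would rather script the whole thing than walk through vi four times, the steps above collapse into a few commands. A sketch, run as root, using the same www.example.com placeholder:

#!/bin/bash
# One-shot version of the walkthrough above; www.example.com is a placeholder
NEWHOST="www.example.com"

echo "${NEWHOST}" > /etc/hostname
sed -i "s/^127\.0\.0\.1.*/127.0.0.1 ${NEWHOST} localhost.localdomain localhost/" /etc/hosts
printf 'NETWORKING=yes\nNETWORKING_IPV6=no\nHOSTNAME=%s\n' "${NEWHOST}" > /etc/sysconfig/network
# Keep cloud-init from clobbering the hostname on reboot
echo "preserve_hostname: true" >> /etc/cloud/cloud.cfg
reboot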

And that's about it, sports fans. I ripped off most of this from an Amazon KB article on the topic, with a few updates where the KB had some mistakes. This has been an issue with AWS for a while, and there appears to be a lot of confusion on the internet on how to get this accomplished, so I hope that by making this available more people will be able to get this resolved without wasting time.

Monday, September 28, 2015

EC2 IP aliasing script is now ready for use

About a month and a half ago I grew so frustrated by the boneheaded way that Amazon EC2 handles IP aliasing that I wrote a pretty lengthy post about the problems entailed and included a small program that would fix those problems.

Amazon provides some pretty productive documentation for some types of users. There is help available for you if you are any one of the following:

     - You are willing to pay for a new ENI to support a second IP address
     - You are multihoming / load balancing
     - You want to use "Amazon Linux" and install their ec2-net-utils

But if you just want to add a second IP address to a pre-existing Linux server, you are pretty much screwed. Well, you were screwed. Now you can install my program - aliaser - as a service, and it will route additional IP addresses for you without the need for an extra ENI.

I've uploaded aliaser to GitHub - it includes a shell script and a .service file, as well as some very easy-to-follow instructions for installing the script to run at boot. I've also included a link to instructions on how to get your secondary IP from Amazon, which I went through in my first blog post and which is a prerequisite for installing aliaser.

NOTE: this service is built for Red Hat Enterprise Linux / CentOS 7 using systemd. I haven't tested it with installs using SysV init; the .service file would not work there, obviously, but it could be replaced with a fairly simple init script. I might get around to adding one for init fans, but odds are good that if you are still using SysV init, it's because you are already pretty familiar with writing an init file yourself - and this would be a very simple one.
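For the curious, a systemd unit for this sort of boot-time task is tiny. The unit below is an illustrative sketch rather than the file in the repo - the script path and unit name are placeholders:

# /etc/systemd/system/aliaser.service (illustrative sketch; see the repo for the real unit)
[Unit]
Description=Add secondary IP addresses at boot
After=network.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/local/bin/aliaser.sh

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable aliaser.service and it will run once per boot, after the network comes up.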

I also haven't tested aliaser with any releases other than 7.1 - so buyer beware. It would be cool to get something working for Gentoo and other operating systems. 

Anyone is welcome to use aliaser for any purpose. You're welcome to add it into other software, yadda yadda yadda. If it helps another admin out of a bind, I would be happy :)

Tuesday, September 15, 2015

An IRS tax refund phishing scam illustrates the widespread failure of hosting and antivirus providers' security measures

Scams focused on stealing tax refunds remain highly profitable, despite the fact that they have been well known and understood by security professionals and the general public for years. A variety of distribution methods are used, with the common threads being the use of IRS logos and bureaucratic-sounding language to convince users to click a link, download and execute a file and/or send personally identifying information like a Social Security number. A recent example of one such scam that I came across is a damning illustration of the failure of online service providers to protect users from obvious and simple malware distribution methods.

In the example I wish to discuss today, the distribution method was a spammed email that, on a small ISP's installation of SpamAssassin (note: I am not an admin or employee of this system; I'm a customer), received an X-Spam-Status score of 5.3 after being flagged with the following variables:

X-Spam-Status: No, score=5.3 required=10.0 tests=AM_TRUNCATED,CK_419SIZE,
 CK_KARD_SIZE,ENV_FROM_DIFF,ENV_FROM_DIFF0,FROM_SECURITY,HAS_REPLY_TO,
 HEADER_FROM_DIFFERENT_DOMAINS,JUNKE_IXHASH,LINK_NR_TOP,MAILPHISH_REPLYTO,
 PSTOCK_PART,TO_NOREAL,XPRIO,ZIP_ATTACH shortcircuit=no  
        autolearn=disabled version=3.4.0 

While the default SpamAssassin threshold for marking a message as spam is 5.0, few admins leave this default value; SpamAssassin itself recommends that admins of multi-user mail servers use a threshold of 8 to 10. I don't have this ISP's SpamAssassin configuration, and it has obviously been customized. My point here isn't to take issue with SpamAssassin, which I have used for many years, but to demonstrate how this message made its way to mailboxes through pretty solid security software despite these headers:

From: "Internal Revenue Service" <office@irs.gov> 
Reply-To: "Internal Revenue Service" <office@irs.gov>  
Return-Path: <servers@abitindia.com>
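As an aside, that threshold is a one-line setting. On a stock install it lives in SpamAssassin's site-wide config (commonly /etc/mail/spamassassin/local.cf), and raising it to the recommended range looks like this:

# /etc/mail/spamassassin/local.cf
# Mark messages as spam at a score of 8.0 instead of the default 5.0
required_score 8.0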

Here's another depressing bonus. In addition to SpamAssassin, the recipient mail server had clamav installed. The message had a .ZIP file attachment, and the mail server's clamav install marked it as clean:

X-Virus-Scanned: clamav-milter 0.98.7 at mx1.riseup.net
X-Virus-Status: Clean


The attachment does in fact contain a JavaScript nasty. And clamav is not alone in its failure to pick up the file: according to VirusTotal, 31 out of 56 AV platforms failed to detect it - including Symantec, TrendMicro, Panda, Malwarebytes, Avast and Avira. In defense of these AV heavyweights, the file used a single basic obfuscation function to disguise its purpose - which at the moment is apparently enough to fool these AV packages.


One round through Einar Lielmanis' JS Beautifier later, and the script's purpose becomes clear.


The script creates an EXE file in the %TEMP% directory - usually something like C:\Users\UserName\AppData\Local\Temp - named with a random string, and fills it with a bunch of garbage that it retrieves from one of the three domains listed: dickinsonwrestlingclub.com, syscomm.smartlanka.net or les-eglantiers.fr.

There are a number of domains and hosts associated with this scam.



Malware domains

Domain                     | IP            | Host                                           | Registrant                        | Contact                   | DNS IPs
dickinsonwrestlingclub.com | 72.20.64.58   | Consolidated Telcom                            | Perfect Privacy, LLC              | N/A                       | 72.20.64.11, 72.20.64.12
syscomm.smartlanka.net     | 69.89.31.73   | box273.bluehost.com / Bluehost / Unified Layer | Dilhan Seneviratne                | prabhath247@gmail.com     | 74.220.195.31, 69.89.16.4
les-eglantiers.fr          | 76.74.242.190 | hp92.hostpapa.com / Peer 1 Network / Cogeco    | John Huisman / Camping Beau Rivag | huisman.huisman@orange.fr | 69.90.36.133, 204.15.193.53



Spam domains

Domain                 | IP             | Host          | Email Provider | Contact                | DNS IPs
abitindia.com          | 54.165.102.41  | Amazon EC2    | Gmail          | accounts@abitindia.com | 50.23.136.229, 50.23.75.96, 162.251.82.118, 184.173.150.57
mail.netspaceindia.com | 74.54.133.186  | The Planet    | N/A            | help@netspaceindia.com | 205.251.196.41, 205.251.192.135, 205.251.199.124, 205.251.195.214
netspaceindia.com      | 104.131.68.147 | Digital Ocean | N/A            | help@netspaceindia.com | 205.251.196.41, 205.251.192.135, 205.251.199.124, 205.251.195.214



Taking a look at the hosts involved in this scam provides even further disappointment. abitindia.com, whose email is managed by Gmail, provides the return-path for the spam messages but not the reply-to. Replies, incredibly, go directly to the IRS support email address. The return-path is commonly forged so that backscatter goes to some random sucker; in this case, though, abitindia.com is affiliated with the sender domain netspaceindia.com:

Domain Name: ABITINDIA.COM
Updated Date: 2014-11-24T05:21:07Z
Creation Date: 2006-11-23T19:31:19Z
Registrar Registration Expiration Date: 2015-11-23T19:31:19Z
Registrar: PDR Ltd. d/b/a PublicDomainRegistry.com
Registrar IANA ID: 303
Registrant Name: Netspaceindia
Registrant Organization: Netspaceindia
Registrant Street: Hall no 3, Wing B, Parshuram apt Above Woodlands Showroom College Road Nashik
Registrant City: Nashik
Registrant State/Province: Maharashtra
Registrant Postal Code: 422005
Registrant Country: IN
Registrant Phone: +91.9975444464
Registrant Email: accounts@abitindia.com
Name Server: dns1.netspaceindia.com
Name Server: dns2.netspaceindia.com
Name Server: dns3.netspaceindia.com
Name Server: dns4.netspaceindia.com


In other words, in many circumstances backscatter recipients are innocent victims. That is not the case here - the sender is managing the backscatter recipient address, likely to keep their mailing lists updated. As such, Google could play a role in putting a stop to this scam - a review of the backscatter would make the relationship between sender and backscatter recipient obvious, and in an ideal world would precipitate the suspension of the Google Apps account for "abitindia.com".

To be fair, Google's responsibility here is minimal - particularly when compared to the role that every other hosting provider plays in this. The Planet and Digital Ocean are providing the infrastructure for the spam campaign, while Bluehost, Cogeco and Consolidated Telcom are providing the infrastructure for hosting the malware. It's likely that the accounts with these providers were created using fraudulent or stolen payment information, or that legitimate accounts were compromised. This sort of thing is an everyday occurrence for hosting providers; for providers who do not invest in abuse response, these types of scams can use the same accounts with the same hosting providers for months if not years. When I come across this sort of scam, I do my best to inform the hosting providers involved using the abuse contact information that is required to be associated with IP/DNS registrations, along with enough evidence for the provider to confirm I'm not a nut. It is unusual to receive a response and even more unusual to receive a non-automated response. It is just as unusual for hosting provider staff to review their abuse@ contacts, let alone resolve the issues they receive.

Hemming and hawing over the need for state intervention to prevent "cyber-attacks" (vomit) and scams like the ones described here is all over the place. Many of those who support such a view justify government intervention by pointing to the incredible sophistication and technical complexity of the scams that plague internet users. However, the overwhelming majority of the scams I have encountered over the course of my career involved well-known techniques and software. There is significant room for improving security just by applying what we already know: like how to prevent (or rapidly stop) a 30-year-old scam using 20-year-old spam techniques to circulate 10-year-old malware.

Tuesday, August 11, 2015

Assigning multiple IP addresses to a single Amazon EC2 instance on a single ENI

UPDATE March 1st, 2017: I'm glad to see that people are finding this helpful, and thanks to everyone who has contacted me here or via email. Just to be clear, though, the script on GitHub works much better than what I describe here in this post. The idea of this post was to describe the basics of getting IP aliasing working in EC2 without using Amazon's weirdo Linux distro, and I wrote it a while before I posted the script to GitHub. If you want functional code with step-by-step instructions, go to the aliaser GitHub repo. I just don't have the time to rewrite the post each time I (or someone else) update the script. Also, if you have feature requests or feedback, it will be easier for me to get back to you on GitHub than here ... especially if you have something specific you want added or that doesn't work.

Also, just FYI, I added a systemd .service file to the script in the aliaser GitHub repo a year ago. IIRC it's LSB-compatible, so it should work in RedHat/CentOS & Debian/Ubuntu, but I've only tested it using CentOS ATM. I'm using Debian a lot more now than I was a year ago, so I should be able to test it out using Deb soon.

For those who are still using init.d for whatever reason, drop the .service file & use either `chkconfig --add` (for RedHat) or `update-rc.d` (for Debian). I know originally in this post I said I was going to be Mr Helpful with this kind of thing, but I don't have a ton of free time ATM, and there are mountains of documentation on how to run a script at boot time with init. I don't have any problem with init, I just haven't used it much lately.

UPDATE: I'm going to be building out pretty much everything I describe here for fixing IP aliasing, multiple IPs and other networking issues with Amazon EC2 in a program called aliaser, which is available on GitHub. All the functionality described below already works in aliaser; I will be extending Debian/Ubuntu support and systemd service compatibility within the next day or two. If someone really needs this functionality now, let me know and I can fast-track it if you're nice.

There are many ways to add additional IP addresses to EC2 instances in support of various types of projects, and the documentation is pretty good if you want to add additional Elastic Network Interfaces (ENIs), if you are using an Amazon Linux AMI that provides support for ec2-net-utils, or if you are planning on multihoming/load balancing.

I recently needed to do something much simpler than is typically provided for in the documentation. I had a single Amazon EC2 instance running Red Hat Enterprise Linux 7 (RHEL) and I wanted to add a second public IP address to it. Furthermore, I wanted to do it in the most straightforward way: without adding an additional ENI - the equivalent of adding a second physical network interface - which would require me to make additions to my routing table I didn't want to bother with. For test cases, think of adding several SSL certificates or a shared hosting web server - several IPs, one subnet, easy.

Unfortunately things are a bit more complex with EC2.

There are good reasons for this additional complexity. For one, NAT has to be a part of the picture because EC2 depends on it for a whole host of reasons: to keep your IP (almost) immediately consistent across multiple virtual machines, for load balancing, for failover, and for many other reasons too exciting to spend time on here. For my use case, this meant I had to configure a new private IP address to go with my public IP address.

The second reason for the extra complexity is that EC2 depends on DHCP (which, in turn, is required for all the reasons we just briefly outlined). Assigning a static IP address to your primary network interface in EC2 is a big no-no. I haven't checked lately, but if my memory is correct, on a reboot the cloud-init scripts that come pre-packaged in standard Amazon AMIs will blow out static assignments and replace them with DHCP. Needless to say, I didn't really want to get into the nitty-gritty of Amazon's network architecture.

I just wanted a damn second IP address.

Typically with Linux the solution to adding multiple IP addresses to the same interface is quite straightforward, particularly when you are assigning those IP addresses within the same subnet. The method is called IP aliasing, and it involves creating "virtual" network interfaces by adding one or more network initialization scripts. In RHEL, those scripts are stored as a series of files within /etc/sysconfig/network-scripts/ (in Ubuntu they are stored in a single /etc/network/interfaces file - but this walkthrough is focused on RHEL because documentation already exists for Ubuntu).

In this scenario, to add additional IPs to my existing NIC, I would just copy the network-script for my NIC - which by default would be /etc/sysconfig/network-scripts/ifcfg-eth0 - to a new file that appends ":0" to the end of the file name, like this:

#cp /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth0:0

Additional IPs can be added simply by incrementing the last digit (ifcfg-eth0:1, ifcfg-eth0:2, ifcfg-eth0:3, etc).

I would have to make some changes inside the new file itself as well. Let's say this was the content of my eth0 file:

DEVICE=eth0
BOOTPROTO=static
NETMASK=255.255.255.0
TYPE=Ethernet
ONBOOT=yes
HWADDR=00:10:17:24:bf:77
GATEWAY=192.168.1.1
IPADDR=192.168.1.2

Copying it with 'cp' as outlined above would give me a duplicate of this file, but to get it working I would need to change the DEVICE and IPADDR fields to reflect the new IP. The DEVICE field should match the file name of the configuration file, which also indicates the name of the virtual interface - in this example, eth0:0. I also need to change IPADDR to the new IP I want - let's say 192.168.1.3 in this scenario. So this is what the new file would look like:

DEVICE=eth0:0
BOOTPROTO=static
NETMASK=255.255.255.0
TYPE=Ethernet
ONBOOT=yes
HWADDR=00:10:17:24:bf:77
GATEWAY=192.168.1.1
IPADDR=192.168.1.3

Once that is set up, I should test the new interface by trying to activate it individually using the "ifup" command:

#ifup eth0:0

If it works without issue, I'm all set. If errors occur, I should start troubleshooting. Alternatively, restarting the network service would also raise the interface:

#service network restart

or if you are using systemd instead of init:

#systemctl restart network.service

If I didn't want the alias raised automatically in this way, I could set the ONBOOT flag to "no" within the configuration file.

Anyway - this is all pretty easy right? IP aliasing! Anyone can do it!

Here's the problem - none of this works with EC2. It doesn't work with EC2 because, as we mentioned, ENIs must be configured to use DHCP. This is what /etc/sysconfig/network-scripts/ifcfg-eth0 typically looks like in EC2:

DEVICE="eth0"
BOOTPROTO="dhcp"
ONBOOT="yes"
TYPE="Ethernet"
USERCTL="yes"
PEERDNS="yes"
IPV6INIT="no"

Unfortunately, it is impossible to use IP aliasing with a primary network interface that is configured to use DHCP. Here is how CentOS puts it in their documentation:

[Image: excerpt from the CentOS documentation describing the conflict between IP aliases and DHCP]
Trying to configure an Alias will result in an error as soon as the interface attempts to load. So don't even bother.

Before I provide the solution for dealing with this routing issue, let's make sure you can jump through the hoops you need to on Amazon's side.

Log into your EC2 console, and select Instances. Right click the instance you would like to add an IP to, select Networking and then Manage Private IP Addresses.

[Image: the EC2 console's Manage Private IP Addresses menu]


A new menu will pop up. Click Assign New IP and enter the private IP address that you wish to add. This IP should be within the subnet already assigned to your primary interface - which shouldn't be a problem, because by default that subnet is a /20. You will not select your public IP here, so just click Yes, Update once you have entered your private IP.

Next, select Elastic IPs from the Network & Security group in the left menu column. From the Elastic IP menu, select Actions and then Allocate New Address.


Your new public Elastic IP (EIP) will appear in the menu. Highlight the radio button next to the new EIP, go to Actions again and this time select Associate Address to launch the association menu.

It is very important that you select a Network interface and not an instance in this menu. Selecting an instance will replace your pre-existing EIP with your new EIP instead of adding onto it!

If you only have one Instance with one ENI, then only one Network Interface will appear here. If you have multiple Instances, be sure that you select the correct Network Interface. You can see which interfaces are assigned to which instances in the Instance menu.

Once you select a Network Interface you will be able to select the private IP address that you assigned earlier. Once you select it, click the blue Associate button (leave the Reassociation checkbox blank).

With all of that done, you should be able to see the association between your new public and private IPs in the Elastic IPs menu. However, if you try to ping your public IP from out of the network, or even ping the private IP locally from your instance, you will get timeouts. Let's resolve this by returning to the routing issue we discussed earlier.

From your user's home directory, create a file and add the following text using your favorite editor:

#!/bin/bash
#add routes for secondary IP addresses
# Grab the MAC address of eth0 from the ifconfig output
MAC_ADDR=$(ifconfig eth0 | sed -n 's/.*ether \([a-f0-9:]*\).*/\1/p')
# Ask the EC2 instance metadata service for every private IP bound to that MAC
IP=($(curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/$MAC_ADDR/local-ipv4s))
# Skip the first (primary) address and attach each secondary IP to eth0
for ip in ${IP[@]:1}; do
    echo "Adding IP: $ip"
    ip addr add dev eth0 $ip/20
done

This script was modified from a script prepared by Jurian for Ubuntu in order to work on Red Hat systems. It is easily adapted to other Linux flavors and non-default networking configurations: in the MAC_ADDR line, replace "ifconfig" with the distro-appropriate command for finding an interface's MAC address, "eth0" with the name of the primary interface (for example, eth1), and "ether" with whatever label that command uses for the MAC address field.

For use cases that involve an interface other than eth0, or a private subnet allocation other than the EC2 default /20, the second-to-last line will need to be changed as well:

ip addr add dev eth0 $ip/20

For example, let's say I am using an Ubuntu system and wish to add a secondary IP address to an interface named eth2, and I am using a non-default private subnet that is a single class C (/24). I would use this script instead:

#!/bin/bash
#add routes for secondary IP addresses
MAC_ADDR=$(ifconfig eth2 | sed -n 's/.*HWaddr \([a-f0-9:]*\).*/\1/p')
IP=($(curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/$MAC_ADDR/local-ipv4s))
for ip in ${IP[@]:1}; do
  echo "Adding IP: $ip"
  ip addr add dev eth2 $ip/24
done

Notice the curl command pulling from the 169.* IP address? That is how we call EC2 Instance Metadata and User Data. Using the Instance Data API is incredibly useful for being able to pull information about your instance in situations like ours where statically storing that data is either impossible or inconvenient.
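If you haven't poked around the metadata service before, it is worth a minute of exploration. A few illustrative queries (these paths exist on standard EC2 instances; browse the tree from the root to see everything available):

# List the top of the metadata tree
curl http://169.254.169.254/latest/meta-data/
# The instance's primary private and public IPs
curl http://169.254.169.254/latest/meta-data/local-ipv4
curl http://169.254.169.254/latest/meta-data/public-ipv4
# Per-interface data, keyed by MAC address - this is what the script above walks
curl http://169.254.169.254/latest/meta-data/network/interfaces/macs/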

Save the file and add an executable bit. I named my file "ip-script.bash", so to add the executable bit I ran this command from the same directory as the script:

#chmod +x ip-script.bash

I can then execute the script in order to complete the routing configuration for the new secondary IP (NOTE this script will handle multiple secondary IP addresses):

# ./ip-script.bash
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100    25  100    25    0     0  24582      0 --:--:-- --:--:-- --:--:-- 25000
Adding IP: 192.168.1.3

If successful, you should now be able to ping the public IP from outside your Instance and receive a response (provided your firewall and EC2 Security Group policies allow ICMP traffic from the source of the ping). Alternatively, you could use the following commands to confirm everything is as it should be.

This command will return the public IP bound to the private IP provided in the privateipaddress field below. If this command times out or produces an error, something has gone wrong:

# curl --interface privateipaddress ifconfig.me

You will also want to check your routing table:

# route -n

If this method has been performed exactly as described in this walkthrough - on an instance with a single ENI and a single private IP subnet allocation, but with multiple public and private IPs - then your routing table should look something like this:

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 eth0
192.168.1.0     0.0.0.0         255.255.240.0   U     0      0        0 eth0

One of the more common mistakes is to use a different netmask when assigning the secondary private IP address, even though that secondary private IP is part of the existing private IP allocation. When that happens, the routing table will look something like this (in this example, the user put a /24 netmask on the secondary private IP instead of the correct /20):

# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         192.168.1.1     0.0.0.0         UG    100    0        0 eth0
192.168.1.0     0.0.0.0         255.255.240.0   U     0      0        0 eth0
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 eth0
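The fix is to delete the mis-masked address and re-add it with the netmask of the enclosing allocation. A sketch, using a hypothetical secondary address of 192.168.2.5:

# 192.168.2.5 is a hypothetical secondary private IP that was added with a /24 by mistake
ip addr del 192.168.2.5/24 dev eth0
ip addr add 192.168.2.5/20 dev eth0

Run the route -n check again afterwards and the spurious 192.168.2.0/24 route should be gone.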

Thursday, May 7, 2015

Amazon Finally Ditches SSLv3

Amazon S3 subscribers recently received a form letter like this one:

Dear AWS Customer,

This message explains some security improvements in our services. Your security is important to us. Please review the entire message carefully to determine whether your use of the services will be affected, and if so what you need to do.

As of 12:00 AM PDT May 20, 2015, AWS will discontinue support of SSLv3 for securing connections to S3 buckets. Security research published late last year demonstrated that SSLv3 contained weaknesses in its ability to protect and secure communications. These weaknesses have been addressed in Transport Layer Security (TLS), which is the replacement for SSL. Consistent with our top priority to protect AWS customers, AWS will only support versions of the more modern TLS rather than SSLv3.

You are receiving this email because some of your users are accessing Amazon S3 using a browser configured to use SSLv3, or some of your existing applications that use Amazon S3 are configured to use SSLv3. These requests will fail once AWS disables support for SSLv3 for the Amazon S3 service.

The following bucket(s) are currently accepting requests from clients (e.g. mobile devices, browsers, and applications) that specify SSLv3 to connect to Amazon S3 HTTPS endpoints.

XXXXXXXX.XXXXXXX.XXXXXXX : XXXXXX-XXXXXX-XXXXX

For your applications to continue running on Amazon S3, your end users need to access S3 from clients configured to use TLS. As any necessary changes would need to be made in your application, we recommend that you review your applications that are accessing the specified S3 buckets to determine what changes may be required. If you need assistance (e.g. to help identify clients connecting to S3 using SSLv3), please contact our AWS Technical Support or AWS Customer Service.

For further reading on SSLv3 security concerns and why it is important to disable support for this nearly 18 year old protocol, we suggest the following articles:

https://www.us-cert.gov/ncas/alerts/TA14-290A
https://blog.mozilla.org/security/2014/10/14/the-poodle-attack-and-the-end-of-ssl-3-0/
http://disablessl3.com/#why

Thank you for your prompt attention.

Sincerely,
The Amazon Web Services Team

Amazon Web Services, Inc. is a subsidiary of Amazon.com, Inc. Amazon.com is a registered trademark of Amazon.com, Inc. This message was produced and distributed by Amazon Web Services Inc., 410 Terry Ave. North, Seattle, WA 98109-5210

Sunday, February 1, 2015

Uploading HTML forms to Amazon S3 using PHP

Dynamically uploading information to S3 can be a bit challenging to do initially, particularly in PHP where a lot of the documentation is either really new or really old.

Amazon has a PHP SDK, which is available as either a .phar file or can be installed using Composer. That's cool for building a new project, but what if you have a pre-existing project or form and just want to be able to dump the text output to S3?

I've put together some code at Github that will take care of that issue. The only requirement is PHP and an Amazon S3 account.

Download or clone the files here: https://github.com/jwieder/s3-http-php-form

Your Amazon access keys and other configuration are stored in a single configuration file. Just fill your login info into the configuration file and include the PHP form where you need it, as outlined in the README.md file, and you should be all set!

Saturday, October 4, 2014

Amazon EC2 Connectivity Failures - 10/4/2014

I have seen indications of periodic connectivity issues impacting Amazon's EC2 Cloud Computing architecture. Personally, I have encountered issues with connecting to Amazon's Yum repository hosts from EC2 instances.

Amazon has published outage notifications describing brief connectivity and DNS failures impacting the US-EAST-1 region between October 2nd and October 4th. However, my EC2 instances are in the US-WEST-2 region, and I am experiencing issues today, October 4th 2014, at approximately 11:30 AM EST.

For example:

# yum provides seinfo
Loaded plugins: amazon-id, rhui-lb

epel/x86_64/filelists_db                                        | 4.7 MB  00:00:01
rhui-REGION-rhel-server-optional/7Server/x86_64/filelists_db    | 3.2 MB  00:00:00

https://rhui2-cds01.us-west-2.aws.ce.redhat.com/pulp/repos//content/dist/rhel/rhui/server/7/7Server/x86_64/os/repodata/e5ee2c196ee6525998525a2bf74bb40608dce199-filelists.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found

Trying other mirror.

https://rhui2-cds02.us-west-2.aws.ce.redhat.com/pulp/repos//content/dist/rhel/rhui/server/7/7Server/x86_64/os/repodata/e5ee2c196ee6525998525a2bf74bb40608dce199-filelists.sqlite.bz2: [Errno 14] HTTPS Error 404 - Not Found

Then, 5 minutes later, with absolutely no changes to my server's network or yum configuration:

# host rhui2-cds01.us-west-2.aws.ce.redhat.com
rhui2-cds01.us-west-2.aws.ce.redhat.com has address 50.112.120.15

# yum provides seinfo
Loaded plugins: amazon-id, rhui-lb
setools-console-3.3.7-46.el7.x86_64 : Policy analysis command-line tools for SELinux
Repo        : rhui-REGION-rhel-server-releases
Matched from:
Filename    : /usr/bin/seinfo

I find this extremely frustrating. With my small presence on EC2, I have no ability to troubleshoot what is causing these issues. However, I can confirm that there *are* issues as of today, that Amazon has been aware of connectivity and DNS failures for at least two days, and that Amazon is currently claiming that there are no issues.

This is quickly becoming the industry-standard mode of behavior for Cloud computing providers: wild-eyed, outlandish promises of perfect availability followed by regular connectivity failures that are haphazardly brushed under the rug.

Customers are owed transparency. I remain convinced that the only way to accomplish reliability is by "doing it yourself" and colocating servers in multiple datacenters, implementing and managing redundancy directly. The issue is too important to trust to hosting providers who have consistently demonstrated dishonesty.

See for yourself the almost invisible notice Amazon has posted to customers on their Service Health Dashboard:

[Image: Amazon's Service Health Dashboard, with the connectivity failure notice buried. Caption: Downtime? What Downtime?]
