Showing posts with label AWS. Show all posts

Thursday, February 11, 2016

Recovering network access to EC2 instances

So you've screwed something up. You made a typo in your sshd_config file. You added a firewall rule, or a route, or some other thing, and lost your network access to your EC2 instance. And of course whatever you broke, you broke permanently - you wrote your firewall rules directly to /etc/sysconfig/iptables, you made your goofy change to /etc/sysconfig/network-scripts/whatever-interface; so rebooting won't make a damn bit of difference. You read the warnings, you know you shouldn't have. But you did anyway.

Oh, and you don't have any backups. Or you have backups from three months ago. Restoring from your crappy backups would mean hours or days of non-stop work and significant downtime. Or Amazon or whatever other company you're using for backups actually broke your backups, lost your backups, or never actually provided you with the backups you paid for.

Don't panic. You've got this. You remember that Amazon has some sort of Java-based something or other. It's got to be a virtual KVM. You log in to the web console and find out that the Java-based something is a completely worthless SSH client, and not a KVM at all.

You are going to be fired.

Unless you found this post. I will save your backside, sir and/or ma'am.

Well, I will save your backside provided your environment meets a few requirements. I will make them clear up front so that if you don't meet them, you can go find another solution ASAP. Here they are:

    - This is for Linux. If you are using Windows you are fired. Just kidding! You can mount Windows volumes in Linux, but reconfiguring network settings this way is much more complicated, since those settings are often stored in the registry rather than in flat files. This walkthrough only covers the volume management side of things; if you're dealing with Windows, consider mounting the volume on a Linux VM and then using a tool like this one to modify the registry on the broken volume.
    - This only works for EBS volumes. There may be a way to do this with non-EBS instance store volumes, but I haven't had to worry about it, and if there is a way it will be much more complex than this.
    - I'm going to take for granted that you know how to start, stop, and deploy an EC2 instance. I'm assuming this because you had to have done these things to make the instance you just broke. If you broke somebody else's instance and you don't know how to even restart the damn thing, well, first off - lol. And second, you're fired.
    - You need to either already have, or be able to provision, a second Linux EBS-backed EC2 instance in the same availability zone as your broken server.

Those should be the only requirements. It won't matter if your broken volume is magnetic or SSD. Here is what to do:

1. For this to work you need a second EBS-backed EC2 instance running linux, other than the broken one, within the same region and availability zone (i.e. us-west-1a) as the broken server. It doesn't need to be the same "flavor" of Linux, but it makes things a lot easier if the kernel version is pretty close to one another. If you do not already have one deployed, create one now. Make a note of the instance-id of the second server (if you created your instance a while back, the instance-id will look like this: i-123a45fe - if you just created your instance, the id will be longer, 17 characters, like this: i-1234567890abcdef0).

2. From the AWS Management Console, select Instances and then highlight the broken instance. Make a note of the instance-id. Then STOP the instance.
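The same stop can be scripted with the AWS CLI. This is a sketch: it assumes a CLI that is installed and configured with credentials, and the instance-id is hypothetical.

```shell
# Hypothetical instance-id of the broken server; substitute your own.
BROKEN_ID="i-1234567890abcdef0"

# Stop the instance and block until it is fully stopped. Guarded so the
# snippet is a no-op on a machine without the AWS CLI installed.
if command -v aws >/dev/null 2>&1; then
    aws ec2 stop-instances --instance-ids "$BROKEN_ID" || true
    aws ec2 wait instance-stopped --instance-ids "$BROKEN_ID" || true
fi
```

The `wait` subcommand saves you from polling the console to see when the stop has actually completed.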

3. Next, select Volumes. If you haven't already, give the volume of both the broken instance and your second troubleshooting instance a descriptive Name so you can quickly tell them apart. Make a note of the names and volume-id's and which instances they are connected to.



4. Highlight the volume of the BROKEN server, right-click, and select DETACH VOLUME.



5. The detach should be processed quickly, but your browser won't recognize the change right away. Refresh your screen to make sure the volume is detached. Then right-click the detached volume and select ATTACH VOLUME.


This will open a new window asking which instance to attach the volume to and what device label to give the volume on the new server. Select your secondary, working server. It should be fine to leave the default device label, which should be /dev/sdf. The only concern is that you don't want to give the new volume a label that is already assigned. If you only have one EBS volume attached to your server, it will automatically be assigned /dev/sda1. If you've customized volume management for your server, you know these settings; if you haven't, then this walkthrough will assume /dev/sdf is the broken disk's device label on the secondary server.
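Steps 4 and 5 can also be done from the AWS CLI; a sketch with hypothetical volume and instance ids (the `wait volume-available` subcommand replaces the manual browser refresh):

```shell
# Hypothetical ids; substitute the volume-id of the broken server's disk
# and the instance-id of the working secondary server.
VOL_ID="vol-0123456789abcdef0"
HELPER_ID="i-0fedcba9876543210"

if command -v aws >/dev/null 2>&1; then
    aws ec2 detach-volume --volume-id "$VOL_ID" || true
    # Block until the detach has actually completed, then attach the
    # volume to the helper instance as /dev/sdf.
    aws ec2 wait volume-available --volume-ids "$VOL_ID" || true
    aws ec2 attach-volume --volume-id "$VOL_ID" \
        --instance-id "$HELPER_ID" --device /dev/sdf || true
fi
```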


6. SSH to your working secondary server and make a new directory under /. You will be mounting the broken disk to this directory:

    # cd /
    # mkdir broken/

7. Here's where things can get a bit complicated, and where a lot of the walkthroughs available on this subject get things wrong. In Step 5 we assigned the device label /dev/sdf for mounting the broken disk to our secondary server, but it won't show up as /dev/sdf on that server.

You should have two EBS devices attached: /dev/sda1, which is the default volume, and /dev/sdf, which is the broken drive. /dev/sda1 will show up as /dev/xvda1 - the "s" is translated to "xv" to indicate that it is a virtual disk. /dev/sdf will show up as two additional devices: /dev/xvdf and /dev/xvdf1. You will want to use /dev/xvdf1.
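The sdX-to-xvdX rename is just a prefix substitution, and `lsblk` will show you what the kernel actually exposes. A small sketch:

```shell
# The console attaches the volume under the sdX scheme, but the Xen
# virtual-disk driver exposes it as xvdX: a simple prefix substitution.
label="/dev/sdf"
kernel_name=$(echo "$label" | sed 's|/dev/sd|/dev/xvd|')
echo "$kernel_name"    # prints /dev/xvdf

# lsblk lists the devices and partitions the kernel actually sees; on the
# secondary server you would expect xvda/xvda1 plus xvdf/xvdf1. Guarded
# for systems without lsblk.
command -v lsblk >/dev/null 2>&1 && lsblk -o NAME,SIZE,TYPE,MOUNTPOINT || true
```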

Where you go from here depends on the sort of filesystems that are in use. In most instances, the filesystem in use will be XFS. You can check the filesystem by running this command:

    # mount -l |grep xvd
    /dev/xvda1 on / type xfs (rw,relatime,attr2,inode64,noquota)

The filesystem is shown directly after the "type". This is important because attempting to mount the broken volume directly will fail when it uses XFS, like this:

    # mount /dev/xvdf1 /broken/
    mount: /dev/xvdf1 is write-protected, mounting read-only
    mount: unknown filesystem type '(null)'

Even though the error appears to indicate the volume was mounted "read-only", nothing gets mounted - the /broken/ directory will be empty and `mount -l` will not display /dev/xvdf1.

The problem here is that the filesystem must be specified by using the "-t" flag (using -t auto will also fail). Here is the correct command:

    # mount -t xfs /dev/xvdf1 /broken/

If successful, the command will output nothing. You can confirm by checking for content in the /broken/ directory and by running this:

    # mount -l |grep xvdf1
    /dev/xvdf1 on /broken type xfs (rw,relatime,attr2,inode64,noquota)

8. You can now navigate through the /broken/ directory as if it were / on the broken server. You can use /broken/var/log/ to identify errors, and rewrite configuration files like those in /broken/etc/sysconfig/network-scripts/. Be sure to remember to prepend /broken/ when navigating! It's easy to forget where you are and change something on your secondary working server instead, so don't do that, or else ...
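For the sshd_config scenario from the intro, you can syntax-check the repaired file in place before reattaching anything. A sketch, assuming the volume is mounted at /broken (`-t` and `-f` are standard sshd flags; note that test mode also checks the host-key paths the config references, which point at the broken root, so key-path complaints can appear even when the syntax is fixed):

```shell
# Validate the repaired sshd_config without restarting anything: a clean
# exit and no output means the syntax parses. Guarded so the snippet is
# harmless where sshd or the mounted volume is absent.
if command -v sshd >/dev/null 2>&1 && [ -f /broken/etc/ssh/sshd_config ]; then
    sshd -t -f /broken/etc/ssh/sshd_config \
        && echo "sshd_config OK" || echo "sshd_config still has errors"
fi
```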


9. Once you have reversed whatever was broken, unmount the broken server's disk from the secondary server:

    # umount /broken

10. Detach the now-fixed volume from the secondary server.


11. Refresh your window and reattach the volume to the original server.


12. Restart the server and you should now be back in business.

There are so many reasons why this process is a huge pain in the ass compared to a virtual KVM utility. I recently had to perform this procedure to resolve a networking issue on a server where most of the services were still responding - HTTP and HTTPS were fine, but SSH was dead. With a virtual console I could have repaired the issue without any downtime. Using this procedure forced me to bring down the server for 5 minutes or so to perform the repairs. That sucks. And up-to-date image backups would not have made anything better; reimaging the server might have shaved a minute or two off the total downtime required to run this procedure, but there would still be downtime.

I'm not sure why Amazon has declined to implement this sort of feature; Rackspace and others make it available. My guess would be that there are security issues involved, but that's just a guess. In any case, hopefully this walkthrough helps out.

h/t Several of the images here were taken from a walkthrough by Mike Culver. Mike's screenshots were great and spared me having to take my own; unfortunately his walkthrough as currently published in Amazon's tutorials section fails in a variety of cases, including my recent one, which is why I wrote this.

Thursday, January 14, 2016

Bash script to email new S3 bucket files as compressed attachments (UPDATED)

I've written a simple bash script that checks for new files in an AWS S3 bucket and emails any that it finds as a compressed (tar.gz) attachment - you can find it at my Github account under the name "S3-Filer-Mailer". I built it as a supplement for a contact form that relies on S3 as a back-end, rather than a PHP mailer or database. Using S3 for contact forms is attractive because it is so unattractive to spammers. There is no way to corrupt this sort of setup for spamming or to get their hands on a database through the form, because it isn't connected to one.

Why not use Amazon's Simple Notification Service (SNS)? For one, AWS charges more for SNS than it does for S3 queries and downloads. For another, if this sort of functionality is available through SNS, it is not clearly documented.

Getting back to the topic of security, the script establishes two network connections: one to S3 to retrieve the files, the other to send the email. The S3 connection is encrypted using TLS; as time permits, I'm going to add an extra pipe to gpg2 to encrypt the attachments themselves and close the loop - or you can do it yourself by adding a line with gpg -e -r Name foo.txt, where Name is the name you used while generating the public key you wish to use to encrypt the file. Adding encryption support as a command line option is easy, but I want to add it as part of more general sanity-checking of input.

The script was built and tested on RHEL, but it should work on any Linux that supports bash. This is a pre-pre-alpha version, so no complaining. The obvious and immediate functionality problem at the moment is that the script assumes that only files containing today's date in their filename (in YYYYMMDD format) were created today. When my copious spare time allows, I will add an option to filter results via regex; for now users can do this fairly simply by piping an additional grep command between grep ${TODAY} and > ${FILE} on line 16 of S3-Filer-Mailer.sh.
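To illustrate what that extra pipe does (illustrative only: the real pipeline lives on line 16 of S3-Filer-Mailer.sh, and LISTING here is a stand-in for whatever feeds it), narrowing today's matches down to .pdf files looks like this:

```shell
# Build a fake S3 listing containing today's date in two filenames.
TODAY=$(date +%Y%m%d)
LISTING="report-${TODAY}.csv
notes.txt
invoice-${TODAY}.pdf"

# The script's filter keeps names containing today's date; an additional
# grep spliced into the pipeline narrows the match further, e.g. to PDFs.
MATCHES=$(echo "$LISTING" | grep "${TODAY}" | grep '\.pdf$')
echo "$MATCHES"    # prints invoice-<today>.pdf
```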

The script includes two files, an executable file (S3-Filer-Mailer.sh) and a configuration file (S3-Filer-Mailer.conf). To get things working, move both files to a computer running Linux and modify the S3-Filer-Mailer.conf file settings; that is where you will specify your email address and your S3 bucket. You can also limit the script to a subdirectory of your bucket in the conf file. The script is recursive, so if you specify the root directory of your bucket it will check every subdirectory. For the time being, that is the only way to specify multiple subdirectories; similarly disabling recursiveness requires modifying the executable.

Also, dependencies. There are some. Only one of them should take more than 5 seconds to install: the AWS Command Line Interface. You will need Python for that if you don't already have it. On the bright side, if you want to do cool stuff with AWS on Linux, you should be happy to drag more crap to a CLI, right? The only other dependency is mailx.
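On a RHEL-family system, installing the two dependencies looks roughly like this (a sketch; package names vary by distribution, and the CLI still needs `aws configure` run once to supply credentials):

```shell
# mailx comes from the distro repos; the AWS CLI installs via pip.
# Guarded so the snippet is a no-op where the tools are unavailable.
if command -v yum >/dev/null 2>&1; then
    sudo yum install -y mailx python-pip || true
fi
if command -v pip >/dev/null 2>&1; then
    sudo pip install awscli || true
fi
# Then run `aws configure` once to enter your access key, secret and region.
```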

UPDATE: I've moved this from a gist to a full-fledged Github repo, and I've made a few updates that make this script significantly less lame.

The earliest version of this required sharutils to uuencode attachments, but that is no longer necessary. Relying entirely on mailx encoding also resolved an ongoing issue in which Mozilla Thunderbird did not properly recognize attachments.

Variables that need to be changed in order for the script to function have been placed into a separate .CONF file.

Tuesday, September 15, 2015

An IRS tax refund phishing scam illustrates the widespread failure of hosting and antivirus providers' security measures

Scams focused on stealing tax refunds remain highly profitable, despite the fact that they are well known and understood by security professionals and the general public, and have been for years. A variety of distribution methods are used, with the common threads being the use of IRS logos and bureaucratic-sounding language to convince users to click a link, download and execute a file and/or send personally identifying information like a Social Security number. A recent example of one such a scam that I came across is a damning illustration of the failure of online service providers to protect users from obvious and simple malware distribution methods.

In the example I wish to discuss today, the distribution method was a spammed email that on a small ISP's installation of SpamAssassin (note: I am not an admin or employee of this system; I'm a customer) received an X-Spam-Status score of 5.3 after being flagged with the following variables:

X-Spam-Status: No, score=5.3 required=10.0 tests=AM_TRUNCATED,CK_419SIZE,
 CK_KARD_SIZE,ENV_FROM_DIFF,ENV_FROM_DIFF0,FROM_SECURITY,HAS_REPLY_TO,
 HEADER_FROM_DIFFERENT_DOMAINS,JUNKE_IXHASH,LINK_NR_TOP,MAILPHISH_REPLYTO,
 PSTOCK_PART,TO_NOREAL,XPRIO,ZIP_ATTACH shortcircuit=no  
        autolearn=disabled version=3.4.0 

While the default SpamAssassin threshold for marking a message as spam is 5.0, few admins leave this default value. SpamAssassin itself recommends that admins of multi-user mail servers use a threshold of 8 to 10. I don't have this ISP's spamassassin.conf file, and it's obviously been customized. My point here isn't to take issue with SpamAssassin, which I have used for many years, but to demonstrate how this message made its way to mailboxes through pretty solid security software despite these being included in the headers:

From: "Internal Revenue Service" <office@irs.gov> 
Reply-To: "Internal Revenue Service" <office@irs.gov>  
Return-Path: <servers@abitindia.com>
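As an aside, the threshold SpamAssassin compares that score against lives in its site-wide config, typically /etc/mail/spamassassin/local.cf. A fragment raising it into the recommended range would look like this (a sketch, not this ISP's actual config):

```
# /etc/mail/spamassassin/local.cf -- site-wide settings
# required_score is the score at which a message is flagged as spam;
# 5.0 is the shipped default, and 8.0-10.0 is the range SpamAssassin
# recommends for multi-user mail servers.
required_score 8.0
```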

Here's another depressing bonus. In addition to SpamAssassin, the recipient mail server had clamav installed. The message had a .ZIP file attachment, and the mail server's clamav install marked it as clean:

X-Virus-Scanned: clamav-milter 0.98.7 at mx1.riseup.net
X-Virus-Status: Clean


The attachment does in fact contain JavaScript nasty-ware. And clamav is not alone in its failure to pick up the file. According to VirusTotal, 31 out of 56 AV platforms failed to detect this file - including Symantec, TrendMicro, Panda, Malwarebytes, Avast and Avira. In defense of these AV heavyweights, the file used a single basic obfuscation function to disguise its purpose - which at the moment is apparently enough to fool these AV packages.


One round through Einar Lielmanis' JS Beautifier later, and we have this:


The script creates an EXE file in the %TEMP% directory - usually something like C:\Users\UserName\AppData\Local\Temp - that is named some random string, and fills it with a bunch of garbage that it retrieves from one of the three domain names listed: dickinsonwrestlingclub.com, syscomm.smartlanka.net or les-eglantiers.fr.

There are a number of domains and hosts associated with this scam.



Malware domains

- dickinsonwrestlingclub.com: IP 72.20.64.58; Host: Consolidated Telcom; Registrant: Perfect Privacy, LLC; Contact: N/A; DNS IPs: 72.20.64.11, 72.20.64.12
- syscomm.smartlanka.net: IP 69.89.31.73; Host: box273.bluehost.com / Bluehost / Unified Layer; Registrant: Dilhan Seneviratne; Contact: prabhath247@gmail.com; DNS IPs: 74.220.195.31, 69.89.16.4
- les-eglantiers.fr: IP 76.74.242.190; Host: hp92.hostpapa.com / Peer 1 Network / Cogeco; Registrant: John Huisman / Camping Beau Rivag; Contact: huisman.huisman@orange.fr; DNS IPs: 69.90.36.133, 204.15.193.53



Spam domains

- abitindia.com: IP 54.165.102.41; Host: Amazon EC2; Email Provider: Gmail; Contact: accounts@abitindia.com; DNS IPs: 50.23.136.229, 50.23.75.96, 162.251.82.118, 184.173.150.57
- mail.netspaceindia.com: IP 74.54.133.186; Host: The Planet; Email Provider: N/A; Contact: help@netspaceindia.com; DNS IPs: 205.251.196.41, 205.251.192.135, 205.251.199.124, 205.251.195.214
- netspaceindia.com: IP 104.131.68.147; Host: Digital Ocean; Email Provider: N/A; Contact: help@netspaceindia.com; DNS IPs: 205.251.196.41, 205.251.192.135, 205.251.199.124, 205.251.195.214



Taking a look at the hosts involved in this scam provides even further disappointment. abitindia.com, whose email is managed by Gmail, is providing the return-path for the spam messages but not the reply-to. Replies, incredibly, go directly to the real IRS support email address. The return-path is commonly forged so that backscatter goes to some random sucker. In this case, abitindia.com is affiliated with the sender domain netspaceindia.com:

Domain Name: ABITINDIA.COM
Updated Date: 2014-11-24T05:21:07Z
Creation Date: 2006-11-23T19:31:19Z
Registrar Registration Expiration Date: 2015-11-23T19:31:19Z
Registrar: PDR Ltd. d/b/a PublicDomainRegistry.com
Registrar IANA ID: 303
Registrant Name: Netspaceindia
Registrant Organization: Netspaceindia
Registrant Street: Hall no 3, Wing B, Parshuram apt Above Woodlands Showroom College Road Nashik
Registrant City: Nashik
Registrant State/Province: Maharashtra
Registrant Postal Code: 422005
Registrant Country: IN
Registrant Phone: +91.9975444464
Registrant Email: accounts@abitindia.com
Name Server: dns1.netspaceindia.com
Name Server: dns2.netspaceindia.com
Name Server: dns3.netspaceindia.com
Name Server: dns4.netspaceindia.com


In other words, in many circumstances backscatter recipients are innocent victims. That is not the case here - the sender is managing the backscatter recipient address, likely to keep their mailing lists updated. As such, Google could play a role in putting a stop to this scam - a review of the backscatter would make the relationship between sender and backscatter recipient obvious, and in an ideal world would precipitate the suspension of the Google Apps account for "abitindia.com".

To be fair, Google's responsibility here is minimal - particularly when compared to the role that every other hosting provider plays in this. The Planet and Digital Ocean are providing the infrastructure for the spam campaign, while Bluehost, Cogeco and Consolidated Telcom are providing the infrastructure for hosting the malware. It's likely that the accounts with these providers were created using fraudulent/stolen payment information, or that legitimate accounts were compromised. This sort of thing is an everyday occurrence for hosting providers; for providers who do not invest in abuse response, these types of scams can use the same accounts with the same hosting providers for months if not years. When I come across this sort of scam, I do my best to inform the hosting providers involved using the abuse contact information that is required to be associated with IP/DNS registrations, along with enough evidence for the provider to confirm I'm not a nut. It is unusual to receive a response and even more unusual to receive a non-automated response. It is just as unusual for hosting provider staff to review their abuse@ contacts, let alone resolve the issues they receive.

Hemming and hawing over the need for state intervention to prevent "cyber-attacks" (vomit) and scams like the ones described here is all over the place. Many of those who support such a view justify government intervention by pointing to the incredible sophistication and technical complexity of the scams that plague internet users. However, the overwhelming majority of the scams I have encountered over the course of my career involved well known techniques and software. There is significant room for improvement in security practices simply by applying what we already know: like how to prevent (or rapidly stop) a 30 year old scam using 20 year old spam techniques to circulate 10 year old malware.

Thursday, May 7, 2015

Amazon Finally Ditches SSLv3

Amazon S3 subscribers recently received a form letter like this one:

Dear AWS Customer,

This message explains some security improvements in our services. Your security is important to us. Please review the entire message carefully to determine whether your use of the services will be affected, and if so what you need to do.

As of 12:00 AM PDT May 20, 2015, AWS will discontinue support of SSLv3 for securing connections to S3 buckets. Security research published late last year demonstrated that SSLv3 contained weaknesses in its ability to protect and secure communications. These weaknesses have been addressed in Transport Layer Security (TLS), which is the replacement for SSL. Consistent with our top priority to protect AWS customers, AWS will only support versions of the more modern TLS rather than SSLv3.

You are receiving this email because some of your users are accessing Amazon S3 using a browser configured to use SSLv3, or some of your existing applications that use Amazon S3 are configured to use SSLv3. These requests will fail once AWS disables support for SSLv3 for the Amazon S3 service.

The following bucket(s) are currently accepting requests from clients (e.g. mobile devices, browsers, and applications) that specify SSLv3 to connect to Amazon S3 HTTPS endpoints.

XXXXXXXX.XXXXXXX.XXXXXXX : XXXXXX-XXXXXX-XXXXX

For your applications to continue running on Amazon S3, your end users need to access S3 from clients configured to use TLS. As any necessary changes would need to be made in your application, we recommend that you review your applications that are accessing the specified S3 buckets to determine what changes may be required. If you need assistance (e.g. to help identify clients connecting to S3 using SSLv3), please contact our AWS Technical Support or AWS Customer Service.

For further reading on SSLv3 security concerns and why it is important to disable support for this nearly 18 year old protocol, we suggest the following articles:

https://www.us-cert.gov/ncas/alerts/TA14-290A
https://blog.mozilla.org/security/2014/10/14/the-poodle-attack-and-the-end-of-ssl-3-0/
http://disablessl3.com/#why

Thank you for your prompt attention.

Sincerely,
The Amazon Web Services Team

Amazon Web Services, Inc. is a subsidiary of Amazon.com, Inc. Amazon.com is a registered trademark of Amazon.com, Inc. This message was produced and distributed by Amazon Web Services Inc., 410 Terry Ave. North, Seattle, WA 98109-5210
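If you want to check whether an endpoint (or your own client stack) still negotiates SSLv3, openssl's s_client can attempt an SSLv3-only handshake. A sketch; note that modern OpenSSL builds are often compiled without SSLv3 support at all, in which case the -ssl3 option itself errors out, which is also an answer:

```shell
# Attempt an SSLv3-only handshake; a handshake failure (or an "unknown
# option -ssl3" error from a modern OpenSSL) means SSLv3 is unavailable,
# which is what you want to see. The endpoint name is illustrative.
ENDPOINT="s3.amazonaws.com"
if command -v openssl >/dev/null 2>&1; then
    openssl s_client -connect "${ENDPOINT}:443" -ssl3 </dev/null 2>&1 \
        | head -n 3 || true
fi
```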
