Installing pandas, scipy, numpy, and scikit-learn on AWS EC2

Most of the development and experimentation I was doing with scikit-learn’s machine learning algorithms was on my local development machine. But eventually I needed to do some heavy-duty model training and cross-validation, which would take weeks locally. So I decided to make use of one of the cheaper compute-optimized EC2 instances that AWS offers.

Unfortunately I had some trouble getting scikit-learn to install on a stock Amazon EC2 Linux instance, but I figured it out eventually. I’m sure others will run into this, so I thought I’d write about it.

Note: you can of course get an EC2 community image or an image from the EC2 Marketplace that already has Anaconda or scikit-learn and related tools installed. This guide is for installing everything on a stock Amazon EC2 Linux instance, in case you already have an instance set up that you want to use.

In order to get scikit-learn to work, you’ll need to have pandas, scipy and numpy installed too. Fortunately Amazon EC2 Linux comes with python 2.7 already installed, so you don’t need to worry about that.

Start by ssh’ing into your box. Drop into a root shell with the following command (if you’re going to be typing “sudo” before every single command, you might as well be root by default, right?):

sudo su

First you need to install some development tools, since you’ll be compiling some of these libraries from source in a bit. Run the following commands:

yum groupinstall "Development Tools"
yum install python-devel

Next you’ll install the ATLAS and LAPACK libraries, which are needed by numpy and scipy:

yum install atlas-sse3-devel lapack-devel

Now you’re ready to install the necessary Python libraries, and finally scikit-learn itself. Install them in this order, since scipy and scikit-learn build against numpy:

pip install numpy
pip install scipy
pip install pandas
pip install scikit-learn
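
To make sure everything installed correctly, you can run a quick sanity check that imports each library and prints the scikit-learn version:

python -c "import numpy, scipy, pandas, sklearn; print(sklearn.__version__)"

If that prints a version number with no import errors, you’re all set.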

Congratulations. You now have scikit-learn installed on the EC2 Linux box!

Amazon EC2 ssh timeout due to inactivity

Well, this applies to any Linux instance you may be remotely connected to, depending on how sshd is configured on the remote server and how the ssh config on your localhost (developer machine) is set up. But essentially, some sshd hosts time out idle sessions pretty quickly, so you have to reconnect often.

This was bothering me for a while. I’m on and off a Linux shell on EC2 instances all day, and it seemed that every time I came back to it, I’d been timed out and had to reconnect. Not a huge deal, just a nuisance.

To remedy this without changing the settings in the remote server’s sshd config, you can add a keep-alive to your localhost ssh config. Edit the ~/.ssh/config file and add the following line:

ServerAliveInterval 50

And it’s as simple as that! AWS EC2 instances seem to be set up to time out idle ssh sessions after 60 seconds, so sending a keep-alive every 50 seconds prevents you from getting timed out so aggressively.
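
For reference, the relevant part of your ~/.ssh/config might end up looking like this (the Host * stanza applies the setting to all hosts; you could instead scope it to a specific hostname):

Host *
    ServerAliveInterval 50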

Installing MongoDB on AWS EC2 and turning on zlib compression

At this time AWS doesn’t provide an RDS type for MongoDB. So in order to have a MongoDB server on the AWS cloud, you have to install it manually on an EC2 instance.

The full documentation for installing MongoDB on an AWS EC2 instance is at: https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-amazon/. Here’s a quick summary, though.

First you’ll need to create a Linux EC2 server. Once you have the server created, log in to the machine over ssh and drop into a root shell using the following command:

sudo su

Next you’ll need to create the repository info for yum to use to download the prebuilt MongoDB packages. You’ll create a file at /etc/yum.repos.d/mongodb-org-3.0.repo:

vi /etc/yum.repos.d/mongodb-org-3.0.repo

And copy/paste in the repository definition:

[mongodb-org-3.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.0/x86_64/
gpgcheck=0
enabled=1

Save and exit vi, then type in the following command to install:

yum install -y mongodb-org

And that’s it! Now you have MongoDB installed on your EC2.
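
You can confirm the install by checking the version, which should report a 3.0.x release:

mongod --version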

Next, to turn on compression, you’ll need to edit /etc/mongod.conf:

vi /etc/mongod.conf

Scroll down to the “storage” section and add these settings under it, keeping any existing entries (such as dbPath) in place. The config file is YAML, so indentation matters:

storage:
  engine: "wiredTiger"
  wiredTiger:
    collectionConfig:
      blockCompressor: "zlib"

Now any collections you create will be compressed with zlib, which gives the best compression ratio of WiredTiger’s built-in block compressors (snappy, the default, is faster but compresses less).

Now start up your MongoDB instance by typing in this command:

service mongod start
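
If you’d like to verify that new collections really do get zlib compression, you can create a throwaway collection and inspect its creation string. Open the mongo shell by typing “mongo”, then run the following (the collection name is just an example):

db.createCollection("compressiontest")
db.compressiontest.stats().wiredTiger.creationString

Look for block_compressor=zlib in the output.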

And of course you’ll want to further configure your MongoDB instance to fit your needs (or not). You can find several guides and tutorials for that online.
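
One thing worth doing right away, though, is making sure mongod comes back up if the instance reboots. On Amazon Linux that’s a one-liner:

chkconfig mongod on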

Running AWS CLI commands from crontab

This is a short post to explain how to run AWS CLI commands from a crontab.

First you’ll need to install and set up the AWS CLI. More information here: http://docs.aws.amazon.com/cli/

Once you’ve set up AWS CLI, you’ll notice that there is a ".aws" folder created in the HOME folder for the user you’re logged in as. If it’s root, it would be "/root/.aws".

The problem with running AWS CLI commands from crontab is that crontab sets HOME to "/", so the "aws" command will not find "~/.aws".
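
You can see this for yourself by temporarily adding a cron entry that dumps cron’s environment to a file, then inspecting the HOME line in the output (the output path here is just an example):

* * * * * env > /tmp/cron-env.txt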

In order to get around this, you simply need to set HOME="/root" (or whatever the HOME is for the user the AWS CLI was set up under). This can be done in the shell script that is being called by crontab, or, if the aws command is directly in the crontab, the entry could look something like the following:

HOME="/root" && aws <command>
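
For example, a nightly backup entry in root’s crontab might look something like the following. The schedule, paths, and bucket name are all placeholders; run "which aws" to find the right path to the aws binary on your machine:

0 2 * * * HOME=/root /usr/bin/aws s3 sync /var/backups s3://my-backup-bucket >> /var/log/s3-backup.log 2>&1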

And that’s it!

Setting up the AWS CLI and dumping an S3 bucket

AWS CLI (command line interface) is very useful when you want to automate certain tasks. This post is about dumping a whole S3 bucket from the command line. This could be for any purpose, such as creating a backup.

First of all, if you don’t already have it installed, you’ll need to download and install the AWS CLI. More information here: http://docs.aws.amazon.com/cli/latest/userguide/installing.html

To configure AWS CLI, type the command:

aws configure

It will ask for your credentials: the Access Key ID and the Secret Access Key. More information on how to set up an access key is here: http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html
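
A typical session looks something like the following. The key values below are fake placeholders, and the region and output format are optional (just hit Enter to skip them):

AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json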

And that’s it! You now have the power of manipulating your AWS environment from your command line.
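
A quick smoke test to confirm your credentials work is to list the buckets in your account:

aws s3 ls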

In order to dump a bucket, you’ll need to first make sure that the account belonging to the AWS Key you generated has read access to the bucket. More on setting up permissions in S3 here: http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html

To dump the whole contents of an S3 bucket, you can use the following command:

aws s3 cp --quiet --recursive s3://<bucket-name>/ <local-dir>

This will copy the entire contents of the bucket to your local directory. As easy as that!
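
By the way, if you plan to dump the bucket repeatedly (say, as a scheduled backup), aws s3 sync may serve you better than cp, since it only transfers files that have changed since the last run. The bucket and directory names here are placeholders:

aws s3 sync s3://my-backup-bucket/ ./my-local-backup/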