Dynamic DNS using AWS Route 53 and AWS Java SDK

Route 53 is the Amazon Web Services (AWS) DNS service. Assuming your domain's DNS is hosted with Route 53, you can create a utility in Java, using the AWS Java SDK, to update a hostname under your domain so that it points to a dynamic IP address. This can be useful if, for example, your home's public IP address changes often and you still want to be able to access it remotely.

To start off, you’ll need to create a hostname in AWS Route 53 that maps to an “A” record pointing to an IP address (doesn’t matter what IP address at this point, since we’ll update it through code later). This can be done manually online, and should be pretty self-explanatory once you open up the Route 53 control panel in the AWS web console.

Let's say your domain name is domain.com, and you want to dynamically update two hosts, home.domain.com and dynamic.domain.com, to point to the IP address of a machine whose IP is dynamically assigned.

For this, you can use the following code snippet, which I whipped up using the AWS Java SDK documentation for Route 53 and a lot of trial and error:

package utils;

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.logging.Logger;

import org.xbill.DNS.ARecord;
import org.xbill.DNS.Lookup;
import org.xbill.DNS.Record;
import org.xbill.DNS.Resolver;
import org.xbill.DNS.SimpleResolver;
import org.xbill.DNS.Type;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.route53.AmazonRoute53;
import com.amazonaws.services.route53.AmazonRoute53ClientBuilder;
import com.amazonaws.services.route53.model.Change;
import com.amazonaws.services.route53.model.ChangeAction;
import com.amazonaws.services.route53.model.ChangeBatch;
import com.amazonaws.services.route53.model.ChangeResourceRecordSetsRequest;
import com.amazonaws.services.route53.model.GetHostedZoneRequest;
import com.amazonaws.services.route53.model.HostedZone;
import com.amazonaws.services.route53.model.ListResourceRecordSetsRequest;
import com.amazonaws.services.route53.model.ListResourceRecordSetsResult;
import com.amazonaws.services.route53.model.ResourceRecord;
import com.amazonaws.services.route53.model.ResourceRecordSet;

public class DynamicDNSUpdater {
	static String AWS_ACCESS_KEY_ID = "xxx";
	static String AWS_SECRET_KEY_ID = "xxx";
	static String ROUTE53_HOSTED_ZONE_ID = "Zxxxxxxxxxxxxx";
	static String[] HOSTNAMES_TO_UPDATE = { "home.domain.com", "dynamic.domain.com" };

	static void UpdateIP() throws Exception
	{
		Logger log = Logger.getLogger(DynamicDNSUpdater.class.getName());

		HashSet<String> hostnamesNeedingUpdate = new HashSet<String>();

		URL awsCheckIpURL = new URL("http://checkip.amazonaws.com");
		HttpURLConnection awsCheckIphttpUrlConnection = (HttpURLConnection) awsCheckIpURL.openConnection();
		BufferedReader awsCheckIpReader = new BufferedReader(new InputStreamReader(awsCheckIphttpUrlConnection.getInputStream()));
		String thisMachinePublicIp = awsCheckIpReader.readLine();
		log.fine("Current public IP of this machine: "+thisMachinePublicIp);
		
	    Resolver resolver = new SimpleResolver("8.8.8.8");
		for(String hostname : HOSTNAMES_TO_UPDATE)
		{
		    Lookup lookup = new Lookup(hostname, Type.A);
		    lookup.setResolver(resolver);
		    Record[] records = lookup.run();
		    String address = ((ARecord) records[0]).getAddress().toString();
		    address = address.substring(address.lastIndexOf("/")+1);
			if(!address.equals(thisMachinePublicIp))
			{
				log.fine("!!! Needs update: "+hostname+". Current IP: "+address+". New public IP: "+thisMachinePublicIp);
				hostnamesNeedingUpdate.add(hostname+".");
			}
		}

		if(hostnamesNeedingUpdate.size()>0)
		{
			BasicAWSCredentials awsCreds = new BasicAWSCredentials(AWS_ACCESS_KEY_ID, AWS_SECRET_KEY_ID);
			AmazonRoute53 route53 = AmazonRoute53ClientBuilder
					.standard()
					.withCredentials(new AWSStaticCredentialsProvider(awsCreds))
					.withRegion(Regions.US_EAST_1) // Route 53 is a global service; us-east-1 is the conventional client region
					.build(); 
		    HostedZone hostedZone = route53.getHostedZone(new GetHostedZoneRequest(ROUTE53_HOSTED_ZONE_ID)).getHostedZone();

		    ListResourceRecordSetsRequest listResourceRecordSetsRequest = new ListResourceRecordSetsRequest()
		            .withHostedZoneId(hostedZone.getId());
		    ListResourceRecordSetsResult listResourceRecordSetsResult = route53.listResourceRecordSets(listResourceRecordSetsRequest);
		    List<ResourceRecordSet>	resourceRecordSetList = listResourceRecordSetsResult.getResourceRecordSets();
	    	List<Change> changes = new ArrayList<Change>();
		    for(ResourceRecordSet resourceRecordSet : resourceRecordSetList)
		    {
		    	if(resourceRecordSet.getType().equals("A") && hostnamesNeedingUpdate.contains(resourceRecordSet.getName()))
		    	{
			    	List<ResourceRecord> resourceRecords = new ArrayList<ResourceRecord>();
			    	ResourceRecord resourceRecord = new ResourceRecord();
			    	resourceRecord.setValue(thisMachinePublicIp);
			    	resourceRecords.add(resourceRecord);
			    	resourceRecordSet.setResourceRecords(resourceRecords);
			    	Change change = new Change(ChangeAction.UPSERT, resourceRecordSet);
			    	changes.add(change);
			    	log.fine("Updating "+resourceRecordSet.getName()+" to A "+thisMachinePublicIp);
		    	}
		    }
		    if(changes.size()>0)
		    {
		    	ChangeBatch changeBatch = new ChangeBatch(changes);
		    	ChangeResourceRecordSetsRequest changeResourceRecordSetsRequest = new ChangeResourceRecordSetsRequest()
		    			.withHostedZoneId(ROUTE53_HOSTED_ZONE_ID)
		    			.withChangeBatch(changeBatch);
		    	route53.changeResourceRecordSets(changeResourceRecordSetsRequest);
		    	log.fine("Done!");
		    }
		    else
		    {
		    	log.fine("None of the specified hostnames found in this zone");
		    }
		}
		else
			log.fine("No updates required!");
	}

	public static void main(String args[]) throws Exception {
		UpdateIP();
	}
}

In order for this to work correctly, you'll need to set up an AWS API access key. The key will need either full access to your AWS account or, better, just the Route 53 permissions the code actually uses. The documentation for creating access keys is available in the AWS IAM documentation.
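
If you'd rather not give the key full access, a minimal IAM policy scoped to just the hosted zone should cover what the code above does. Here's a sketch (the zone ID placeholder is the same one used in the code):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "route53:GetHostedZone",
        "route53:ListResourceRecordSets",
        "route53:ChangeResourceRecordSets"
      ],
      "Resource": "arn:aws:route53:::hostedzone/Zxxxxxxxxxxxxx"
    }
  ]
}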

You'll need to update AWS_ACCESS_KEY_ID and AWS_SECRET_KEY_ID in the code block above with the key details you get from AWS. Then you'll need to update ROUTE53_HOSTED_ZONE_ID with the Zone ID of your domain hosted in Route 53 (it begins with Z, at least as far as I've noticed). And, of course, you'll need to update HOSTNAMES_TO_UPDATE with the hostname(s) that should be dynamically updated with the public IP of the machine running this utility.

Here's a quick breakdown of the code: we start by getting the public IP of the machine this code is running on, and then we look up the current IP of each of the hostnames provided. If they don't match, an update with the new IP is needed. That's when the com.amazonaws.services.route53.AmazonRoute53 client comes in: using the AWS API access key, it fetches the record sets for the hosted zone provided, and for each "A" record matching a hostname that needs updating, it queues an UPSERT change pointing at the new public IP. Finally, it posts all the changes in a single changeResourceRecordSets() call.

And that's it! There you have it: a Java utility that keeps your DNS records pointed at the public IP address of the machine it's running on.

Now, in order to run this utility periodically (so it can actually do what it's meant to without you running it manually), you can compile the Java code and package it in a jar, or simply copy the .class files to a directory somewhere. (Note: if you're using Eclipse, it's easy to export your project as a runnable jar.)

Then, if you're on Linux, you can set up a crontab entry to run this Java utility from the command line every 5 minutes or so.
Provided Java is installed and available on the system path, the command would look something like: java -cp /path/to/MyUtils.jar utils.DynamicDNSUpdater. And if you're on Windows, you can set up a task with the Windows Task Scheduler to run the same command every 5 minutes. Pro tip: on Windows you may want to use "javaw" instead of "java", so you don't get a little console window popping up and disappearing periodically while you're in the middle of something on the same machine.
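
For example, a crontab entry that runs the updater every 5 minutes might look something like this (assuming the jar lives at /path/to/MyUtils.jar, as in the command above):

*/5 * * * * java -cp /path/to/MyUtils.jar utils.DynamicDNSUpdater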

Using AmazonS3Client to loop through batches of S3 objects

AWS provides the AmazonS3Client class, which is part of the AWS Java SDK. This class can be used to interact with objects (files) stored in S3.

An important thing to note about AmazonS3Client is that it returns results in batches of at most 1000. If your bucket has fewer than 1000 objects, all is good: you can call amazonS3Client.listObjects(bucketName); and it will return all the objects in the bucket.

But if the bucket contains more than 1000 files, you will need to loop through the files in batches. This is not entirely obvious and can cause you to miss files (as I certainly did)!

To get started, you would instantiate AmazonS3Client like so:

AmazonS3Client amazonS3Client = new AmazonS3Client(new BasicAWSCredentials(KEY, SECRET));

The approach I like to take is to first loop through and collect all the files up front like so:

ObjectListing objectListing = amazonS3Client.listObjects(bucketName);
List<S3ObjectSummary> s3ObjectSummaries = objectListing.getObjectSummaries();
while (objectListing.isTruncated())
{
   objectListing = amazonS3Client.listNextBatchOfObjects(objectListing);
   s3ObjectSummaries.addAll(objectListing.getObjectSummaries());
}

Note: if memory is a concern or you have a very large number of objects, you can modify the approach to process each object as you fetch it, batch by batch from the API, instead of collecting everything up front.
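
For instance, a rough sketch of that batch-by-batch approach might look like the following (processObject() here is just a hypothetical stand-in for whatever you need to do with each object):

ObjectListing objectListing = amazonS3Client.listObjects(bucketName);
while (true)
{
	for (S3ObjectSummary s3ObjectSummary : objectListing.getObjectSummaries())
	{
		processObject(s3ObjectSummary.getKey()); // hypothetical: handle one object at a time
	}
	if (!objectListing.isTruncated())
		break;
	// fetch the next batch of up to 1000 objects
	objectListing = amazonS3Client.listNextBatchOfObjects(objectListing);
}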

If you did collect them all in a List up front, you can then loop through each object like so:

for(S3ObjectSummary s3ObjectSummary : s3ObjectSummaries)
{
	String s3ObjectKey = s3ObjectSummary.getKey();
	// Do whatever you need with s3ObjectKey / s3ObjectSummary
}

Installing pandas, scipy, numpy, and scikit-learn on AWS EC2

Most of the development/experimentation I was doing with scikit-learn's machine learning algorithms was on my local development machine. But eventually I needed to do some heavy-duty model training and cross-validation, which would have taken weeks on my local machine. So I decided to make use of one of the cheaper compute-optimized EC2 instances that AWS offers.

Unfortunately I had some trouble getting scikit-learn to install on a stock Amazon EC2 Linux image, but I figured it out eventually. I'm sure others will run into this, so I thought I'd write about it.

Note: you can of course use an EC2 community image, or an image from the EC2 marketplace, that already has Anaconda or scikit-learn and friends installed. This guide is for installing it on a stock Amazon EC2 Linux instance, in case you already have an instance set up that you want to use.

In order to get scikit-learn to work, you'll need to have pandas, scipy, and numpy installed too. Fortunately Amazon EC2 Linux comes with Python 2.7 already installed, so you don't need to worry about that.

Start by ssh'ing into your box. Drop into a root shell with the following command (if you're going to be typing "sudo" before every single command, you might as well be root by default anyway, right?):

sudo su

First you need to install some development tools, since you will literally be compiling some libraries in a bit. Run the following commands:

yum groupinstall 'Development Tools'
yum install python-devel

Next you’ll install the ATLAS and LAPACK libraries, which are needed by numpy and scipy:

yum install atlas-sse3-devel lapack-devel

Now you're ready to install all the necessary Python libraries, and finally scikit-learn itself:

pip install numpy
pip install scipy
pip install pandas
pip install scikit-learn
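
To quickly sanity-check that everything installed and imports cleanly, you can run a one-liner like this:

python -c "import numpy, scipy, pandas, sklearn; print(sklearn.__version__)"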

Congratulations. You now have scikit-learn installed on the EC2 Linux box!

Amazon EC2 ssh timeout due to inactivity

Well, this applies to any Linux instance you may be remotely connected to, depending on how sshd is configured on the remote server and how the ssh config on your local (developer) machine is set up. Essentially, in some cases the host you're connecting to times out your idle session pretty quickly, so you have to reconnect often.

This was bothering me for a while. I'm usually on and off a Linux shell on EC2 instances all day, and it seemed that every time I came back to it, I'd been timed out, forcing me to reconnect. Not a huge deal, just a nuisance.

To remedy this without changing the remote server's sshd settings, you can add a keep-alive option to the ssh config on your local machine. Edit the ~/.ssh/config file and add the following line:

ServerAliveInterval 50

And it's as simple as that! AWS EC2 instances seem to drop idle sessions after about 60 seconds, so a 50-second keep-alive interval keeps your connection from being timed out so aggressively.
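
If you'd rather not apply this to every host you ssh to, you can scope it under a Host entry instead. For example (the host pattern here is just an illustration, matching EC2 public DNS names):

Host *.amazonaws.com
    ServerAliveInterval 50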

Installing MongoDB on AWS EC2 and turning on zlib compression

At this time AWS doesn't provide a managed RDS-style offering for MongoDB. So in order to have a MongoDB server on the AWS cloud, you have to install it yourself on an EC2 instance.

The full documentation for installing a MongoDB instance on an AWS EC2 can be seen at: https://docs.mongodb.com/v3.0/tutorial/install-mongodb-on-amazon/. Here’s a quick summary though.

First you'll need to create a Linux EC2 server. Once you have the server created, log in to the machine over SSH and drop into a root shell using the following command:

sudo su

Next you’ll need to create the repository info for yum to use to download the prebuilt MongoDB packages. You’ll create a file at /etc/yum.repos.d/mongodb-org-3.0.repo:

vi /etc/yum.repos.d/mongodb-org-3.0.repo

And copy/paste the repository definition:

[mongodb-org-3.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.0/x86_64/
gpgcheck=0
enabled=1

Save and exit vi, then type in the following command to install MongoDB:

yum install -y mongodb-org

And that’s it! Now you have MongoDB installed on your EC2.

Next, to turn on compression, you'll need to edit /etc/mongod.conf:

vi /etc/mongod.conf

Scroll down to the "storage" directive and update it so that it includes the WiredTiger engine settings shown below (the file is YAML, so indentation matters):

storage:
  engine: "wiredTiger"
  wiredTiger:
    collectionConfig:
      blockCompressor: "zlib"

Now any collections you create will be compressed with zlib, which gives a better compression ratio than the default snappy compressor, at the cost of some extra CPU.

Now start your MongoDB instance by typing in this command:

service mongod start
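
If you also want MongoDB to come back up automatically when the instance reboots, this should do it on Amazon Linux (the mongodb-org package installs a mongod init script):

chkconfig mongod on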

And of course you’ll want to custom configure your MongoDB instance (or not). You can find several guides and tutorials to do that online.

Running AWS CLI commands from crontab

This is a short post to explain how to run AWS CLI commands from a crontab.

First you’ll need to install and set up the AWS CLI. More information here: http://docs.aws.amazon.com/cli/

Once you’ve set up AWS CLI, you’ll notice that there is a “.aws” folder created in the HOME folder for the user you’re logged in as. If it’s root, it would be “/root/.aws”.

The problem with running AWS CLI commands from crontab is that crontab sets HOME to “/”, so the “aws” command will not find “~/.aws”.

In order to get around this, you simply need to set HOME="/root" (or whatever the home directory is for the user the AWS CLI was set up under). This can be done in the shell script that crontab calls, or, if the aws command is directly in the crontab, the command could be something like the following:

HOME="/root" && aws <command>
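
Alternatively, cron lets you set environment variables at the top of the crontab itself, so you can set HOME once there and keep the entries clean. For example (the script path here is just a hypothetical placeholder):

HOME=/root
*/10 * * * * /root/scripts/s3-backup.sh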

And that’s it!

Setting up AWS CLI and dumping an S3 bucket

AWS CLI (command line interface) is very useful when you want to automate certain tasks. This post is about dumping a whole S3 bucket from the command line. This could be for any purpose, such as creating a backup.

First of all, if you don’t already have it installed, you’ll need to download and install the AWS CLI. More information here: http://docs.aws.amazon.com/cli/latest/userguide/installing.html

To configure AWS CLI, type the command:

aws configure

It will ask for your credentials: the Access Key ID and the Secret Access Key (it also prompts for a default region and output format). More information on how to set up a key is here: http://docs.aws.amazon.com/general/latest/gr/managing-aws-access-keys.html

And that’s it! You now have the power of manipulating your AWS environment from your command line.

In order to dump a bucket, you’ll need to first make sure that the account belonging to the AWS Key you generated has read access to the bucket. More on setting up permissions in S3 here: http://docs.aws.amazon.com/AmazonS3/latest/dev/s3-access-control.html
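
If you want to grant only what a dump needs, a read-only policy along these lines should work (a sketch; "my-bucket" is a placeholder for your actual bucket name):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::my-bucket"
    },
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}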

To dump the whole contents of an S3 bucket, you can use the following command:

aws s3 cp --quiet --recursive s3://<bucket-name>/ <local-directory>

This will copy the entire contents of the bucket into the local directory you specify. As easy as that!
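
As a side note, if you plan to run this regularly as a backup, aws s3 sync s3://<bucket-name>/ <local-directory> is worth a look instead of cp, since it only copies objects that are new or have changed since the last run.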