Limiting the number of uploaded files in ng-file-upload

ng-file-upload is a wonderful module to help you manage uploading files through AngularJS. Head on over to https://github.com/danialfarid/ng-file-upload to check it out.

One thing that isn’t obvious is how to limit the number of files that a user can upload. (This of course only applies if you are allowing multiple file uploads).

One way to limit the number of files a user can upload is through the “ngf-validate-fn” angular directive. This directive can be used to call a custom function defined in your controller that validates the file.

In this custom validation function, you can check the number of files that already exist in the files model, and return true (meaning validation passed and the file should be allowed), or false or an error name (meaning validation failed because the maximum number of files has been reached).

Let’s say you want to limit the maximum number of files uploaded to 10. It would look like this in your html:

<div ngf-select ngf-multiple="true" ng-model="files" ngf-validate-fn="validateFile($file)"></div>

And in your controller:

$scope.validateFile = function(file) {
  // reject the file once 10 files are already in the model
  if ($scope.files && $scope.files.length >= 10) {
    return "TOO_MANY_FILES";
  }
  return true;
};

And that’ll do it.

HOWEVER, big caveat: this will only work if the user selects one file at a time. If the user selects multiple files at once (from the file-selection dialog their browser presents), this limitation trick will not work and more files will get through. I'm currently either implementing this feature myself as a native directive in the ng-file-upload module, or waiting for someone else to implement it; I've posted this as an enhancement request on the module's GitHub page.
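
In the meantime, here's a workaround sketch that uses only core Angular APIs: watch the files model in your controller and trim it whenever it grows past the limit (the model name matches the ng-model in the HTML above, and the limit of 10 is just our example):

$scope.$watchCollection('files', function(newFiles) {
  // if the user selected enough files at once to exceed the limit, keep only the first 10
  if (newFiles && newFiles.length > 10) {
    $scope.files = newFiles.slice(0, 10);
  }
});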

Migrating AWS RDS MySQL instances from non-encrypted to encrypted

We had to do this exercise recently due to a security audit requirement, so I thought I’d write about it. If you’ve got old AWS (Amazon Web Services) RDS (Relational Database Service) instances around since before encrypted databases were an option in RDS, or you just never encrypted your databases, and are now deciding to encrypt them, you’ve come to the right place. The steps below apply specifically to MySQL RDS instances, but the same guidelines can be used for other database server types as well.

In summary, RDS doesn’t give you an option to simply encrypt your database if it was created as non-encrypted. Furthermore, you cannot take a snapshot of your non-encrypted database and create an encrypted instance out of it. Essentially you have to manually export the data from the non-encrypted instance, import it into the new encrypted instance, and switch your applications over to use the new encrypted database instances. Then you can get rid of your old non-encrypted instances.

Note that RDS does not allow you to create a full-blown replicated database (one that is not just a read-only replica tied to the existence of a master database). Ideally this feature would exist in RDS, in which case you could use replication from one DB instance (the non-encrypted one) to another (the encrypted one) so that data is automatically replicated and synchronized between the two. This matters if you have live applications using your databases: with replication from the old database to the new one, you could switch the application over to the new database with almost no downtime and without any manual export/import of data.

So unfortunately if you have live applications using your non-encrypted AWS RDS databases and you need to migrate to encrypted databases, you’ll need to pick a time to do the migration, get prepared, let your users know about the maintenance downtime, and take your applications offline to make it happen. (I’m hoping the good folks at AWS one day soon add a feature for us to fully replicate independently standing databases within RDS).

Anyway, on to the steps. To start out, depending on the size of your databases and your connection speed, you’ll need to decide whether to export the data from your non-encrypted database onto a machine external to AWS (such as your own developer/administration machine wherever you are), OR export it onto an EC2 instance in AWS. If your databases are relatively small in size (and non-numerous) or you just have a ton of bandwidth, you can decide to download all the data onto your own machine. Otherwise I’d recommend you create an EC2 instance in AWS if you don’t have one already, and use that to temporarily act as the machine to export data to, and import data from.

We used a Linux EC2 machine so I’ll focus on that. First of all, you want to make sure that the EC2 machine has an encrypted volume attached to it. This ensures that the data you export doesn’t end up on a non-encrypted disk in AWS (so as not to violate any security rules or policies for your data). At the time of writing this blog entry, EC2 machine root volumes cannot be encrypted, but you can attach encrypted volumes to them. In summary, from the AWS web console, in the EC2 console, you can create an encrypted volume and attach it to your EC2 machine. Then log on to your machine, format the volume, and mount it. I’m sure there are various guides out there for this, so I won’t focus on the nitty gritty.

ssh into your Linux EC2 instance and ensure that the mysql client is installed by typing in the “mysql” command. If it's not, try “yum install mysql” to install it. Next, if you have security group (firewall) rules applied to your RDS instances, make sure the EC2 machine can connect to the databases (add the EC2 machine's IP to your RDS security group(s)). Ensure you can connect to your database by typing in the following command: mysql -u (username) -p --host=(database hostname)

You will probably want to create the new encrypted databases in RDS ahead of the actual scheduled “maintenance” with your users, so that there is minimal downtime during the maintenance window itself. So assuming your encrypted databases are created and ready, you're in the maintenance window, and you've taken your live applications offline, you can begin exporting data from each database.

Now you’re finally ready to export the data from the database. Connect to the EC2 linux machine and cd to the directory the encrypted volume is mounted on. Type in the following command to dump the database from the old non-encrypted MySQL RDS instance:

mysqldump --opt --events --routines --triggers --user=(username) -p --host=(hostname) (database name) > (database name).sql

Of course you'll want to replace the username, hostname, and database names (everything in parentheses) with real values. You will be prompted for the password. This command includes everything you'll need from your old database. More information on the mysqldump command, including how to export multiple databases from the same MySQL server, can be found here: http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html

Then to import the data into your new encrypted database, use the following command:

mysql --user=(user) -p --host=(hostname) -e "drop database if exists (database name); create database (database name); use (database name); source (database name).sql;"

Note that the export is usually very fast, but the import is slower. Also note that if you changed the username from what it was in the old database, you'll need to modify all instances of the old username in the .sql file produced by mysqldump. To accomplish that, try this sed command:

sed -i "s/\`(old username)\`@\`%\`/CURRENT_USER/g" (database name).sql

Lastly, after the export and import are finished, update the database hostnames from the old instances to the new ones in all your applications. Try out your applications to ensure the new databases are being queried, and once everything checks out, you're ready to put your live applications back online!

MEAN stack: associating a socket with a user

I’m using the MEAN stack for an application I’m working on. The project was seeded using the Angular fullstack yeoman generator (https://github.com/DaftMonk/generator-angular-fullstack/).

Out of the box the project has support for websockets (using socket.io) and users (using passportjs). However, sockets on the server side (in Express running on Node) are not tied to users.

For several reasons the application likely needs to know what user a socket belongs to. For example, if there’s a change made to a model that needs to be emitted, you may need to emit it to only users with a certain role.

To get around this, I made a bunch of modifications, which I'll detail below. Essentially, the user object gets saved within the socket object. So when a socket is being processed, say through a model-level trigger (“save” or “delete”) in mongoose for example, the user object will be in the socket and can be used in whatever processing logic.

The MEAN project seeded from the angular fullstack generator authenticates users with a jwt token stored in a cookie. So when a user logs in, an event can be emitted over the socket with the jwt token as the payload to register the user with the socket. Furthermore, in your socketio.on(‘connection’,…) function in express, you can read the cookie to get the jwt token, then get the user and put it in the socket. This is essential because if a user is already logged in and returns to your web application (or opens a new tab to it), a new websocket is created but no new login event is emitted; the cookie is what lets you associate that socket with the user.

First, let’s define a function that can take a token either directly as a parameter, or read it from the cookie in a socket, and get the user. This same function can be called from a login emit event with a jwt token as the payload over the socket, or from socketio.on(‘connection’,…).

var auth = require('../auth/auth.service');
function setupUserInSocket(socket, inputToken)
{
  var tokenToCheck = inputToken;
  if(!tokenToCheck && socket && socket.handshake && socket.handshake.headers && socket.handshake.headers.cookie)
  {
    socket.handshake.headers.cookie.split(';').forEach(function(x) {
      var arr = x.split('=');
      if(arr[0] && arr[0].trim()=='token') {
        tokenToCheck = arr[1];
      }
    });
  }
  if(tokenToCheck)
  {
    auth.getUserFromToken(tokenToCheck, function (err, user) {
      if(user) {
        console.info('[%s] socket belongs to %s (%s)', socket.address, user.email, user._id);
        socket.user = user;
      }
    });
  }
}

Note that the cookie is in socket.handshake.headers.cookie. Also note that I call auth.getUserFromToken, which is another function I created that verifies the jwt token and decodes the user ID from it, queries the user from the model, and returns it. The function looks like this:

var User = require('../api/user/user.model');
var async = require('async');
var config = require('../config/environment');
var jwt = require('jsonwebtoken'); // needed for jwt.verify() below
function getUserFromToken(token, next)
{
  async.waterfall
  (
    [
      function(callback)
      {
        jwt.verify(token, config.secrets.session, function(err, decoded) {
          callback(err, decoded);
        });
      },
      function(decoded, callback)
      {
        if(!decoded || !decoded._id)
          callback(null, null);
        else {
          User.findById(decoded._id, function (err, user) {
            callback(null, user);
          });
        }
      },
    ],
    function(err, user)
    {
      next(err, user);
    }
  );
}

Next, let’s use socketio.on(‘connection’,…) to call the function with the socket. If the jwt token is already in the cookies, meaning the user already logged in previously, the user will be associated with the socket:

socketio.on('connection', function (socket) {
  setupUserInSocket(socket);
  //...
});

And that’s it for that particular scenario! Next, let’s worry about when a user actually logs in. Within socketio.on(‘connection’, …) we can listen for login emits from the client over the socket like so:

socket.on("login", function(token,next) {
  setupUserInSocket(socket,token);
  next({data: "registered"});
});

And on the client side, we emit the login event over the socket when a successful login occurs. This can be done in a number of ways, but I decided to do it in login.controller.js. After Auth.login() is called, I call socket.login():

angular.module('classActApp')
  .controller('LoginCtrl', function ($scope, Auth, socket, ...) {
    //...
        Auth.login({
          email: $scope.user.email,
          password: $scope.user.password
        })
        .then( function() {
                  //...
                  socket.login();
//...

And in the client side socket.service.js, the login() function does the following:

angular.module('classActApp')
  .factory('socket', function(socketFactory, $location, CONSTANTS, Auth) {
    //...
    return {
      //...
      login: function () {
        socket.emit("login", Auth.getToken(), function(data) {
        });
      },

Note that you also need to worry about logouts. If the user logs out but sticks around on your web application, the socket for that session will remain associated with the user they were previously logged in as. This could be undesirable for several reasons. So on the express (server) side, you want to listen for logout events and clear the user out of the socket like so (note, this is added within socketio.on(‘connection’,…)):

socket.on("logout", function(next) {
  if(socket && socket.user) {
    console.info('[%s] socket being disassociated from %s (%s)', socket.address, socket.user.email, socket.user._id);
    socket.user = null;
  }
  next({data: "un-registered"});
});

And on the angular (client) side, when the user logs out, you want to emit a logout event over the socket. In the angular fullstack seeded project I'm using, this happens in navbar.controller.js, which has the logout function.

angular.module('classActApp')
  .controller('NavbarCtrl', function ($scope, $location, socket, ...) {
    //...
    $scope.logout = function() {
      //...
            socket.logout();

And in socket.service.js:

angular.module('classActApp')
  .factory('socket', function(socketFactory, $location, CONSTANTS, Auth) {
    //...
    return {
      //...
      logout: function () {
        socket.emit("logout", function(data) {
        });
      },

And that's it! Now all your sockets have the user they belong to (if any) associated with them, accessible from socket.user on the server side. So whether emitting events from the server over a socket, or reading events emitted from the client, we now know which user the socket belongs to!
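
For example, here's a hedged sketch of emitting a model change only to sockets whose user has a given role (this assumes socket.io 1.x, where socketio.sockets.sockets is an array of the connected sockets; the emitToRole helper and event name are illustrative):

function emitToRole(socketio, event, doc, role) {
  socketio.sockets.sockets.forEach(function(s) {
    // socket.user was set by setupUserInSocket above
    if (s.user && s.user.role === role) {
      s.emit(event, doc);
    }
  });
}
// e.g. from a mongoose post-save hook: emitToRole(socketio, 'thing:save', doc, 'admin');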

Using Grunt to deploy to individual OpenShift Applications

I ran across this problem using the MEAN stack seed project generated through the angular fullstack generator (https://github.com/angular-fullstack/generator-angular-fullstack). The seed project's developers made it easy to deploy an application to OpenShift; however, they set it up so you can only deploy to a single OpenShift application instance.

One of the requirements for my current MEAN project is to deploy it to two environments: a staging environment and a full-blown production environment. However, with this seed project and its Grunt buildcontrol configuration, that isn't possible out of the box.

So I took it upon myself to edit the Gruntfile and the yeoman OpenShift deployment script to allow creating multiple environments.

The first step is to edit the generator-angular-fullstack\openshift\index.js file that ships with the yeoman generator for angular fullstack. (Note: this file is outside of your actual MEAN project seeded by the generator, typically under your home directory, so you'll need to search for it on your filesystem.) As of the writing of this blog post, openshift\index.js has hardcoded the application name to be ‘openshift’. Find all occurrences of ‘openshift’ (single quotes included) and replace them with this.deployedName (no quotes). This allows the application name to be the actual name you input when running the openshift deployment yeoman script.

Secondly, you'll need to edit the Gruntfile.js in your project. Skip down to the buildcontrol section. Under openshift, change the remote field from ‘openshift’ to grunt.option(‘openshift_target’). grunt.option() reads an argument from the command line, so this way you can specify the application name you want to deploy to.
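
For reference, here's a sketch of what the relevant buildcontrol block might look like after the change (the options shown are typical for grunt-build-control, but your generated Gruntfile may differ):

buildcontrol: {
  options: {
    dir: 'dist',
    commit: true,
    push: true,
    message: 'Built %sourceName% from commit %sourceCommit% on branch %sourceBranch%'
  },
  openshift: {
    options: {
      remote: grunt.option('openshift_target'),
      branch: 'master'
    }
  }
}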

And voilà, that’s it. Now you have the capability to deploy (and update) your MEAN application to multiple OpenShift application environments.

As per the angular fullstack generator documentation, to deploy to OpenShift, you would run the following command:

yo angular-fullstack:openshift

In the yeoman script run by the command above, you'll be asked for the application name. I named mine “staging” and “production” (two separate environments created by running the command twice). Then, to deploy to an application initially, or to push code updates later, you'd run these commands:

grunt build
grunt buildcontrol:openshift --openshift_target=staging
grunt buildcontrol:openshift --openshift_target=production

MEAN stack foreign language translations

For the MEAN application I’m currently building, there is a requirement to have it served in multiple user-selectable languages. I used the MEAN fullstack generator (https://github.com/DaftMonk/generator-angular-fullstack), which does not provide i18n (internationalization) support.

When setting up my application for i18n, I realized that I needed translations available up and down the stack: not just in the view, but also in the model and controller. I ended up using angular-translate (https://github.com/angular-translate/angular-translate) and MomentJS (https://github.com/moment/moment/) on the client side in AngularJS. And for the server side model and controller, I created my own very simple custom solution in Node.

I think angular-translate works great in Angular, and there are plenty of guides around, so I won't go into it. But I want to mention that angular-translate doesn't have great support (at least that I could find) for translating dates and numbers. This is where MomentJS can fill in the gaps; again, there are plenty of guides and good documentation out there for MomentJS.
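
For example, here's a hedged sketch of locale-aware date formatting with MomentJS (assuming moment and the relevant locale files are loaded, and user.language holds the selected language code):

moment.locale(user.language); // e.g. 'fr'
var formatted = moment(someDate).format('LL'); // long date format, localized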

For Node, I created a module that simply has a JSON of all the translations, and a function that returns the translation. Example below:

—translations.js—

'use strict';
var en = {
  VERIFICATION_EMAIL_SUBJECT: 'Sign up verification',
  VERIFICATION_EMAIL_TEXT: 'You are receiving this email because you or someone else has signed up with this email address (%s)',
};
var fr = {
  VERIFICATION_EMAIL_SUBJECT: 'S\'inscrire vérification',
  VERIFICATION_EMAIL_TEXT: 'Vous recevez ce courriel parce que vous ou quelqu\'un d\'autre a signé avec cette adresse email (%s)',
};
module.exports.get = function(lang, key)
{
  if(lang == 'en')
    return en[key];
  else if(lang == 'fr')
    return fr[key];
};
module.exports.en = en;
module.exports.fr = fr;

And then use it like so:

var sprintf = require('sprintf-js').sprintf; // assuming the sprintf-js package for %s substitution
var translations = require('./translations');
console.log(translations.get('en','VERIFICATION_EMAIL_SUBJECT'));
console.log(sprintf(translations.get(user.language,'VERIFICATION_EMAIL_TEXT'),'blah@blah.com'));

This way translations can be available anywhere on the server side that uses Node.

The power of AWS Elastic Beanstalk Environment Configuration using .ebextensions

This is a quick post exploring the usefulness of AWS Elastic Beanstalk Environment Configuration files using “.ebextensions”.

.ebextensions config files, written in YAML (http://yaml.org/), can be used to set up the server platform by automatically performing various custom actions and configuration when an application is uploaded to AWS Elastic Beanstalk.

Through .ebextensions you can:

  • Create configuration or other files (SSL certificates, etc) on the server machine
  • Install packages/programs
  • Start/stop services
  • Execute custom commands
  • And much more

This can help you set up a new or existing server, as far as the configuration on the server machine is concerned, without manually having to do it yourself every time you deploy a new application.

Since I'm most familiar with how .ebextensions work using Java .war's deployed to AWS Elastic Beanstalk, here's a quick rundown on how to set it up for your Java environment: in your web project's WebContent folder, create a folder called “.ebextensions”. Then within the .ebextensions folder you can create one or many files ending with a .config extension. Any and all .config files within the .ebextensions folder (ProjectRoot/WebContent/.ebextensions/*.config) will get executed after you upload the .war file for your project to AWS Elastic Beanstalk.
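
For illustration, here's a minimal hedged sketch of a .config file (the file path, package, and command here are purely illustrative):

files:
  "/etc/myapp/app.properties":
    mode: "000644"
    owner: root
    group: root
    content: |
      env=production

packages:
  yum:
    htop: []

commands:
  01_log_deploy:
    command: echo "deployed" >> /var/log/deployments.log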

So if you’re using AWS Elastic Beanstalk and aren’t yet using .ebextensions, I would highly recommend you look into it. There is more documentation here: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/ebextensions.html

Vaadin: Executing custom JavaScript from a thread, or loading custom JavaScript functions into global scope

As you may already know, in Vaadin you can use the com.vaadin.ui.JavaScript.getCurrent().execute(…) method to execute custom JavaScript in the client browser.

  1. Executing custom JavaScript from a thread:
    The above works well as long as the JavaScript execute() method is called on the main UI thread. However, if JavaScript.getCurrent().execute() is called from a background thread, the JavaScript won't get executed until there is a periodic refresh of the UI or a UI event (triggered by the user, such as a mouse click somewhere). This can cause seemingly erratic behavior, with the JavaScript executing at unpredictable times. (Side note: any Vaadin UI access/manipulation from a background thread needs to be done inside com.vaadin.UI.getCurrent().access(new Runnable() { … }); also note that you want to do your time-consuming heavy lifting first, such as retrieving data from the back end, and only then go into the access(…) runnable to manipulate the UI.)

    To get around this problem, simply use Vaadin Push. You'll need to enable push if you haven't already (see the Vaadin documentation on how). Then, depending on the push mode you've configured, you may need to call com.vaadin.ui.UI.getCurrent().push() yourself: in manual mode you must call push(), while in automatic mode it is called for you after the runnable you pass to UI.access(…) finishes executing. So call JavaScript.execute(…) first, and then UI.push() last. Example:

    final UI ui = UI.getCurrent(); // capture the UI here; UI.getCurrent() returns null in a raw background thread
    new Thread() {
        public void run() {
            // do your long-running background work first...
            ui.access(new Runnable() {
                public void run() {
                    JavaScript.getCurrent().execute("alert('Background Task Complete!');");
                    ui.push(); // required in manual push mode; automatic mode pushes after the runnable finishes
                }
            });
        }
    }.start();
  2. Loading custom JavaScript functions into global scope:
    This is extremely useful so you can define a JavaScript function which you can use later from JavaScript.getCurrent().execute(…), such as inside an event handler (a button click, for example). However, the JavaScript function will need to be in the global scope, which you can achieve by injecting it into the <head> tag of the HTML page served by Vaadin. To do this, use the following code while your Vaadin view is being created, or while it is being enter()'ed.

    StringBuilder script = new StringBuilder();
    script.append("var head = document.getElementsByTagName('head')[0];")
          .append("var script = document.createElement('script');")
          .append("script.text='function sayHello() { alert(\"Hello!\"); }';") // .text (not .content) so the code actually runs when the element is appended
          .append("head.appendChild(script);");
    JavaScript.getCurrent().execute(script.toString());

    Note: as I mentioned, a JavaScript function can only be loaded this way when the view is being created, or in the view’s enter() method. To create a function this way AFTER the page is already loaded (such as through some event), you’ll need to use Vaadin Push, and call UI.getCurrent().push() after the JavaScript.getCurrent().execute() even though you’re not on a background thread.

  3. You can define a function in the script tag being created above, which can then be called later on through JavaScript.getCurrent().execute("sayHello();"), perhaps inside a Vaadin button click listener.
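
    For example, here's a hedged sketch of calling that function from a Vaadin 7 button click listener:

    Button b = new Button("Say Hello");
    b.addClickListener(new Button.ClickListener() {
        public void buttonClick(Button.ClickEvent event) {
            JavaScript.getCurrent().execute("sayHello();");
        }
    });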

Enjoy!

Getting rid of Jetty related exceptions when using Vaadin Push with Tomcat

At least in Vaadin 7.3.x and 7.4.x, running on Tomcat with Push enabled, you may see several exceptions in your console having to do with Jetty. Even though you’re not using Jetty, this is happening because Vaadin is getting tricked into assuming that Jetty is being used, since a Jetty library is found in the Java classpath.

Vaadin does realize after the exceptions that Jetty isn't actually there and continues gracefully. But though these exceptions are harmless, they're still unsettling to see constantly in your Tomcat logs. As I mentioned, this happens because there is some Jetty library in your Java classpath. Thus the obvious solution to get rid of these exceptions is to remove all Jetty libraries from your Java classpath. Search for any .jar's that have “jetty” in the name, and remove them.

If you're using Ivy or Maven dependency management, the library may be downloaded as a sub-dependency of another dependency. You will need to go through all your dependencies to check which ones also reference Jetty. The culprits in our case were vaadin-client-compiler and HtmlUnit (both reference Jetty for Jetty-related things, but since we're not using Jetty anyway, getting rid of the Jetty library should cause no harm).

We use Ivy, and I found how to exclude certain sub-dependencies, which is simple. Use the <exclude> tag in your ivy.xml wherever Jetty is a sub-dependency. An example follows:

<exclude org="org.eclipse.jetty" name="*" />
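
For context, here's a sketch of how the exclude sits inside a dependency declaration (the revision number is just illustrative):

<dependency org="com.vaadin" name="vaadin-client-compiler" rev="7.4.5">
  <exclude org="org.eclipse.jetty" name="*" />
</dependency>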

And voilà! No more random Jetty-related exceptions from Vaadin.

AngularJS creating multiple $resource endpoints in a service

So this had me baffled for a bit. When using Angular's $resource in a service, you map it to a particular URL with (optional) parameters defined for that URL. So how can you have multiple resources, each with its own unique URL, mapped in the same service?

For example, here’s a service called User which maps to the URL “/api/users/…”:

angular.module('myApp')
  .factory('User', function($resource) {
    return $resource('/api/users/:id/:controller', {
      id: '@_id'
    }, {
      changePassword: {
        method: 'PUT',
        params: {
          controller: 'password'
        }
      },
      get: {
        method: 'GET',
        params: {
          id: 'me'
        }
      }
    });
  });

As you can see, there's one resource in this service, mapped to one URL. So how can I add another URL with its own $resource?

The answer turned out to be pretty simple, actually. What you have to do is return an object from the service that contains multiple $resources; each property of the object maps to a different $resource. So, for example:

'use strict';

angular.module('myApp')
  .factory('User', function ($resource) {
    return {
      WithId: $resource(
        '/api/users/:id/:controller',
        {
          id: '@_id'
        },
        {
          changePassword: {
            method: 'PUT',
            params: {
              controller: 'password'
            }
          },
        }
      ),
      Misc: $resource(
        '/api/users/misc/:controller',
        null,
        {
          generateResetPasswordToken: {
            method: 'POST',
            params: {
              controller: 'generateResetPasswordToken'
            }
          },
        }
      ),
    };
  });

In the above example, in the “User” service, we have two resources that can be accessed. To access “changePassword”, we can use User.WithId.changePassword (which maps to a particular URL), and to access “generateResetPasswordToken” we use User.Misc.generateResetPasswordToken (which maps to another URL).
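
As a hedged usage sketch from a controller that injects the User service (the payload fields and callbacks here are illustrative; adapt them to your API):

// change the logged-in user's password
User.WithId.changePassword({ id: userId }, {
  oldPassword: oldPass,
  newPassword: newPass
}, function() {
  // success
});

// request a password reset token
User.Misc.generateResetPasswordToken({}, { email: 'user@example.com' }, function(data) {
  // handle the response
});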

Voilà!

Adding an asterisk to required fields in Bootstrap

This is pretty useful in giving a visual indication to the user when a form field is required.

The CSS:

.form-group.required .control-label:after {
  content:"*";
  color:red;
}

Then in your HTML:

<form>
  <div class="form-group required">
    <label class="control-label">Required Field</label>
    ...
  </div>
</form>

There will now be a nice little “*” after the label for “Required Field”.