Getting the number of active sessions in Vaadin

Last week I explored how to get the number of active sessions that Tomcat reports: https://adeelscorner.wordpress.com/2016/03/13/getting-the-number-of-active-sessions-in-tomcat/

The problem with that metric is that some of those sessions, or even all of them, may not actually be active users. What if someone is connected to your Java web application (running on Tomcat in this case), but has just been sitting idle for hours or even days? They’re not really using your application per se. However, Tomcat will still report them as active sessions, especially if your application has a heartbeat or a long-polling push connection established.

So an additional metric that is useful in determining whether your Java application is actually in use is the number of active sessions within the application itself. In our case we’re using Vaadin, so that’s what this post is geared towards. However, you’ll see the general idea here and can apply it to any user-based application.

Before I continue, I want to point out a useful Vaadin add-on to measure idle time: https://vaadin.com/directory#!addon/idle. It’s pretty handy for checking the actual idle time of a user.

The other important thing to understand first is how Vaadin sessions relate to browser tabs. Each Vaadin session is based on a cookie, so if a browser has multiple tabs open to the same application server running Vaadin, the Vaadin session is shared among those tabs, because the cookie is shared. Hence, when a Vaadin session ends (perhaps when the user logs out, or times out), all of the user’s browser tabs corresponding to that session are disconnected.

Let’s get started. At the core of how this will work is a public static map which we can use to track each browser tab that belongs to a particular Vaadin session, and how long it’s been active or idle. Since this is a static data structure, it is shared among all sessions at the Tomcat level, so it’s pretty handy for tracking all sessions on that particular Tomcat server. (Because many request threads will read and write this map concurrently, the code below uses a ConcurrentHashMap rather than a plain HashMap.) We’ll be able to track not only how many browser tabs a user has open to your application, but how long they’ve been idle in each tab.

I created a class called ApplicationLevelTracker to assist, which is pretty simple really. It’s as follows:

import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Set;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import javafx.util.Pair; // or any simple immutable pair class of your own

public class ApplicationLevelTracker {
 // Maps each browser tab's unique ID to a pair of (is idle?, timestamp of last state change).
 // A ConcurrentHashMap is used because many request threads access this map at once.
 public static ConcurrentHashMap<String, Pair<Boolean, Long>> browserTabUniqueIDToIdleTimeHashMap = new ConcurrentHashMap<String, Pair<Boolean, Long>>();

 private String thisBrowserTabUniqueID;

 public ApplicationLevelTracker(int userID, String vaadinSessionID) {
  thisBrowserTabUniqueID = userID + ":" + vaadinSessionID + ":" + UUID.randomUUID().toString();
  registerIdle(false);
 }

 public void registerIdle(boolean idle) {
  Date date = new Date();
  browserTabUniqueIDToIdleTimeHashMap.put(thisBrowserTabUniqueID, new Pair<Boolean, Long>(idle, date.getTime()));
 }

 public void deRegisterThisTab() {
  browserTabUniqueIDToIdleTimeHashMap.remove(thisBrowserTabUniqueID);
 }

 public void deRegisterAllTabsForThisVaadinSession() {
  String userID = thisBrowserTabUniqueID.split(":")[0];
  String vaadinSessionID = thisBrowserTabUniqueID.split(":")[1];
  Set<String> keySet = ApplicationLevelTracker.browserTabUniqueIDToIdleTimeHashMap.keySet();
  List<String> browserTabUniqueIDsToRemove = new ArrayList<String>();
  for (String key : keySet) {
   String _userID = key.split(":")[0];
   String _vaadinSessionID = key.split(":")[1];
   if (_userID.equals(userID) && _vaadinSessionID.equals(vaadinSessionID))
    browserTabUniqueIDsToRemove.add(key);
  }
  for (String browserTabUniqueID : browserTabUniqueIDsToRemove) {
   browserTabUniqueIDToIdleTimeHashMap.remove(browserTabUniqueID);
  }
 }
}

Here’s a breakdown of the class and its methods. The constructor takes the user’s ID (any identifier you have for your user; in our case an integer) and the Vaadin session ID. Using these two keys plus a random ID, each browser tab that a user opens to your application gets its own unique ID, created in the constructor.

When a user goes idle, registerIdle(true) is called. When a user is active, registerIdle(false) is called. This way we can track whether they are active or idle in the corresponding browser tab. These methods will be used by the Vaadin Idle add-on (more on that soon).

When a browser tab is closed, deRegisterThisTab() is called. When the Vaadin session is ended (for example when a user logs out of your application, or when Vaadin determines the user has timed out), deRegisterAllTabsForThisVaadinSession() is meant to be called, to clean all of that session’s tabs out of the tracking map.

Next, to actually make use of ApplicationLevelTracker, you’ll have to set it up and use it properly in your Vaadin UI class, where your application is created and each user’s instance is initiated. So in the UI class that creates your Vaadin application (the class usually extends Vaadin’s “UI”), you’ll need to set up the following code for all this to come together:

First, define this variable in your UI class:

private ApplicationLevelTracker applicationLevelTracker = null;

Next, when the views are being created, after the user has logged in, you’ll want to set up ApplicationLevelTracker. Note that you should arrange for this code to run on every new creation of the UI class, and not just on the creation of a new Vaadin session, since the same Vaadin session can have multiple UIs attached to it:

applicationLevelTracker = new ApplicationLevelTracker(user.USER_ID, VaadinSession.getCurrent().getSession().getId());
Idle.track(UI.getCurrent(), 120000, new Idle.Listener() {
 @Override
 public void userInactive() {
  applicationLevelTracker.registerIdle(true);
 }
 @Override
 public void userActive() {
  applicationLevelTracker.registerIdle(false);
 }
});

Notice that the Vaadin Idle add-on is used here. We create an instance of ApplicationLevelTracker with the user’s ID and the Vaadin session ID, which registers this browser tab in static (shared) memory. Then, whenever the user goes idle or becomes active again, as reported by the Idle add-on, we record that through ApplicationLevelTracker.

Now that things are set up, we need to be sure they are cleaned up when the user goes away.

You can use this neat trick to register whenever that particular browser tab is closed, or when the user navigates away from your application. This is where you’ll want to call applicationLevelTracker.deRegisterThisTab():

JavaScript.getCurrent().addFunction("browserIsLeaving", new JavaScriptFunction() {
 @Override
 public void call(JsonArray arguments) {
  if (applicationLevelTracker != null)
   applicationLevelTracker.deRegisterThisTab();
 }
});
Page.getCurrent().getJavaScript().execute("window.onbeforeunload = function (e) { var e = e || window.event; browserIsLeaving(); return; };");

You’ll also want to call applicationLevelTracker.deRegisterThisTab() when the Vaadin detach listener is fired. A detach listener corresponds to a UI, not the whole Vaadin session, so it fires when one particular UI instance, belonging to a single browser tab (which may be one of several tabs in the same Vaadin session), is detected to have gone away, and that particular browser tab can then be cleaned up. You can add the following block of code in the init() method of your Vaadin UI class:

addDetachListener(new DetachListener() {
 @Override
 public void detach(DetachEvent event) {
  if (applicationLevelTracker != null)
   applicationLevelTracker.deRegisterThisTab();
 }
});

Lastly, wherever in your code the user logs out, and you end the Vaadin session, you’ll want to call applicationLevelTracker.deRegisterAllTabsForThisVaadinSession():

if (applicationLevelTracker != null)
 applicationLevelTracker.deRegisterAllTabsForThisVaadinSession();

And that’s it! Now you have an application-wide map which fully tracks all individual browser tabs open to your application, along with information on how long the user has been active or idle in each tab. In my next blog post I’ll cover how to actually pull this information and use it in a meaningful way to determine how many users are using your application at any given moment.

Before wrapping up I want to point out one final thing on how to interpret this tracking information after you’ve started collecting it. If a user is connected but has been idle for too long (say 2 hours, or several days), you can reliably conclude that they are not using the application. Either they are still connected but left their machine running and browser open, just sitting idle, or they’re gone (no longer connected) and their session was never cleaned up for whatever reason. If the user appears to have been active for a very long time (say, several days), something is wrong too, unless they really have been working every minute in your application for the last several days without a break. In that case you can assume the user is not really there: their browser connection went away and their tracking entry didn’t get cleaned up for whatever reason (for example, Vaadin didn’t fire the detach listener reliably, which I’ve noticed happens sometimes, though it may be fixed in newer Vaadin versions). The only case where you can be pretty certain that a user is using your application at the present moment is if they’ve been idle for only a short period (say within a 30-minute window), or active within a similarly short period.
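To make that concrete, here is a minimal sketch of that kind of bucketing, using the map from ApplicationLevelTracker above. This is not the reporting code I’ll cover in the next post, and the 30-minute threshold is an arbitrary assumption:

// Minimal sketch: count browser tabs whose idle/active state last changed
// within the past 30 minutes. The threshold is arbitrary; tune it as needed.
public static int countRecentlyActiveTabs() {
 long now = new Date().getTime();
 long thirtyMinutesInMillis = 30L * 60L * 1000L;
 int count = 0;
 for (Pair<Boolean, Long> state : ApplicationLevelTracker.browserTabUniqueIDToIdleTimeHashMap.values()) {
  if (now - state.getValue() <= thirtyMinutesInMillis)
   count++;
 }
 return count;
}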

Getting the number of active sessions in Tomcat

This is really useful when you’re trying to determine if your Java application running on a Tomcat server is currently serving any clients. For example, if your server is in use, you can avoid taking it offline for maintenance or an upgrade. Or you can use it to gauge the load on a server.

Essentially, Tomcat has some internal statistics that can be accessed using JMX (Java Management Extensions). More information and details are available here: https://tomcat.apache.org/tomcat-8.0-doc/monitoring.html

The Tomcat JMX stats can be accessed in several ways. This article will cover how to access them from within the Tomcat server itself, which was our requirement. Since the application deployed to Tomcat runs locally on the same JVM as Tomcat, JMX can be accessed directly from the Java code, and no remote setup is required.

The code is as simple as this:

import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// These calls throw JMX-related checked exceptions, so wrap them in a try/catch or declare them.
MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer();
// "context" is your webapp's context path; "/" here means the root context.
ObjectName objectName = new ObjectName("Catalina:type=Manager,context=/,host=localhost");
int activeSessions = (Integer) mBeanServer.getAttribute(objectName, "activeSessions");

And that’s it! You’ll get the number of active sessions Tomcat is reporting. And since this runs within the application deployed to Tomcat, no remote setup or custom JVM arguments for Tomcat are required.

Mongoose model hasOwnProperty() workaround

When using a Mongoose model, if you have a reference to a queried document, you’ll notice that you cannot call hasOwnProperty() to check whether it has a certain property. For example, if you have a User model whose schema has a property called “verified”, calling user.hasOwnProperty('verified') will not return true!

This is because the object you get back from a Mongoose query is a Mongoose document, not a plain JavaScript object: the actual data lives in an internal object, and the document exposes it through getters, so the properties are not defined directly on the document itself.

An easy and quick workaround is like so:

var userObject = user.toObject();
if(userObject.hasOwnProperty('verified'))
  //...

And now hasOwnProperty() will return true.

Mongoose save() to update value in array

This had me scratching my head for a while until I figured it out. Let’s say you have a MongoDB model which has an array in it. If you’ve got a reference to a document which you pulled using Mongoose, and you update a value in the array and then call document.save(), the array will not get updated!

Turns out you need to call the document.markModified(arrayName) function before calling document.save(), since Mongoose does not reliably detect changes inside array fields on its own.

Here’s an example. Let’s say your model looks like this:

var UserSchema = new Schema({
  //...
  emailSubscriptions: {
    type: Array,
    'default': ["all"]
  }
});
exports.model = mongoose.model('User', UserSchema);

So after you get a user and modify an array (in this case emailSubscriptions), you’ll need to mark it as modified before saving it:

User.findById(userId, function(err, user) {
  user.emailSubscriptions = ['none'];
  user.markModified('emailSubscriptions');
  user.save(function(err, user) {
    //...
  });
});

socket.io: Getting all connected sockets in Node/Express

Sometimes you need a list of all connected sockets in your server side API. This could be useful, for example, if you want to loop through all the sockets and emit something when a certain event happens, such as a data model updating.

As of socket.io version 1.3, you have access to all connected sockets programmatically through “socketio.sockets.sockets”.

So for example, if you needed all sockets in an Express controller running on node.js, you could access them this way.

In your server side app.js, socket.io is usually configured in express this way:

var app = express();
var server = require('http').createServer(app); // the HTTP server that both Express and socket.io attach to
//...
var socketio = require('socket.io')(server, {
  //...
});

And then, you can save the reference for socketio for later using Express’s app.set() and app.get(). In app.js you would do:

app.set('socketio', socketio);

Then in an express controller, you can access socketio like so:

exports.whateverController = function(req, res, next) {
  var socketio = req.app.get('socketio');
  //...
}

And then you can loop through the sockets:

var sockets = socketio.sockets.sockets;
for(var socketId in sockets)
{
  var socket = sockets[socketId]; //loop through and do whatever with each connected socket
  //...
}
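For example, if you wanted to notify every connected client when some data model changes, you could emit to each socket inside that loop. The “modelUpdated” event name and payload here are hypothetical, just for illustration:

var sockets = socketio.sockets.sockets;
for(var socketId in sockets)
{
  // "modelUpdated" is a made-up event name; emit whatever your clients actually listen for
  sockets[socketId].emit('modelUpdated', { what: 'someDataModel' });
}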

Simple, right?!

Vaadin upgrade from 7.4.5 to 7.6.1

I was recently tasked with upgrading our Vaadin applications at work from version 7.4.5 to version 7.6.1. I came across an issue that took me a long time to figure out, so I thought I’d share in case anyone else is running into the same.

The process went pretty smoothly at first. We use Ivy for our projects, and simply updating the Vaadin version in ivy.xml allowed all the new library files to be downloaded. Then I re-compiled the widgetset, and took one of our applications out for a spin.

Lo and behold I got an error stating: “Failed to load the bootstrap javascript: ./VAADIN/vaadinBootstrap.js?v=7.6.1”. I did some investigating, and noticed that the vaadinBootstrap.js served to the browser looked like a binary file, whereas the browser was trying to interpret it as a JavaScript file.

I eventually realized that vaadinBootstrap.js is served compressed. I did everything I could to verify that the compression filters were correct, but for some reason the browser did not seem to be uncompressing vaadinBootstrap.js before trying to interpret it.

Turns out that in prior versions of Vaadin, the Vaadin demo applications shipped with a compression filter, net.sf.ehcache.constructs.web.filter.GzipFilter, configured in web.xml. It compressed all *.js, *.html, and *.css files before serving them (which only happens if the browser/client announces support for it in the HTTP headers). HOWEVER, the new version of Vaadin (7.6.1) appears to have compression built in! Hence, the content (in this case vaadinBootstrap.js) was compressed twice: the browser would uncompress it once, as it should, but that would just yield a second compressed file! Sheesh. I had almost pulled my hair out before I figured this out.

The solution was to get rid of all the custom compression filters using net.sf.ehcache.constructs.web.filter.GzipFilter from our web.xml. And then everything started working smoothly.
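For reference, the kind of declaration to look for is roughly the following (a hypothetical example; the filter-name and url-patterns in your web.xml may differ):

<!-- Remove custom compression filters like this one, since Vaadin 7.6.x compresses content itself -->
<filter>
  <filter-name>GzipFilter</filter-name>
  <filter-class>net.sf.ehcache.constructs.web.filter.GzipFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>GzipFilter</filter-name>
  <url-pattern>*.js</url-pattern>
</filter-mapping>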

Policy based routing for PPTP VPN client on DD-WRT router

This post is a change from my usual software programming related posts. I usually don’t write about networking related issues, but I struggled with this issue a bit recently so I thought I’d write about it.

In plain English, when I say policy based routing, I mean to accomplish the following: if you want a computer or device on your home or work network to go over a VPN connection, but you don’t want the rest of the computers or devices on your home or work network to use the VPN connection, you can accomplish that by setting up policy based routing on your router. I’ve also heard this being referred to as “source based routing” or “split tunneling”.

This post specifically addresses split tunneling on a DD-WRT router (an awesome Linux based router) that has a PPTP VPN client connection configured. I want some computers on my network to have all their traffic routed through the VPN connection, but not the rest of the network.

First of all, you’ll need to configure the PPTP VPN client connection on your DD-WRT router. You can read up on doing that here: https://www.dd-wrt.com/wiki/index.php/Static_PPTP_VPN_Client.

Next, assuming your PPTP VPN client connection is configured and working properly, you’ll want to set up some rules for your router. (By the way, figuring out whether the PPTP VPN client connection worked wasn’t easy or obvious: I had to ssh into the router, run “ps w” to ensure pptp and pppd were running, and run “ifconfig” to ensure the “ppp0” interface was up, which was telling of a successful PPTP connection.) Assuming your home or work network address is 192.168.0.0/255.255.255.0, and you have two computers whose traffic you want routed through the VPN, 192.168.0.105 and 192.168.0.106, you’ll need to run the following commands:

iptables --table nat --append POSTROUTING --out-interface ppp0 --jump MASQUERADE
ip rule add from 192.168.0.105 table 200
ip route add default via $(ifconfig ppp0 | sed -n 's/.*inet *addr:\([0-9\.]*\).*/\1/p') dev ppp0 table 200
ip rule add from 192.168.0.106 table 201
ip route add default via $(ifconfig ppp0 | sed -n 's/.*inet *addr:\([0-9\.]*\).*/\1/p') dev ppp0 table 201
ip route flush cache

The first command preps your router for forwarding (NAT’ing) packets over ppp0 (the interface for the PPTP VPN client). The next few commands add rules for all traffic from 192.168.0.105 and 192.168.0.106 to get sent over ppp0 and hence the VPN connection (note that the ifconfig/sed sub-command gets the IP address for ppp0, which is the IP given to your PPTP VPN client by the PPTP VPN server). And the final command flushes the routing cache on the router so the new rules take effect.

And that’s it!

However, you may not want to have to manually run these commands whenever your router is restarted. So I set up the following startup script to run these commands for me, and also to add them to the “ip-up” script for the PPTP client, so they’re executed if the interface goes down and comes back up (due to a network connectivity interruption or something else). You can enter the script commands below into your startup script on your DD-WRT router, and they will ensure that if your router is ever restarted, or the PPTP client connection drops and gets reconnected, the special routes will get set up again. (You can create the startup script like so: https://www.dd-wrt.com/wiki/index.php/Startup_Scripts):

sleep 120
iptables --table nat --append POSTROUTING --out-interface ppp0 --jump MASQUERADE
ip rule add from 192.168.0.105 table 200
ip route add default via $(ifconfig ppp0 | sed -n 's/.*inet *addr:\([0-9\.]*\).*/\1/p') dev ppp0 table 200
ip rule add from 192.168.0.106 table 201
ip route add default via $(ifconfig ppp0 | sed -n 's/.*inet *addr:\([0-9\.]*\).*/\1/p') dev ppp0 table 201
ip route flush cache
head -n -1 /tmp/pptpd_client/ip-up > /tmp/pptpd_client/ip-up.new
mv /tmp/pptpd_client/ip-up.new /tmp/pptpd_client/ip-up
echo "iptables --table nat --append POSTROUTING --out-interface ppp0 --jump MASQUERADE" >> /tmp/pptpd_client/ip-up
echo "ip rule delete from 0/0 to 0/0 table 200" >> /tmp/pptpd_client/ip-up
echo "ip route delete table 200" >> /tmp/pptpd_client/ip-up
echo "ip rule add from 192.168.0.105 table 200" >> /tmp/pptpd_client/ip-up
echo "ip route add default via \$(ifconfig ppp0 | sed -n 's/.*inet *addr:\([0-9\.]*\).*/\1/p') dev ppp0 table 200" >> /tmp/pptpd_client/ip-up
echo "ip rule delete from 0/0 to 0/0 table 201" >> /tmp/pptpd_client/ip-up
echo "ip route delete table 201" >> /tmp/pptpd_client/ip-up
echo "ip rule add from 192.168.0.106 table 201" >> /tmp/pptpd_client/ip-up
echo "ip route add default via \$(ifconfig ppp0 | sed -n 's/.*inet *addr:\([0-9\.]*\).*/\1/p') dev ppp0 table 201" >> /tmp/pptpd_client/ip-up
chmod a+x /tmp/pptpd_client/ip-up

Just a quick breakdown of the script above. The first command, sleep 120, ensures that enough time passes for the router to boot up and the PPTP VPN client to connect at least once. The next few commands are the same as above: they set up the policy based routing for the current PPTP connection. The next several commands write to the “/tmp/pptpd_client/ip-up” script, which is run whenever the PPTP connection is re-established (in case it drops due to some temporary network connectivity issue). Note that the final line of the default ip-up script is “exit 0”, so any commands appended after that line would never get executed; hence the head command, which grabs everything from the current ip-up script except the last line. It’s also important to note that this ip-up script is re-created any time the DD-WRT router restarts, which is why the startup script re-appends the commands to it on every restart.

Paginating documents/items in MEAN

If you’ve ever scrolled through the Facebook newsfeed, you’ve noticed that the topmost stories are the most recent ones, and as you scroll to the bottom, older ones get loaded over and over as you keep scrolling.

This feature is, first of all, kind of “cool”, and fits perfectly in a single page application. It’s also useful from a performance standpoint, since not all of the documents (the items your page displays, such as Facebook news stories, classified ads, search results, etc.) need to be loaded up front all at once when the user first lands on the page.

Paginating your documents in a MEAN application can be accomplished fairly easily, though it isn’t necessarily obvious. So I thought I’d write about the process I took, and the code I wrote, to get it done.

Let’s start with the AngularJS side of things. I used the ngInfiniteScroll module (https://github.com/sroze/ngInfiniteScroll) to accomplish the continuous scrolling effect. It’s pretty simple to configure, so please read up on the documentation. Essentially it can just be wrapped around an Angular ng-repeat directive, and be configured with a function to call to fetch more documents when the bottom of the page is reached (ngInfiniteScroll does all the calculations internally). Here is a sketch of what it might look like for getting more “classifieds” from the database and adding them to the view (the exact markup will depend on your own templates):

<div infinite-scroll="getMorePosted()" infinite-scroll-disabled="loadingClassifieds">
  <div ng-repeat="classified in postedClassifieds">
    <!-- render each classified here -->
  </div>
  <div ng-show="loadingClassifieds" class="loading-classifieds">Loading classifieds...</div>
</div>

So in the example above, the getMorePosted() function in your controller is called whenever ngInfiniteScroll detects that the user is at the bottom of the page. Note that ngInfiniteScroll will most likely trigger right when the user lands on the page, unless you pre-load some documents in your controller. I elected to have getMorePosted() fetch both the initial set of documents and every successive set as well. Depending on how you set things up, this may or may not make a difference, but it did for me.

My getMorePosted() function in the controller looks like this. Note that it uses a factory called Classified, which I’ll define later, to do the actual fetching of classifieds from the API (Express/MongoDB on the server side of MEAN):

$scope.initialLoadDone = false;
$scope.loadingClassifieds = false;
$scope.getMorePosted = function() {
  if($scope.loadingClassifieds) return;

  $scope.loadingClassifieds = true;

  if(!$scope.initialLoadDone) {
    Classified.getPosted(function (postedClassifieds) {
      $scope.postedClassifieds = postedClassifieds;
      $scope.loadingClassifieds = false;
      $scope.initialLoadDone = true;
    });
  }
  else
  {
    Classified.getMorePosted(function(err,numberOfClassifiedsGotten) {
      $scope.loadingClassifieds = false;
      if(numberOfClassifiedsGotten==0)
        $scope.noMoreClassifieds=true;
    });
  }
}

A couple of things to note here. While the classifieds are being loaded, the $scope.loadingClassifieds flag is set to true. This stops ngInfiniteScroll from attempting to load more classifieds when the bottom is reached, and it can also be used to show the user a message that loading is underway (in case it doesn’t happen near-instantly due to a slow connection). Furthermore, getMorePosted() tracks, through the $scope.noMoreClassifieds flag, whether the end has been reached (if ever, depending on how many thousands or millions of documents are in your database, and how far down the user scrolls). It does this by checking the number of documents returned; if that number is zero, the end of pagination has been reached.

This is how getPosted() and getMorePosted() look in the Classified factory:

app.factory('Classified', function Classified(ClassifiedResource, ...) {
      var postedClassifieds = [];
      var postedClassifiedsLoaded = false;
      //...
      getPosted: function(callback) {
          var cb = callback || angular.noop;
          if (postedClassifiedsLoaded) {
            //console.log("Sending already-loaded postedClassifieds");
            return cb(postedClassifieds);
          } else {
            return ClassifiedResource.Posted.query(
              function(_postedClassifieds) {
                //console.log("Loading postedClassifieds from webservice");
                postedClassifieds = _postedClassifieds;
                postedClassifiedsLoaded = true;
                return cb(postedClassifieds);
              },
              function(err) {
                return cb(err);
              }).$promise;
          }
        },
        getMorePosted: function(callback) {
          var cb = callback || angular.noop;
          if (!postedClassifiedsLoaded)
            cb();
          else {
            return ClassifiedResource.Posted.query({
                startTime: new Date(postedClassifieds[postedClassifieds.length - 1].posted).getTime()
              },
              function(_postedClassifieds) {
                //console.log("Loading more postedClassifieds from webservice, from before startTime=" + postedClassifieds[postedClassifieds.length-1].posted);
                for (var i = 0; i < _postedClassifieds.length; i++)
                  replaceOrInsertInArray(postedClassifieds, _postedClassifieds[i], true); // a helper of mine (not shown) that merges each fetched classified into the cached list
                return cb(null, _postedClassifieds.length);
              },
              function(err) {
                return cb(err);
              }).$promise;
          }
        },
        //...

And this is how ClassifiedResource looks:

app.factory('ClassifiedResource', function ($resource) {
  return {
    Posted: $resource(
      '/api/classified/getPosted/:startTime',
      {},
      {}
    )
  };
});

So note that in my setup, the factory loads and maintains the list of documents (postedClassifieds) in memory. getPosted() fetches the first set of documents, and returns the cached list if it has already been loaded. getMorePosted() is where the magic happens: it takes the timestamp of the last classified in the list and transmits it to the API (server side, Express), which then loads the next “page” of documents (classifieds in this case) from before that timestamp.

Before we continue to the server side, it’s important to note that you’ll need a field to sort by in descending order (or ascending, if you want the oldest documents up front). A timestamp value works great. Otherwise a MongoDB ID could work too, since those are roughly incremental. It will depend on your data. In my case, a timestamp called “posted” was available in my data, and very consistent: documents could be removed from the past, but never added with a past timestamp (and even if they were, it wouldn’t be a huge problem). So it works just fine with this pagination approach.
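For reference, the “posted” field used for this ordering can just be a Date path in the schema, something like this (a hypothetical sketch; your schema will differ):

var ClassifiedSchema = new Schema({
  //...
  posted: {
    type: Date,
    'default': null // null until the classified is actually posted; such documents are filtered out by the query below
  }
});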

Here is what the server side looks like in Express/NodeJS:

var Classified = require('./classified.model');
exports.getPosted = function(req, res) {
  var startTime = req.params.startTime ? req.params.startTime : null;

  var query = Classified.find(
      {posted: { $ne: null }}
  );
  query.sort('-posted -_id');
  query.limit(20);
  if(startTime)
    query.where({posted: {$lt: new Date().setTime(startTime)}});
  query
    .exec(function (err, classifieds) {
      if(err) { ... }
      return res.status(200).json(classifieds);
    });

}

Note that “Classified” defines my model, which is queried from using Mongoose. I limit the number of documents returned to 20, which works well for my application. And the query is sorted in descending order by the “posted” field, which is a timestamp. You’ll notice a where clause added, which gets only the classifieds posted before the time sent in (“startTime”) from the UI, so that works in conjunction with the sort and returns 20 more classifieds before the “startTime”. Also note that I send the timestamp in milliseconds, which gives a nice clean number that can be sent down to the API from the UI.

And, that’s it!

Something I want to add: on the client side (in AngularJS), if you end up loading too many documents/items in your ng-repeat, application performance will greatly degrade. With ngInfiniteScroll, all items on the page are kept once they’re loaded, even if they’re not currently in view. There’s another module, https://github.com/angular-ui/ui-scroll, which will destroy and re-create items as they scroll in and out of view in the user’s browser. This will vastly improve performance when a lot of documents are loaded.

Giving the user a message before resizing images through ng-file-upload

This had been bothering me for a while until I stumbled upon an answer that led me to a solution.

With ng-file-upload (https://github.com/danialfarid/ng-file-upload) for AngularJS, you have the capability to resize images using the ngf-resize directive. It’s very handy since it puts the CPU burden of resizing giant images onto the user’s machine, rather than on your own server (which is where it would fall if you resized the images after they were uploaded).

HOWEVER, the problem with ngf-resize is that when resizing starts, especially if the user selects multiple images at once, the user’s browser hangs while the images resize. The bigger the image, the longer it takes. This is irritating, and also confusing, leaving the user to wonder what is going on. For the longest time I was trying to figure out how to give the user a message before the resizing actually starts.

I eventually stumbled upon the ngf-before-model-change directive, part of ng-file-upload. It lets you define a function that is called before the model changes (and before the images start resizing). This is the perfect place to put up a message telling the user that their images are about to be resized, and to sit tight for the next few seconds.

Then the ngf-select directive can be used to define a function which is called AFTER the resizing is complete, and this is where you can remove the message to the user.

A full example follows. On the JavaScript side of things (in your AngularJS controller) you would do:

$scope.beforeResizingImages = function(images)
{
  blockUI.start($translate.instant('FORMATTING_IMAGES')+"...");
  $scope.$apply();
}

$scope.afterResizingImages = function(images)
{
  blockUI.stop();
}

And then in HTML:

<div ngf-before-model-change="beforeResizingImages($files)" ngf-select="afterResizingImages($files)"></div>

And that’s it! beforeResizingImages() and afterResizingImages() will be called in the correct order, putting the message up before resizing images (and before the browser hangs for a few seconds for the CPU intensive process), and taking it off after resizing. Note that I use angular-block-ui (https://github.com/McNull/angular-block-ui) to block the UI and put the message up, and of course angular-translate to translate the text for the block message.

Simple AngularJS directive

So I found many articles on using AngularJS directives, which go into great detail for very complex directives. I couldn’t find any on just a relatively simple directive, so I thought I’d write about it.

First of all, an AngularJS directive defines a reusable HTML tag that you can use in your HTML files. It’s very useful if you want to create some complicated custom tags, or even just simple ones, that you can reuse everywhere.

In my case, I just wanted to give my users a message telling them to add the system email to their spam filter allow list, so no system-generated emails get blocked. I wanted to do this everywhere an operation involved a system-generated email to the user. I wanted to style the message in a particular way, translate it according to the user’s preferences, and not copy/paste it in several places; otherwise, if I ever wanted to change the styling or wording, I’d have to do it in many places. This is where an AngularJS directive becomes useful: I can define it in one place, and use it in many places.

This is how the directive is defined in JavaScript:

.directive('emailSpamMessage', function() {
  return {
    template: '<span class="label label-danger">{{"NOTE" | translate | uppercase}}</span> {{"SPAM_MESSAGE" | translate:{HOSTING_NAME:CONSTANTS.HOSTING_NAME,SYSTEM_EMAIL:CONSTANTS.SYSTEM_EMAIL} }}'
  };
})

And this is how it would look in HTML:

<email-spam-message></email-spam-message>

And that’s it! Note that the directive in JavaScript is defined as emailSpamMessage, but in HTML as “email-spam-message”. AngularJS does the conversion for you.

Also note that the controller associated with the HTML that the custom directive is used in needs to have the correct dependencies for it to work fully. For my particular example, the controller needs $translate (angular-translate) and a CONSTANTS variable included, as sketched below.
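For example, a controller backing a page that uses the directive might look like this (a hypothetical sketch; the controller and constant names are illustrative):

app.controller('ContactCtrl', function($scope, $translate, CONSTANTS) {
  // The directive template has no isolated scope, so it reads
  // CONSTANTS.HOSTING_NAME and CONSTANTS.SYSTEM_EMAIL from this scope.
  $scope.CONSTANTS = CONSTANTS;
  //...
});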