The story of Async in JavaScript

By Tim Davis – Director of Development

In my last post I talked about some JavaScript concepts that will be useful when starting out with Node.js. This time I would like to talk about a potentially awkward part of JavaScript: asynchronous (async) operations. It is a bit of a long story, but it does have a happy ending.

So what is an asynchronous operation? Basically, it means a function or command that goes off and does its own thing while the rest of the code continues. It can be really useful or really annoying depending on the circumstances.

You may have used async code if you ever did AJAX calls to Domino web agents for lookups on web pages. The rest of the page loads while the lookup to the web agent comes back, and the user is happy because part of the page updates in the background. This is brilliant and is the classic use case for an async function.

This asynchronous behaviour is built into JavaScript through and through, and you need to bear it in mind when you do any programming in Node.

So how does this async behaviour manifest itself? Let’s look at an example. Suppose we have an asynchronous function that goes and does a lookup somewhere.

function doAsyncLookup() {
    ... do the lookup ...
    console.log("got data");
}

Then suppose we call this function from our main code, something like this:

console.log("start");
doAsyncLookup();
console.log("finish");

The output will be this:

start
finish
got data

By the time the lookup has completed it is too late: the code has already moved on.
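If you want to try this yourself, here is a minimal sketch where the lookup is simulated with a one-second setTimeout (the timer just stands in for a real lookup):

function doAsyncLookup() {
    // simulate a slow lookup with a one second timer
    setTimeout( function () {
        console.log("got data");
    }, 1000 );
}

console.log("start");
doAsyncLookup();
console.log("finish");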

So how do you handle something like this? How can you possibly control your processes if things finish on their own?

The original way of handling this in JavaScript was with ‘callbacks’.

A callback is a function that the async function calls when it is finished. So instead of your code continuing after the async function is called, it continues inside the async function.

In our example a callback could look something like this:

function myCallback() {
    console.log("finish");
}

console.log("start");
doAsyncLookup( myCallback );

Now, the output would be this:

start
got data
finish

This is much better. Usually, the callback function receives the results of the async function as a parameter, so it can act on those results. So in examples of callbacks around the web, you might see something like:

function myCallback( myResults ) { 
    displayResults( myResults );
    console.log("finish"); 
} 

console.log("start"); 
doAsyncLookup( myCallback );
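For reference, a callback-style doAsyncLookup could be sketched like this, again simulating the lookup with setTimeout (the delay and the dummy results are just placeholders):

function doAsyncLookup( callback ) {
    setTimeout( function () {
        console.log("got data");
        let myResults = [ "order 00101", "order 00102" ];  // dummy results
        callback( myResults );                             // hand the results to the callback
    }, 1000 );
}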

Often the callback function doesn’t need to be defined separately; instead it is passed inline, as an anonymous function, in the call to the async function, as a sort of shorthand. So you will probably see a lot of examples looking like this:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    displayResults( myResults ); 
    console.log("finish"); 
} );

This is all great, but the problem with callbacks is that you can easily get a confusing chain of callbacks within callbacks within callbacks if you want to do other asynchronous stuff with the results.

For example, suppose you do a lookup to get a list, then want to look up something else for each item in the list, and then maybe update a record based on that lookup, and finally write updates to the screen in a UI framework. In a JavaScript environment it is highly likely that each of these operations is asynchronous. You end up with a confusing chain of functions calling functions calling functions stretching off to the right, with all the attendant risk of coding errors that you would expect:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    lookupItemDetails( myResults, function ( myDetails ) {
        saveDetails( myDetails, function ( saveStatus ) {
            updateUIDisplay( saveStatus, function ( updatedOK ) {
                console.log("finish");
            } );
        } );
    } );    
} );

It gets even worse if you add in error handling. We may have solved the async problem, but at the cost of some terrible code patterns.

Well, after putting up with this for a while, the JavaScript world came up with a better version of callbacks, called Promises.

Promises are much more readable than callbacks and have some useful additional features. You pass the results of each function to the next with a ‘then’, and you can just add more ‘thens’ on the end if you have more async things to do.

Our nightmare-indented example above becomes something like this (here I am using the popular arrow notation for functions, see my previous article for more on them):

console.log("start"); 
doAsyncLookup()
.then( (myResults) => { return lookupItemDetails(myResults) } )
.then( (myDetails) => { return saveDetails(myDetails) } )
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } )
.then( (updatedOK) => { console.log("finish") } );

This is much nicer. We don’t have all that ugly nesting.

Error handling is easier, too, because you can add a ‘catch’ to the end (or in the middle if you need) and it is all still much clearer and more understandable:

console.log("start"); 
doAsyncLookup() 
.then( (myResults) => { return lookupItemDetails(myResults) } ) 
.then( (myDetails) => { return saveDetails(myDetails) } ) 
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } ) 
.then( (updatedOK) => { console.log("finish") } )
.catch( (err) => { ... handle err ... } );

What is really neat is that you can create your own promises from existing callbacks, so you can tidy up any older messy async functions.
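As a rough sketch, wrapping our earlier callback-style lookup in a promise might look something like this (the wrapper name is just illustrative, and a real version would also call reject if the lookup reported an error):

function doAsyncLookupPromise() {
    return new Promise( (resolve, reject) => {
        // call the old callback-style function and fulfil the promise with its results
        doAsyncLookup( (myResults) => {
            resolve(myResults);
        } );
    } );
}

// now we can use it with .then just like any other promise
doAsyncLookupPromise().then( (myResults) => displayResults(myResults) );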

Promises also have some great added features which help with other async problems. For example, with ‘Promise.all’ you can fire off a list of async calls together and wait for them all to finish before carrying on.
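A sketch of what this might look like (lookupOrders and lookupCustomers are just hypothetical placeholders for any independent async functions that return promises):

Promise.all( [ lookupOrders(), lookupCustomers() ] )
.then( ( [myOrders, myCustomers] ) => {
    // this runs only when both lookups have finished
    console.log("got " + myOrders.length + " orders and " + myCustomers.length + " customers");
} )
.catch( (err) => { console.log(err) } );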

So promises solved the callback nesting problem, but The Gods of JavaScript were still not satisfied.

Even with all these improvements, this code is still too ‘asynchronous’. It is still a chain of function after function and you have to pay attention to what is passed from one to the next, and remember that these are all asynchronous and be careful with your error handling.

Once upon a time, Willy Wonka gave us ‘square sweets that look round’, and so now TGOJ have given us ‘asynchronous functions that look synchronous’.

The latest and greatest advance in async handling is Async/Await.

All you need to do is make your main function ‘async’, and you can ‘await’ all your promises:

async function myAsyncStuff() {
    console.log("start");
    let myResults = await doAsyncLookup();
    let myDetails = await lookupItemDetails(myResults);
    let saveStatus = await saveDetails(myDetails);
    let updatedOK = await updateUIDisplay(saveStatus);
    console.log("finish");
}

How cool is this? Each asynchronous function runs in order, with no messy callbacks or chains of ‘thens’. They all sit in their own line of code just like regular functions. You can do things in between them, and you can wrap them in the usual try/catch error handling blocks. All the async handling stuff is gone, and this is done with just two little keywords.
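For example, a sketch of the same function with error handling added:

async function myAsyncStuff() {
    try {
        console.log("start");
        let myResults = await doAsyncLookup();
        let myDetails = await lookupItemDetails(myResults);
        let saveStatus = await saveDetails(myDetails);
        let updatedOK = await updateUIDisplay(saveStatus);
        console.log("finish");
    } catch (err) {
        // any promise that rejects above lands here
        console.log(err);
    }
}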

Plus, the functions are all still promises, so you can do promise-y things with them if you want to, and you can create and ‘await’ your own promises to refactor and revive old callback code.

Async/Await is fully supported by Node.js, by popular UI frameworks like Angular and React, and by all modern browsers.

One of the biggest headaches in JavaScript development now has an elegant and usable solution and they all lived happily ever after.

I hope you enjoyed this little story. I told you it had a happy ending.

Things to know with JavaScript – JSON, let, const, and arrows

By Tim Davis – Director of Development

While we eagerly await the arrival of the npm domino-db module with Domino 10, I thought I would spend this instalment of my blog series on Node.js talking a little about some concepts in JavaScript that are used a lot in Node development. If you haven’t looked at JavaScript much since Domino web forms or XPages SSJS then you may not have come across them. You will see them in examples and articles on Node around the web and will want to use them in your own projects as they will make your life easier when starting out.

JSON

The first is JavaScript Object Notation, or JSON, which I talked about briefly in my blog on NoSQL.

Basically, all the data in Node is JSON. This makes it great for storing in backend NoSQL data stores and for handling in front-end JavaScript frameworks.

JSON is a very readable way of describing data, and it looks like this:

{
    "orderNo" : "00101",
    "orderLines": [
        { "quantity" : 7 },
        { "quantity" : 11 },
        { "quantity" : 3 }
    ],
    "status": "Invoiced"
}

An object is denoted by the curly brackets { }. An array is denoted by square brackets [ ]. The items inside the object are name-value pairs. Items are separated by commas in both objects and arrays.

You can type this sort of thing directly into your code if you like, but you would normally just get it from somewhere else, like a database.

You reference the object by name and can access or update its properties using dot notation:

currentOrder.status = "Invoiced";

if ( orderLine.quantity > 100 ) {
    ... your code here ...
}

JSON is hierarchical and you can nest objects inside objects, and arrays inside objects inside arrays, etc, etc. It is a bit like XML in that way, but much easier to read.

You can access nested objects inside arrays inside objects (etc) using dot notation like this:

orders[i].orderLines[j].quantity = 10;

JSON arrays are just regular arrays, so you can loop through them:

for ( let i = 0; i < currentOrder.orderLines.length; i++ ) {
    ...
}

One great side effect of JSON being so readable is that it is easily converted to and from strings. Converting to strings is a great way to pass data around between different systems. You can use the built-in JSON object to do this:

JSON.stringify( currentOrder )

JSON.parse( '{ "status":"Invoiced", "orderNo":"00101" }' )

You should use these methods because they handle all the formatting and escaping of special characters for you.
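For example, a quick round trip with the order from earlier looks like this:

let orderString = JSON.stringify( currentOrder );   // object -> string
let orderCopy = JSON.parse( orderString );          // string -> object
console.log( orderCopy.status );                    // "Invoiced"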

Let and Const

These are two new ways of defining variables in JavaScript and you will see them a lot. You will already be familiar with using ‘var’, like this:

var count = 0;

You use ‘let’ and ‘const’ in the same way as ‘var’:

let count = 0;

const domain = "mydomain.com";

‘Let’ and ‘const’ are similar to ‘var’, but they behave in a way that helps you avoid problems in your code.

As you can probably guess, ‘const’ is for constant values that will never change. If you try to set another value you will get an error. This will help prevent you from overwriting something important in another part of your code.
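For example, trying to give a const a new value throws an error:

const domain = "mydomain.com";
domain = "anotherdomain.com";   // TypeError: Assignment to constant variable.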

What ‘let’ does that is different from ‘var’ is more subtle and is all about the variable’s scope, i.e. where in your code it exists.

If you define a variable using ‘var’, then it exists everywhere inside the enclosing function, i.e. everywhere inside the function you are currently in. This is a very wide area, and it is easy to forget and lose track of variable names and values and get confused. This is especially common when you have lots of loops inside loops inside one function.

With ‘let’, a variable only exists inside the current set of curly brackets, i.e. the current code block. So for example a variable would only exist inside a particular loop and not exist outside in the parent function. This helps avoid all sorts of conflicts and overwriting errors.

Here is an example of how let and var work differently inside and outside curly brackets. Notice how ‘var’ overwrites the value while ‘let’ does not:

let cat = "meow";
var dog = "bark";

console.log("cat "+cat); // will be meow
console.log("dog "+dog); // will be bark

if (true) {
    let cat = "scratch";
    var dog = "wag";
    console.log("cat "+cat); // will be scratch
    console.log("dog "+dog); // will be wag
}

console.log("cat "+cat); // will be meow
console.log("dog "+dog); // will be wag

The cat inside the curly brackets is a different cat from the one outside them, but the dog is the same everywhere. This is why the dog gets confused.

Arrow functions

If you read articles on Node or look at code examples, you may have seen functions defined something like this, with the ‘=>’ arrow notation:

( arg1, arg2 ) => { ... }

This is more or less equivalent to

function( arg1, arg2 ) { ... }

The main difference is in how the keyword ‘this’ works.

In a regular function(), ‘this’ refers to whatever calls the function. An arrow function does not get its own ‘this’; it keeps the ‘this’ from the surrounding code where it was defined. This is a pretty arcane distinction, and worth reading up on, but it is very useful in avoiding coding errors.

As an example, when developing in Node you often have functions defined inside methods as callbacks. A callback is a function that is called when a process has finished, usually to go ahead and do something with the results of that process. These usually look something like this:

myOrderDb.getOrders( function( myOrders ) {
    ... do something with myOrders ...
} );

Here you can see that the parameter in the getOrders method is a function. This is a callback function which is called when getOrders finishes and takes the result, ‘myOrders’, and does something with it.

Consider the following example code. I want my app (i.e. ‘this’) to get records from a database and, when that is done, to update its display:

this.showLoadingMessage();
let myOrdersDb = this.getDb();
myOrdersDb.getOrders( function(myOrders) {
    // the following line does not work
    this.displayOrders(myOrders);
} );

So what is wrong? I am expecting ‘this’ to refer to my app so I can go ahead and update the app display with the orders, but the ‘this’ inside a regular function gets rebound when the function is called, so here it no longer points to my app (depending on how getOrders calls the callback, it might be myOrdersDb, the global object, or undefined). The object that ‘this’ refers to gets overwritten inside regular functions. Keeping track of ‘this’ can be a nightmare when you have a complicated series of callbacks, and this is an easy mistake to make.

However, if you use an arrow function then ‘this’ is not overwritten. It is the same inside the function as it was outside it. So an arrow function version of our code would be:

this.showLoadingMessage();
let myOrdersDb = this.getDb();
myOrdersDb.getOrders( (myOrders) => {
    this.displayOrders(myOrders);
} );

This is only a small change, but now the ‘this’ inside the function is the app, same as outside it, and my call to the app’s displayOrders method will work. With arrow functions everything behaves much more how you would expect it to.

Next Up

In this post I have touched on callbacks, and next time I plan to expand on this topic and talk in detail about a classic bugbear in JavaScript development, asynchronous functions.

My Collabsphere 2018 video goes live: “What is Node.js?”

By Tim Davis – Director of Development

Last month, I presented a session at Collabsphere 2018 called ‘What is Node.js?’

In it I gave an introduction to Node and covered how to set it up and create a simple web server. I also talked about how Domino 10 will integrate with it, and about some cool new features of JavaScript you may not be aware of.

Luckily my session was recorded and the video is now available on the YouTube Collabsphere channel.

The slides from this session are also available on slideshare.

If you are interested in learning about Node.js (especially with the npm integration coming up in Domino 10) then it’s worth a look.

Many thanks to Richard Moy and the Collabsphere team for putting on such a great show!

Improving Node.js with Express

By Tim Davis – Director of Development

In my previous post I talked about what Node.js is and described how to create a very simple Node web server. In this post I would like to build on that and look at how to flesh out our server into something more substantial and how to use add-on modules.

To do this we will use the Express module as an example. This is a middleware module that provides a huge variety of pre-built web server functions, and is used on most Node web servers. It is the ‘E’ in the MEAN/MERN stacks.

Why should we use Express, since we already have our web server working? Really for three main reasons. The first is so you don’t have to write all the web stuff yourself in raw Node.js. There is nothing stopping you doing this if you need something very specific, but Express provides it all out of the box. One particularly useful feature is being able to easily set up ‘routes’. Routes are mappings from readable url paths to the more complicated things happening in the background, rather like the Web Site Rules in your Domino server. Express also provides lots of other useful functions for handling requests and responses and all that.

The second reason is that it is one of the most popular Node modules ever. I don’t mean therefore you should use it because everyone else does, but its popularity means that it is a de facto standard and many other modules are designed to integrate with it. This leads us nicely back around to the Node integration with Domino 10. The Node integration with Domino 10 is based on the Loopback adaptor module. Loopback is built to work with Express and is maintained by StrongLoop who are an IBM company, and now StrongLoop are looking after Express as well. Everything fits together.

The third and final reason is a selfish one for you as a developer. If you can build your Node server with Express, then you are halfway towards the classic full JavaScript stack, and it is a small step from there to creating sites with all the froody new client-side frameworks such as Angular and React. Also, you will be able to use Domino 10 as your back-end full-stack datastore and build DEAN/NERD apps.

So, in this post I will take you through how to turn our simple local web server into a proper Node.js app, capable of running stand-alone (e.g. maybe one day in a docker container), and then modify the code to use the Express module. This can form the basis of almost any web server project in the future.

First of all we should take a minute or two to set up our project more fully. We do this by running a few simple commands with the npm package manager that we installed alongside Node last time.

We first need to create one special file to sit alongside our server.js, namely ‘package.json’. This is a text file which contains various configuration settings for our app, and because we want to use an add-on module we especially need its ‘dependencies’ section.

What is great is we don’t have to create this ourselves. It will be automatically created by npm. In our project folder, we type the following in a terminal or command window:

npm init

This prompts you for the details of your app, such as name, version, description, etc. You can type stuff in or just press enter to accept the defaults for now. When this is done we will have our package.json created for us. It isn’t very interesting to look at yet.

We don’t have to edit this file ourselves, this is done automatically by npm when we install things.

First, let’s install a local version of Node.js into our project folder. We installed Node globally last time from the download, but a local version will keep everything contained within our project folder. It makes our project more portable, and we can control versions, etc.

We install Node into our project folder like this:

npm install node

The progress is displayed as npm does its job. We may get some warnings, but we don’t need to worry about these for now.

If we look in our project folder we will see a new folder has been created, ‘node_modules’. This has our Node install in it. Also, if we look inside our package.json file we will see that it has been updated. There is a new "dependencies" section which lists our new "node" install, and a "start" script which is used to start our server with the command "node server.js". You may remember this command from last time; it is how we started our simple Node server.
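At this point the package.json might look roughly like this (the name, description and version numbers will vary depending on what you entered in npm init and which versions npm installed):

{
  "name": "my-node-server",
  "version": "1.0.0",
  "description": "A simple Node web server",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "node": "^10.0.0"
  }
}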

We can now start our server using this package.json. We will do this using npm, like this:

npm start

This command runs the “start” script in our package.json, which we saw runs the “node server.js” command which we typed manually last time, and our server starts up just like before, listening away. You can imagine how using a package.json file gives us much more control over how our Node app runs.

Next we want to add the Express module. You can probably already guess what to type.

npm install express

When this is finished and we look inside our package.json, we have a new dependency listed: “express”. We also have many more folders in our node_modules subfolder. Express has a whole load of other modules that it uses and they have all been installed automatically for us by npm.

Now we have everything we need to start using Express functions in our server.js, so let’s look at what code we need.

First we ‘require’ Express. We don’t need to require HTTP any more, because Express will handle all this for us. So we can change our require line to this:

const express = require('express')

Next thing to do is to create an Express ‘app’, which will handle all our web stuff. This is done with this line:

const app = express()

Our simple web server currently sends back a Hello World message when someone visits. Let’s modify our code to use Express instead of the native Node HTTP module we used last time.

This is how Express sends back a Hello World message:

app.get('/', (req, res) => { 
   res.send('Hello World!') 
} )

Hopefully you can see what this is doing; it looks very similar to the http.createServer method we used previously.

The ‘app.get’ means it will listen for regular browser GET requests. If we were sending in form data, we would probably want to instead listen for a POST request with ‘app.post’.

The ‘/’ is the route path pattern that it is listening for, in this case just the root of the server. This path pattern matching is where the power of Express comes in. We can have multiple ‘app.get’ commands matching different paths to map to different things, and we can use wildcards and other clever features to both match and get information out of the incoming URLs. These are the ‘routes’ I mentioned earlier, sort of the equivalent of Domino Web Site Rules. They make it easy to keep the various, often complex, functions of a web site separate and understandable. I will talk more about what we can do with routes in a future blog.

So our app will listen for a browser request hitting the root of the server, e.g. http://127.0.0.1:3000, which we used last time. The rest of the command is telling the app what to do with it. It is a function (using the arrow ‘=>’ notation) and it takes the request (‘req’) and the response (‘res’) as arguments. We are simply going to send back our Hello World message in the response.
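Just to illustrate the routing idea, here is a sketch of what a couple of routes might look like (the /orders path, its parameter, and its handler are hypothetical and not part of our server):

// the homepage route, same as before
app.get('/', (req, res) => {
    res.send('Hello World!')
} )

// a hypothetical route with a parameter in the URL
app.get('/orders/:orderNo', (req, res) => {
    res.send('You asked for order ' + req.params.orderNo)
} )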

So we now have our simple route set up. The last thing we need to do, same as last time, is to tell our app to start listening:

app.listen(port, hostname, () => { 
   console.log(`Server running at http://${hostname}:${port}/`); 
});

You may notice that this is exactly the same code as last time, except we tell the ‘app’ to listen instead of the ‘server’. This helps illustrate how well Express is built on Node and how integrated it all is.

Our new updated server.js should look like this:

const express = require('express');
const hostname = '127.0.0.1';
const port = 3000;
const app = express();
app.get('/', (req, res) => {
   res.send("Hello World!")
});
app.listen(port, hostname, () => {
   console.log(`Server running at http://${hostname}:${port}/`);
});

This is one less line than before. If we start the server again by typing ‘npm start’ and then browse to http://127.0.0.1:3000, we get our Hello World message!

Now, this is all well and good, but aren’t we just at the same place as when we started? Our Node server is still just saying Hello World, what was the point of all this?

Well, what we have now, that we did not have before, is the basis of a framework for building proper, sophisticated web applications. We can use npm to manage the app and its add-ons and dependencies, the app is self-contained so we can move it around or containerise it, and we have a fully-featured middleware (i.e. Express) framework ready to handle all our web requests.

Using this basic structure, we can build all sorts of things. We would certainly start by adding the upcoming Domino 10 connector to access our Domino data in new ways, and then we could add Angular or React (or your favourite JS client platform) to build a cool modern web UI, or we could make it the server side of a mobile app. If your CIO is talking about microservices, then we can use it to microserve Domino data. If your CIO is talking about REST then we can provide higher-level business logic than the low-level Domino REST API.

In my next blog I plan to talk about more things we can do with Node, such as displaying web pages, about how it handles data (i.e. JSON), and about how writing an app is both similar and different to writing an app in Domino.