How to use the new Domino Query Language

By Tim Davis – Director of Development.

Before I talk about building a Domino-based API gateway on Node.js, I thought it would be a good idea to expand a little on how to use the new Domino Query Language (DQL) to run queries on Domino documents.

DQL is the new query language that comes with Domino 10. It is separate from the other query syntaxes in Domino, such as the Full Text Search syntax, or @formula selection criteria. You can use it in places other than Node.js, such as LotusScript and Java, and you can even test your queries from the command line.

What is cool about DQL is that it runs using the new Design Catalog system database, GQFdsgn.nsf. This does a lot of heavy lifting in the background with pre-built design data which makes running the queries more efficient. I am told this will not always be part of the DQL engine, but for now you do need it.

The DQL processing engine also does clever prioritizing of the components of the query to make the process as efficient as possible. You can see how this is done using the ‘explain’ tool (see below).

The syntax is pretty straightforward. Here are some simple examples to give you an idea:

customer = 'Bob Smith'
age > 21
form = 'Person' and type = 'Client'

You have all the usual operators such as =, >, <, <=, >=, and, or, not. You can wrap elements in parentheses ( ) to control the precedence.
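
For example, parentheses make the grouping explicit when you mix 'and' with 'or' (the field names here are just illustrative):

form = 'Order' and ( status = 'Pending' or status = 'Shipped' )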

String values are enclosed in single quotes. You can duplicate quotes in your strings to escape them, e.g.:

name = 'James O''Brien'

You can also use lists with the ‘in’ operator, like this:

form = 'Person' and type in ( 'Client', 'Supplier', 'Agency' )

DQL comes with some useful built-in functions that you can use in your queries:

@Created
@ModifiedInThisFile
@DocumentUniqueID

You can search using dates, which are included using the built-in @dt function and are in RFC3339 format. This looks a bit clunky but is pretty clear once you get used to it:

@Created > @dt('2018-01-01T00:00:00+0500')

A cool feature is being able to search in view columns. You use the view name or alias and the programmatic column name, like this:

'PendingOrders'.orderNo = '000101'

These view and column searches are fast because the design analysis of the database has already been done and stored in the design catalog. However, if the database isn't in the catalog then this kind of search will fail. Adding a database to the catalog is an optional step you take if you want to perform this kind of query, but being in the catalog also helps the performance of regular searches, so it is always recommended.

One thing to bear in mind is that currently you can only search on regular summary fields in the documents, which means you can’t search in rich text items yet. I am told this is coming later.

Also, if your query generates an error during the processing then you will get no results returned. You don’t get partial results (unlike some SQL behavior). This is likely not a problem provided you know to expect it.

With all of this you should have everything you need to run pretty much any query you want, with plenty of flexibility and control.

Talking of control, DQL comes with a command line DomQuery tool to test your queries. It has an -e (explain) option that will deconstruct how your query will be processed and detail how it will perform. It splits out each element of the search and lists the order in which they are processed, how long each takes, and how the results get filtered down step by step. You can use this to test out your searches and tune them to make sure they run efficiently and perform well for the users.

However, just in case you do end up running a query that gets out of hand there are limits to protect the server. The limits are:

  • MaxDocsScanned – maximum allowable NSF documents scanned (default: 200,000)
  • MaxEntriesScanned – maximum allowable index entries scanned (default: 200,000)
  • MaxMsecs – maximum time consumed in milliseconds (default: 120,000, i.e. 2 minutes)

You can override these limits system-wide using notes.ini settings if you need to, but you should review your query's behaviour first before considering this.

The DQL engine is continuously being enhanced and the new Domino 10.0.1 version adds support for query arguments (also known as substitution variables). You use query arguments when building your queries to help avoid code injection. This works by defining specifically which elements of a query are user input rather than just constructing the query as one string. Without this, a user could maliciously input some DQL syntax into the query and access unexpected documents or data.
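
To make the idea concrete, here is a small sketch. The placeholder style shown is an assumption for illustration, not the documented syntax, so check the 10.0.1 documentation for the exact form:

// Unsafe: concatenating user input straight into the query string means any
// DQL syntax the user types becomes part of the query itself.
const unsafeQuery = "form = 'Person' and name = '" + userInput + "'";

// With query arguments, the query string only ever contains a placeholder and
// the user's value is supplied separately, so the engine treats it strictly as
// data. (The '?personName' placeholder style is an assumption for illustration.)
const parameterisedQuery = "form = 'Person' and name = ?personName";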

I hope all this helps give you an idea of how easy and fast it will be to run DQL queries from a Node.js app. In my next post I plan to bring this together with the domino-db module and build an API gateway.

Using Node.js to access Domino

You will be pleased to hear that the Domino 10 module for Node.js is now in beta (you can request early access here) and in this article I would like to show you how easy it will be to use.

Before we get started using the Domino module in Node, we do need to do some admin stuff on our Domino server. It has to be running Domino 10 and we have to install the Proton add-in, and we also have to create the Design Catalog including at least one database. (The Proton add-in listens on its own port, by default 3002, and is separate from HTTP.)

More detail on these admin tasks will be covered in a companion blog, but here I would like to focus on the app-dev side.

Let's build a simple Node.js system that will read some Domino documents and get field values, and also create some new documents. In my next post we can integrate this into the basic Node stack we developed in my previous post to create an actual API.

Assuming our Domino server is all set up, the first thing we do is install the new dominodb module. When this goes live later in the year, we will do this with the usual  ‘npm install dominodb’, but while we are still in beta we install it from the downloaded beta package:

npm install ../packages/domino-domino-db-1.0.0.tgz --save

Once we have the dominodb connector installed, we can use it in our Node server.js code. The sequence is almost exactly the same as in LotusScript. First, we connect to Domino and open the Server (sort of equivalent to a NotesSession), then from this we open our Database, and then using the Database we can access and update Documents.

We start the same way as with all other Node.js modules, with a ‘require’:

const { useServer } = require('@domino/domino-db');

This 'useServer' is a function that will create our server connection. We give it the Domino server's hostname and the Proton port, and it connects to our server like this:

const serverConfig = {
    hostName: 'server.mydomain.com',
    connection: { port: '3002' }
};
useServer( serverConfig )

The useServer function returns a 'server' object. This is inside a JavaScript promise (I talked about promises in an earlier post), so we can use the server object inside the promise to open our database like this:

useServer( serverConfig ).then( async server => {
    const database = await server.useDatabase( databaseConfig );

The databaseConfig contains the filepath of our database on the server:

const databaseConfig = {
    filePath: 'orders.nsf'
};

Notice how we are using 'async/await' to simplify the asynchronous nature of getting the database. This code looks very similar to the LotusScript equivalent.

So we have created a server connection and opened a database. Now we want to get some documents.

The domino-db module uses the new Domino Query Language (DQL) which comes with Domino 10. This can perform very efficient high-performance queries on Domino databases.

It is very much like getting a document collection with a db.search( ) in LotusScript, and the query syntax is similar to selection formulas, for example:

const coll = await database.bulkReadDocuments({
    query: "Form = 'Order' and Customer = 'ACME'"
});

This returns a collection of documents in a JSON array. These documents do not automatically contain all the items from the Notes documents. By default they only have some basic metadata, i.e. unid, created date, and modified date:

{
  "documents":[
    {
      "@unid":"A2504056F3AF6EFE8025833100549873",
      "@created": {"type":"datetime","data":"2018-10-25T15:24:00.51Z"},
      "@modified": {"type":"datetime","data":"2018-10-25T15:24:00.52Z"}
    }
  ],
  "errors":0,
  "documentRange":{"total":1,"start":0,"count":1}
}

To get field data from the documents, you need to specify which fields you want returned, and this is done with an array of itemNames:

const coll = await database.bulkReadDocuments({
  query: "Form = 'Order' and Customer = 'ACME'",
  itemNames: ['OrderNo', 'Qty', 'Price']
});

Then you get the field data included in your query results:

{
"documents":[
  {
    "@unid":"A2504056F3AF6EFE8025833100549873",
    "@created":{"type":"datetime","data":"2018-10-25T15:24:00.51Z"},
    "@modified":{"type":"datetime","data":"2018-10-25T15:24:00.52Z"},
    "OrderNo":"001234",
    "Qty":2,
    "Price":25.49
  }
],
"errors":0,
"documentRange":{"total":1,"start":0,"count":1}
}

We can output the document field data with JSON dot notation in a console.log (with the JSON.stringify method to format it properly):

console.log( "Order No: " + JSON.stringify( coll.documents[0].OrderNo ) );

So this is how to read a collection of documents. Now let's look at creating a new document, which is even easier.

First we build our document data in an ‘options’ object, including the Form name and all our other item values:

const newDocOptions = {
  document: {
    Form: 'Order',
    Customer: 'ACME',
    OrderNo: '000343',
    Qty: 10,
    Price: 49.99
  }
};

Then we simply call the 'createDocument' method, like this:

const unid = await database.createDocument( newDocOptions );

We don't need to call a 'save' on the document; it is all handled in one operation. The return value is the unid of the newly-created document, so we can act on it again to update it if we want to.

To update a document, we get it by its unid with the ‘useDocument’ method, and then we can call ‘replaceItems’ on it.

This takes the new values in a ‘replaceItems’ parameter, but it only needs to contain the fields to update:

const document = await database.useDocument({
  unid: coll.documents[0]['@unid']
});
await document.replaceItems({
  replaceItems: { Qty: 11 }
});

Here we are using the '@unid' value from the collection we got earlier. This is a bit fiddly because '@' is not a valid character in JavaScript dot notation, but we can use the square bracket notation to get around this.

We have now read documents, created a document, and updated a document. In addition to these, the domino-db module provides a variety of other methods for reading, creating and updating documents, allowing you to do anything you would need.

Also, the DQL syntax has sophisticated search facilities that can return large numbers of documents and can even search in views and columns.

There are a couple of things to be aware of:

The first is that by default the connection to Domino is unsecured, but you can easily make it use TLS/SSL. You may need help from your Domino admin to provide you with the certificate and key files, but this is all explained in the documentation.
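
As a rough sketch only - the option and file names below are assumptions for illustration rather than the definitive configuration, so check the domino-db documentation for the exact structure - a secured connection adds certificate material to the server config:

const fs = require('fs');

// Assumed option names, for illustration only.
const secureServerConfig = {
    hostName: 'server.mydomain.com',
    connection: {
        port: '3002',
        secure: true
    },
    credentials: {
        rootCertificate: fs.readFileSync('./certs/ca.crt'),
        clientCertificate: fs.readFileSync('./certs/client.crt'),
        clientKey: fs.readFileSync('./certs/client.key')
    }
};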

The second thing is that currently this access is either Anonymous or is identified as a single Domino user account via the TLS/SSL client certificate (which maps to a Domino Person document). So in the beta there is no per-user authentication as such, but this will be coming later with OAuth support.

I hope you found this both useful and exciting. In my next article I plan to show how to build these domino-db methods into a Node HTTP server and create an API gateway.

 

Building a stack in Node.js

[Update: Since I posted this article, I have been informed that the domino-db node integration will be available in beta with Domino10 in a week’s time!]

With Domino 10 nearly upon us, and the Node integration hopefully following soon after, I thought I would talk about building a full-stack application in Node.js, covering how modern JavaScript UI frameworks can be built on top of Node.js and integrated with Domino in the background as a datastore.

This is all part of the IBM and HCL strategy of having Node.js as a parallel development platform alongside the standard Domino development tools, with Node providing a way for web developers to extend existing Domino apps and datastores. However, you don’t have to wait for the Domino node module to start learning about this.

If you consider the development stack acronyms of DEAN and DERN (or NERD), the UI framework is the 'A' and the 'R'. These initials refer to Angular and React respectively, but generally apply to any JavaScript framework, and there are many very good ones.

The main advantage of a development stack is that the UI layer can be independent of the middleware, server, and datastore layers and so you can replace or modify the UI without impacting the rest of the architecture. As an example, you might want to do this to extend an existing web app onto a mobile platform which may require a different UI.

A review of popular frameworks and their features and advantages is beyond the scope of this post, and I may return to that later, but for now I would like to get into the broader topic of how this all fits together, i.e. how does a JavaScript UI sit 'on top of' Node?

The first thing to understand is that UI frameworks such as Angular, React, et al, are nothing to do with Node.js. They are not part of Node.js and do not require Node.js to work. They all run perfectly happily on any web server, including Domino. When we use a UI framework with Node, Node is essentially acting as a web server, serving the framework web pages to a browser which then loads the pages and runs the framework code.

You can run framework components inside a regular Domino web form on a Domino server, but the advantage of using the JavaScript development stack is two-fold. First, the stack is all JavaScript, so it makes it easy to talk between layers because all the data is JSON. Second, we are opening up Domino to a new development arena, with an established community, support resources and third-party products.

JavaScript UI frameworks come in all sizes and flavours, but they all work essentially the same way. You embed some code in your HTML web pages and then call the framework library (usually a js file) to enable the framework within these pages. The key thing is that everything is held in the usual web files and folders which are stored on the web server file system. Your server (whether it is Domino, Node, or Apache) simply serves up these files when a browser hits it.

In my earlier post, I talked about how you can use Express to provide middleware routes in your Node server. Here is an example which uses a static route to serve up the contents of the ‘public’ folder on the server:

const express = require('express');
const app = express();
app.use(express.static(__dirname + '/public'));
const hostname = '127.0.0.1';
const port = 3000;
app.listen(port, hostname, () => {
   console.log(`Server running at http://${hostname}:${port}/`);
});

By default, the server opens the index.html file in that folder. Luckily, this is the default filename used in most UI frameworks, so it is very easy to put your framework web app and all its files and folders in the public folder and run it from your website. The folder structure could look something like this:

node
|   server.js
|   package.json
|
+---node_modules
|   \---(node stuff...)
|
+---public
|      index.html
|      other framework files...
|      +---framework folders...
|      \---etc

The Node server runs from the server.js (using modules such as Express which are installed under the node_modules folder), and serves up the framework UI starting with the index.html in the public folder.

So we have a nice UI running on our node server. Now we need to connect this UI to our data backend. In a Domino web form we would typically make a call to a web agent which would return whatever Domino data we need. Here in Node we do pretty much the same thing, specifically our UI will make a call to an API to get the data.

Node is great at two important roles - being a web server and being an API gateway. So we can add middleware routes to our Node server to handle these API calls.

Let’s suppose we want to get a list of people to display on-screen. In our UI framework we will make a call to an API on the same Node server, something like this:

let uri = myDomain + "/api/people";
request.get( uri ).then( displayTheResults );

So this request would hit our Node server with the route “/api/people”. Currently, our server doesn’t know what to do with this, so let’s add in a new Express route to handle it.

app.get('/api/people', async (req, res) => {
   let myResults = await doLookup();
   res.json( myResults );
})

This will catch requests coming in to “myDomain.com/api/people”, and then our middleware code does the lookup and sends the results back in the response as JSON.

Notice how these methods handle all the JSON stuff for us, making it easy to pass the data back and forth.

This is all we have to do to get our server responding to the API call. Now we need to look at how we get data from the backend, i.e. what happens in our doLookup().

If we suppose we are going to be using the upcoming domino-db module, then we can use its methods to run queries on Domino data. The details of exactly how the domino-db module works are still under NDA right now, but it could be something like this:

function doLookup() {
   const { domServer } = require('domino-db');
   // return the promise so our route handler can await the results
   return domServer(serverConfig).then(
      async (server) => {
         const db = await server.useDatabase(dbConfig);
         const docs = await db.bulkReadDocuments(DQLquery);
         return docs;
      });
}

This example function would run a Domino Query Language query on a Domino database which returns a JSON array of document objects, i.e. ‘docs’.

We pass this back as the return value of our doLookup function and this will be sent out as the response from our API route.

Back in our front-end framework UI, we receive this JSON data in our ‘request.get’ call and we can then go ahead and call our ‘displayTheResults’ function.

This really is pretty much all there is to it. We can easily get data, in a standard JSON format, all the way from the datastore to the UI front-end, without needing fiddly data manipulation and all in only a few lines of code.

Also, what is great is that the UI framework is separate from the Express routes and these are separate from the datastore. They can all be on different servers, and we can even have different UIs accessing the same API routes to get at the same data, for example separate web apps and mobile apps.

I hope this gives you an idea of how we will be able to go about building a full-stack application in a DEAN/DERN environment. In my next blog I plan to expand on how we can use Express to build useful routes to do all sorts of things, such as performing CRUD operations and incorporating business logic.

The story of Async in JavaScript

By Tim Davis – Director of Development

In my last post I talked about some JavaScript concepts that will be useful when starting out with Node.js. This time I would like to talk about a potentially awkward part of JavaScript, i.e. asynchronous (async) operations. It is a bit of a long story, but it does have a happy ending.

So what is an asynchronous operation? Basically, it means a function or command that goes off and does its own thing while the rest of the code continues. It can be really useful or really annoying depending on the circumstances.

You may have used async code if you ever did AJAX calls to Domino web agents for lookups on web pages. The rest of the page loads while the lookup to the web agent comes back, and the user is happy because part of the page updates in the background. This is brilliant and is the classic use case for an async function.

This asynchronous behaviour is built into JavaScript through and through and you need to bear it in mind when you do any programming in Node.

So how does this async behaviour manifest itself? Let's look at an example. Suppose we have an asynchronous function that goes and does a lookup somewhere.

function doAsyncLookup() {
    ... do the lookup ...
    console.log("got data");
}

Then suppose we call this function from our main code, something like this:

console.log("start");
doAsyncLookup();
console.log("finish");

The output will be this:

start
finish
got data

By the time the lookup has completed it is too late; the code has moved on.

So how do you handle something like this? How can you possibly control your processes if things finish on their own?

The original way JavaScript async functions allowed you to handle this was with ‘callbacks’.

A callback is a function that the async function calls when it is finished. So instead of your code continuing after the async function is called, it continues inside the async function.

In our example a callback could look something like this:

function myCallback() {
    console.log("finish");
}
console.log("start");
doAsyncLookup( myCallback );

Now, the output would be this:

start
got data
finish

This is much better. Usually, the callback function receives the results of the async function as a parameter, so it can act on those results. In examples of callbacks around the web, you might see something like:

function myCallback( myResults ) { 
    displayResults( myResults );
    console.log("finish"); 
} 
console.log("start"); 
doAsyncLookup( myCallback );

Often the callback function doesn't need to be defined separately and can be written inline in the call to the async function as a sort of shorthand, so you will probably see a lot of examples looking like this:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    displayResults( myResults ); 
    console.log("finish"); 
} );

This is all great, but the problem with callbacks is that you can easily get a confusing chain of callbacks within callbacks within callbacks if you want to do other asynchronous stuff with the results.

For example, suppose you do a lookup to get a list, then want to look up something else for each item in the list, and then maybe update a record based on that lookup, and finally write updates to the screen in a UI framework. In a JavaScript environment it is highly likely that each of these operations is asynchronous. You end up with a confusing chain of functions calling functions calling functions stretching off to the right, with all the attendant risk of coding errors that you would expect:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    lookupItemDetails( myResults, function ( myDetails ) {
        saveDetails( myDetails, function ( saveStatus ) {
            updateUIDisplay( saveStatus, function ( updatedOK ) {
                console.log("finish");
            } );
        } );
    } );    
} );

It gets even worse if you add in error handling. We may have solved the async problem, but at the penalty of terrible code patterns.
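
To give a flavour of this, here is a sketch using the classic Node 'error-first' callback convention, where every level of nesting has to check its own error argument (the function names are the same hypothetical ones as above, and handleError is made up too):

doAsyncLookup( function ( err, myResults ) {
    if (err) { return handleError(err); }
    lookupItemDetails( myResults, function ( err, myDetails ) {
        if (err) { return handleError(err); }
        saveDetails( myDetails, function ( err, saveStatus ) {
            if (err) { return handleError(err); }
            // ...and so on, one more check for every level of nesting
        } );
    } );
} );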

Well, after putting up with this for a while the JavaScript world came up with a better version of callbacks, called Promises.

Promises are much more readable than callbacks and have some useful additional features. You pass the results of each function to the next with a ‘then’, and you can just add more ‘thens’ on the end if you have more async things to do.

Our nightmare-indented example above becomes something like this (here I am using the popular arrow notation for functions, see my previous article for more on them):

console.log("start"); 
doAsyncLookup()
.then( (myResults) => { return lookupItemDetails(myResults) } )
.then( (myDetails) => { return saveDetails(myDetails) } )
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } )
.then( (updatedOK) => { console.log("finish") } );

This is much nicer. We don’t have all that ugly nesting.

Error handling is easier, too, because you can add a ‘catch’ to the end (or in the middle if you need) and it is all still much more clear and understandable:

console.log("start"); 
doAsyncLookup() 
.then( (myResults) => { return lookupItemDetails(myResults) } ) 
.then( (myDetails) => { return saveDetails(myDetails) } ) 
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } ) 
.then( (updatedOK) => { console.log("finish") } )
.catch( (err) => { ... handle err ... } );

What is really neat is that you can create your own promises from existing callbacks, so you can tidy up any older messy async functions.

Promises also have some great added features which help with other async problems. For example, with 'Promise.all' you can run a set of async calls in parallel and wait until they have all completed.
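
For example, here is a minimal sketch (the three lookup functions are hypothetical) that fires off several lookups at once and continues only when every one of them has resolved:

Promise.all( [ lookupOrders(), lookupCustomers(), lookupProducts() ] )
.then( ([orders, customers, products]) => {
    console.log("all three lookups finished");
} )
.catch( (err) => { console.log("one of the lookups failed: " + err) } );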

So promises solved the callback nesting problem, but The Gods of JavaScript were still not satisfied.

Even with all these improvements, this code is still too ‘asynchronous’. It is still a chain of function after function and you have to pay attention to what is passed from one to the next, and remember that these are all asynchronous and be careful with your error handling.

Once upon a time, Willy Wonka gave us ‘square sweets that look round’, and so now TGOJ have given us ‘asynchronous functions that look synchronous’.

The latest and greatest advance in async handling is Async/Await.

All you need to do is make your main function ‘async’, and you can ‘await’ all your promises:

async function myAsyncStuff() {
    console.log("start"); 
    let myResults = await doAsyncLookup();
    let myDetails = await lookupItemDetails(myResults);
    let saveStatus = await saveDetails(myDetails);
    let updatedOK = await updateUIDisplay(saveStatus); 
    console.log("finish");
 }

How cool is this? Each asynchronous function runs in order, with no messy callbacks or chains of ‘thens’. They all sit in their own line of code just like regular functions. You can do things in between them, and you can wrap them in the usual try/catch error handling blocks. All the async handling stuff is gone, and this is done with just two little keywords.

Plus, the functions are all still promises, so you can do promise-y things with them if you want to, and you can create and ‘await’ your own promises to refactor and revive old callback code.
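
As a small sketch of that last point, assuming doAsyncLookup is the callback-style function from the earlier examples, you can wrap it in a promise and then await it:

// Wrap the old callback-style function in a promise...
function doAsyncLookupPromise() {
    return new Promise( (resolve) => {
        doAsyncLookup( (myResults) => resolve(myResults) );
    } );
}

// ...and now it can be awaited like any other promise.
async function run() {
    let myResults = await doAsyncLookupPromise();
    displayResults(myResults);
}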

Async/Await is fully supported by Node.js, by popular UI frameworks like Angular and React, and by all modern browsers.

One of the biggest headaches in JavaScript development now has an elegant and usable solution and they all lived happily ever after.

I hope you enjoyed this little story. I told you it had a happy ending.

Things to know with JavaScript - JSON, let, const, and arrows

By Tim Davis - Director of Development

While we eagerly await the arrival of the npm domino-db module with Domino 10, I thought I would spend this instalment of my blog series on Node.js talking a little about some concepts in JavaScript that are used a lot in Node development. If you haven’t looked at JavaScript much since Domino web forms or XPages SSJS then you may not have come across them. You will see them in examples and articles on Node around the web and will want to use them in your own projects as they will make your life easier when starting out.

JSON

The first is JavaScript Object Notation, or JSON, which I talked about briefly in my blog on NoSQL.

Basically, all the data in Node is JSON. This makes it great for storing in backend NoSQL data stores and for handling in front-end JavaScript frameworks.

JSON is a very readable way of describing data, and it looks like this:

{
    "orderNo" : "00101",
    "orderLines": [
        { "quantity" : 7 },
        { "quantity" : 11 },
        { "quantity" : 3 }
    ],
    "status": "Invoiced"
}

An object is denoted by the curly brackets { }. An array is denoted by square brackets [ ]. The items inside the object are name-value pairs. Items are separated by commas in both objects and arrays.

You can type this sort of thing directly into your code if you like, but you would normally just get it from somewhere else, like a database.

You reference the object by name and can access or update its properties using dot notation:

currentOrder.status = "Invoiced";
if ( orderLine.quantity > 100 ) {
    ... your code here ...
}

JSON is hierarchical and you can nest objects inside objects, and arrays inside objects inside arrays, etc, etc. It is a bit like XML in that way, but much easier to read.

You can access nested objects inside arrays inside objects (etc) using dot notation like this:

orders[i].orderLines[j].quantity = 10

JSON arrays are just regular arrays, so you can loop through them:

for ( let i = 0; i < currentOrder.orderLines.length; i++ ) {
    ...
}

One great side effect of JSON being so readable is that it is easily converted to and from strings. Converting to strings is a great way to pass data around between different systems. You can use the built-in JSON object to do this:

JSON.stringify( currentOrder )
JSON.parse( '{ "status":"Invoiced", "orderNo":"00101" }' )

You should use these methods because they handle all the formatting and escaping of special characters for you.

Let and Const

These are two new ways of defining variables in JavaScript and you will see them a lot. You will already be familiar with using 'var', like this:

var count = 0;

You use ‘let’ and ‘const’ in the same way as ‘var’:

let count = 0;
const domain = "mydomain.com";

Both 'let' and 'const' are similar to 'var', but they behave in ways that help you avoid problems in your code.

As you can probably guess, ‘const’ is for constant values that will never change. If you try to set another value you will get an error. This will help prevent you overwriting something important in another part of your code.
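
For example, reassigning a 'const' throws an error:

const domain = "mydomain.com";
domain = "otherdomain.com"; // TypeError: Assignment to constant variable.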

What ‘let’ does that is different from ‘var’ is more subtle and is all about the variable’s scope, i.e. where in your code it exists.

If you define a variable using ‘var’, then it exists everywhere inside the enclosing function, i.e. everywhere inside the function you are currently in. This is a very wide area, and it is easy to forget and lose track of variable names and values and get confused. This is especially common when you have lots of loops inside loops inside one function.

With ‘let’, a variable only exists inside the current set of curly brackets, i.e code block. So for example a variable would only exist inside a particular loop and not exist outside in the parent function. This helps avoid all sorts of conflicts and overwriting errors.

Here is an example of how let and var work differently inside and outside curly brackets. Notice how ‘var’ overwrites the value while ‘let’ does not:

let cat = "meow";
var dog = "bark";
console.log("cat "+cat); // will be meow
console.log("dog "+dog); // will be bark
if (true) {
    let cat = "scratch";
    var dog = "wag";
    console.log("cat "+cat); // will be scratch
    console.log("dog "+dog); // will be wag
}
console.log("cat "+cat); // will be meow
console.log("dog "+dog); // will be wag

The cat inside the curly brackets is a different cat from the one outside them, but the dog is the same everywhere. This is why the dog gets confused.

Arrow functions

If you read articles on Node or look at code examples, you may have seen functions defined something like this, with the ‘=>’ arrow notation:

( arg1, arg2 ) => { ... }

This is more or less equivalent to

function( arg1, arg2 ) { ... }

The main difference is in how the keyword ‘this’ works.

In a regular function(), when you use 'this' it refers to whatever called the function. With an arrow function, 'this' comes from the surrounding code where the function was defined, not from the caller. This is a pretty arcane distinction, and worth reading up on, but it is very useful in avoiding coding errors.

As an example, when developing in Node you often have functions defined inside methods as callbacks. A callback is a function that is called when a process has finished, usually to go ahead and do something with the results of that process. These usually look something like this:

myOrderDb.getOrders( function( myOrders ) {
    ... do something with myOrders ...
} );

Here you can see that the parameter in the getOrders method is a function. This is a callback function which is called when getOrders finishes and takes the result, ‘myOrders’, and does something with it.

Consider the following example code. I want my app (i.e. ‘this’) to get records from a database and, when that is done, to update its display:

this.showLoadingMessage();
let myOrdersDb = this.getDb();
myOrdersDb.getOrders( function(myOrders) {
    // the following line does not work
    this.displayOrders(myOrders);
} );

So what is wrong? I am expecting ‘this’ to refer to my app so I can go ahead and update the app display with the orders, but the ‘this’ inside the function actually points to myOrderDb because it is myOrderDb that is calling the function. The object that ‘this’ refers to gets overwritten inside regular functions. Keeping track of ‘this’ can be a nightmare when you have a complicated series of callbacks and this can be an easy mistake to make.

However, if you use an arrow function then ‘this’ is not overwritten. It is the same inside the function as it was outside it. So an arrow function version of our code would be:

this.showLoadingMessage();
let myOrdersDb = this.getDb();
myOrdersDb.getOrders( (myOrders) => {
    this.displayOrders(myOrders);
} );

This is only a small change, but now the ‘this’ inside the function is the app, same as outside it, and my call to the app’s displayOrders method will work. With arrow functions everything behaves much more how you would expect it to.

Next Up

In this post I have touched on callbacks, and next time I plan to expand on this topic and talk in detail about a classic bugbear in JavaScript development, asynchronous functions.

My Collabsphere 2018 video goes live: “What is Node.js?”

By Tim Davis – Director of Development

Last month, I presented a session at Collabsphere 2018 called ‘What is Node.js?’

In it I gave an introduction to Node and covered how to set it up and create a simple web server. I also talked about how Domino 10 will integrate with it, and about some cool new features of JavaScript you may not be aware of.

Luckily my session was recorded and the video is now available on the YouTube Collabsphere channel.

The slides from this session are also available on slideshare.

If you are interested in learning about Node.js (especially with the npm integration coming up in Domino 10) then it's worth a look.

Many thanks to Richard Moy and the Collabsphere team for putting on such a great show!

Improving Node.js with Express

By Tim Davis - Director of Development

In my previous post I talked about what Node.js is and described how to create a very simple Node web server. In this post I would like to build on that and look at how to flesh out our server into something more substantial and how to use add-on modules.

To do this we will use the Express module as an example. This is a middleware module that provides a huge variety of pre-built web server functions, and is used on most Node web servers. It is the ‘E’ in the MEAN/MERN stacks.

Why should we use Express, since we already have our web server working? Really for three main reasons. The first is so you don't have to write all the web stuff yourself in raw Node.js. There is nothing stopping you doing this if you need something very specific, but Express provides it all out of the box. One particularly useful feature is being able to easily set up 'routes'. Routes are mappings from readable URL paths to the more complicated things happening in the background, rather like the Web Site Rules in your Domino server. Express also provides lots of other useful functions for handling requests and responses and all that.

The second reason is that it is one of the most popular Node modules ever. I don't mean you should therefore use it just because everyone else does, but its popularity means that it is a de facto standard and many other modules are designed to integrate with it. This leads us nicely back around to the Node integration with Domino 10, which is based on the Loopback adaptor module. Loopback is built to work with Express and is maintained by StrongLoop, who are an IBM company, and StrongLoop are now looking after Express as well. Everything fits together.

The third and final reason is a selfish one for you as a developer. If you can build your Node server with Express, then you are halfway towards the classic full JavaScript stack, and it is a small step from there to creating sites with all the froody new client-side frameworks such as Angular and React. Also, you will be able to use Domino 10 as your back-end full-stack datastore and build DEAN/NERD apps.

So, in this post I will take you through how to turn our simple local web server into a proper Node.js app, capable of running stand-alone (e.g. maybe one day in a docker container), and then modify the code to use the Express module. This can form the basis of almost any web server project in the future.

First of all we should take a minute or two to set up our project more fully. We do this by running a few simple commands with the npm package manager that we installed alongside Node last time.

We first need to create one special file to sit alongside our server.js, namely ‘package.json’. This is a text file which contains various configuration settings for our app, and because we want to use an add-on module we especially need its ‘dependencies’ section.

What is great is we don’t have to create this ourselves. It will be automatically created by npm. In our project folder, we type the following in a terminal or command window:

npm init

This prompts you for the details of your app, such as name, version, description, etc. You can type stuff in or just press enter to accept the defaults for now. When this is done we will have our package.json created for us. It isn’t very interesting to look at yet.

We don’t have to edit this file ourselves, this is done automatically by npm when we install things.

First, let's install a local version of Node.js into our project folder. We installed Node globally last time from the download, but a local version will keep everything contained within our project folder. It makes our project more portable, and we can control versions, etc.

We install Node into our project folder like this:

npm install node

The progress is displayed as npm does its job. We may get some warnings, but we don’t need to worry about these for now.

If we look in our project folder we will see a new folder has been created, ‘node_modules’. This has our Node install in it. Also, if we look inside our package.json file we will see that it has been updated. There is a new “dependencies” section which lists our new “node” install, and a “start” script which is used to start our server with the command “node server.js”. You may remember this command from last time, it is how we started our simple Node server.
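
At this point our package.json will look something like this (version numbers and the other defaults will vary, so treat this as purely illustrative):

{
  "name": "my-project",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "node": "^10.0.0"
  }
}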

We can now start our server using this package.json. We will do this using npm, like this:

npm start

This command runs the “start” script in our package.json, which we saw runs the “node server.js” command which we typed manually last time, and our server starts up just like before, listening away. You can imagine how using a package.json file gives us much more control over how our Node app runs.

Next we want to add the Express module. You can probably already guess what to type.

npm install express

When this is finished and we look inside our package.json, we have a new dependency listed: “express”. We also have many more folders in our node_modules subfolder. Express has a whole load of other modules that it uses and they have all been installed automatically for us by npm.

Now we have everything we need to start using Express functions in our server.js, so let's look at what code we need.

First we ‘require’ Express. We don’t need to require HTTP any more, because Express will handle all this for us. So we can change our require line to this:

const express = require('express')

Next thing to do is to create an Express ‘app’, which will handle all our web stuff. This is done with this line:

const app = express()

Our simple web server currently sends back a Hello World message when someone visits. Let's modify our code to use Express instead of the native Node HTTP module we used last time.

This is how Express sends back a Hello World message:

app.get('/', (req, res) => { 
   res.send('Hello World!') 
} )

Hopefully you can see what this is doing, it looks very similar to the http.createServer method we used previously.

The ‘app.get’ means it will listen for regular browser GET requests. If we were sending in form data, we would probably want to instead listen for a POST request with ‘app.post’.

The ‘/’ is the route path pattern that it is listening for, in this case just the root of the server. This path pattern matching is where the power of Express comes in. We can have multiple ‘app.get’ commands matching different paths to map to different things, and we can use wildcards and other clever features to both match and get information out of the incoming URLs. These are the ‘routes’ I mentioned earlier, sort of the equivalent of Domino Web Site Rules. They make it easy to keep the various, often complex, functions of a web site separate and understandable. I will talk more about what we can do with routes in a future blog.
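
As a small illustration of that path matching (the '/api/orders' route is just made up for this example), Express can pull values straight out of the incoming URL for us:

// ':orderNo' is a route parameter - a request to /api/orders/000101
// arrives here with req.params.orderNo set to '000101'.
app.get('/api/orders/:orderNo', (req, res) => {
   res.send('You asked for order ' + req.params.orderNo);
});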

So our app will listen for a browser request hitting the root of the server, e.g. http://127.0.0.1:3000, which we used last time. The rest of the command is telling the app what to do with it. It is a function (using the arrow ‘=>’ notation) and it takes the request (‘req’) and the response (‘res’) as arguments. We are simply going to send back our Hello World message in the response.

So we now have our simple route set up. The last thing we need to do, same as last time, is to tell our app to start listening:

app.listen(port, hostname, () => { 
   console.log(`Server running at http://${hostname}:${port}/`); 
});

You may notice that this is exactly the same code as last time, except we tell the ‘app’ to listen instead of the ‘server’. This helps illustrate how well Express is built on Node and how integrated it all is.

Our new updated server.js should look like this:

const express = require('express');
const hostname = '127.0.0.1';
const port = 3000;
const app = express();
app.get('/', (req,res)=> {
   res.send("Hello World!")
});
app.listen(port, hostname, () => {
   console.log(`Server running at http://${hostname}:${port}/`);
});

This is one less line than before. If we start the server again by typing ‘npm start’ and then browse to http://127.0.0.1:3000, we get our Hello World message!

Now, this is all well and good, but aren’t we just at the same place as when we started? Our Node server is still just saying Hello World, what was the point of all this?

Well, what we have now, that we did not have before, is the basis of a framework for building proper, sophisticated web applications. We can use npm to manage the app and its add-ons and dependencies, the app is self-contained so we can move it around or containerise it, and we have a fully-featured middleware (i.e. Express) framework ready to handle all our web requests.

Using this basic structure, we can build all sorts of things. We would certainly start by adding the upcoming Domino 10 connector to access our Domino data in new ways, and then we could add Angular or React (or your favourite JS client platform) to build a cool modern web UI, or we could make it the server side of a mobile app. If your CIO is talking about microservices, then we can use it to microserve Domino data. If your CIO is talking about REST then we can provide higher-level business logic than the low-level Domino REST API.

In my next blog I plan to talk about more things we can do with Node, such as displaying web pages, about how it handles data (i.e. JSON), and about how writing an app is both similar and different to writing an app in Domino.

How to build your first Node.js app

By Tim Davis - Director of Development.

I have talked a little in previous posts about how excited I am about Node.js coming to Domino 10 from the perspective of NoSQL datastores, but I thought it would be a good idea to talk about what Node.js actually is, how it works, and how it could be integrated into Domino 10. (I will be giving a session on this topic at MWLUG/CollabSphere 2018 in Ann Arbor, Michigan in July).

So, what is Node.js? Put simply, it is a fully programmable web server. It can serve web pages, it can run APIs, it can be a fully networked application. Sounds a lot like Domino. It is a hugely popular platform and is the ‘N’ in the MEAN/MERN stacks. Also it is free, which doesn’t hurt its popularity.

As you can tell from the ‘.js’ in its name, Node apps are written in JavaScript and so integrate well with other JavaScript-based development layers such as NoSQL datastores and UI frameworks.

Node runs almost anywhere. You can run it in Windows, Linux, macOS, SunOS, AIX, and in docker containers. You can even make a Node app into a desktop app by wrapping it in a framework like Electron.

On its own, Node is capable of doing a lot, but coding something very sophisticated entirely from scratch would be a lot of work. Luckily, there are hundreds of thousands of add-on modules to do virtually anything you can think of, and these are all extremely easy to integrate into an app.

Now, suppose you are a Domino developer and have built websites using Forms or XPages. Why should you be interested in all this Node.js stuff? Well, IBM and HCL are positioning the Node integration in Domino 10 as a parallel development path, which is ideal for extending your existing apps into new areas.

For example, a Node front-end to a Domino application is a great way to provide an API for accessing your Domino data and this could allow easy integration with other systems, or mobile apps, or allow you to build microservices, or any number of things which is why many IoT solutions are built with Node as a platform, including those from IBM.

In your current Domino websites, you will likely have written or used some JavaScript to do things on your web forms or XPages, maybe some JQuery, or Server-Side JavaScript. If you are familiar with JavaScript in this context, then you will be ok with JavaScript in Node.

So where do we get Node, how do we install it and how do we run it?

Getting it is easy. Go to https://nodejs.org and download the installer. This will install two separate packages, the Node.js runtime and also the npm package manager.

The npm package manager is used to install and manage add-in modules and optionally launch our Node apps. As an example, a popular add-on module is Express, which makes handling HTTP requests much easier (think of Express as acting like Domino internet site documents).  Express is the ‘E’ in the MEAN/MERN stacks. If we wanted to use Express we would simply install it with the simple command: ‘npm install express’, and npm would go and get the module from its server and install it for you. All the best add-on modules are installed using the ‘npm install xxxxxx’ command (more on this later!).

Once Node is installed, we can run it by simply typing ‘node’ into a terminal or command window. This isn’t very interesting, it just opens a shell that does pretty much nothing on its own. (Press control-c twice to exit if you tried this).

To get Node to actually do stuff, we need to write some JavaScript. A good starting example is from the Node.js website, where we build a simple web server, so let’s run through it now.

Node apps run within a project folder, so create a folder called my-project.

In our folder, create a new JavaScript file; let's call it 'server.js'. Open this in your favourite code editor (mine is Visual Studio Code), and we can start writing some server code.

This is going to be a web server, so we require the app to handle HTTP requests. Notice how I used the word ‘require’ instead of ‘need’. If we ‘require’ our Node app to do anything we just tell it to load that module with some JavaScript:

const http = require('http');

This line essentially just tells our app to load the built-in HTTP module. We can also use the require() function to load any other add-on modules we may want, but we don’t need any in this example.

So we have loaded our HTTP module, so let's tell Node to set up an HTTP server, and we do this with the createServer() method. This takes a function as a parameter, and this function tells the server what to do if it receives a request.

In our case, let's just send back a plain text 'Hello World' message to the browser. Like this:

const server = http.createServer((req, res) => {
 res.statusCode = 200;
 res.setHeader('Content-Type', 'text/plain');
 res.end('Hello World!\n');
});

There is some funny stuff going on with the arrow ‘=>’ which you may not be familiar with, but hopefully it is clear what we are telling our server to do.

The (req, res) => {…} is our request handling function. The ‘req’ is the request that came in, and the ‘res’ is the response we will send back. This arrow notation is used a lot in Node. It is more or less equivalent to the usual JavaScript syntax: function(req, res) {…}, but behaves slightly differently in ways that are useful to Node apps. We don’t need to know about these differences right now to get our server working.

In our function, you can see that we set the response status to 200 which is ‘Success OK’, then we make sure the browser will understand we are sending plain text, and finally we add our message and tell the server that we are finished and it can send back the response.

Now we have our server all set up and it knows what it is supposed to do with any request that comes in. The last thing we want it to do is to actually start listening for these requests.

const hostname = '127.0.0.1';
const port = 3000;
server.listen(port, hostname, () => {
 console.log(`Server running at http://${hostname}:${port}/`);
});

This code tells the server to listen on port 3000, and to write a message to the Node console confirming that it has started.

That is all we need, just these 11 lines.

Now we want to get this Node app running, and we do this from our terminal or command window. Make sure we are still in our ‘my-project’ folder, and then type:

node server.js

This starts Node and runs our JavaScript. You will see our console message displayed in the terminal window. Our web server is now sitting there and happily listening on port 3000.

To give it something to listen to, open a browser and go to http://127.0.0.1:3000. Hey presto, it says Hello World!

Obviously, this is a very simple example, but hopefully you can see how you could start to extend it. You could check the ‘req’ to see what the details of the request are (it could be an API call), and send back different responses. You could do lookups to find the data to send back in your response.
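
For example, a minimal sketch of checking the request details inside our handler might look like this (the '/api/status' path is just made up for illustration):

const server = http.createServer((req, res) => {
  if (req.method === 'GET' && req.url === '/api/status') {
    // send back a small JSON response for this particular request
    res.statusCode = 200;
    res.setHeader('Content-Type', 'application/json');
    res.end(JSON.stringify({ status: 'ok' }));
  } else {
    // anything else gets a plain text 404
    res.statusCode = 404;
    res.setHeader('Content-Type', 'text/plain');
    res.end('Not found\n');
  }
});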

Wait, what? Lookups? Yes, that is where the Node integration in Domino 10 comes in. I know this will sound silly, but one of the most exciting things I have seen in recent IBM and HCL presentations is that we will be able to do this:

npm install dominodb

We will be able to install a dominodb connector, and use it in our Node apps to get at and update our Domino data.

Obviously, there is a lot more to all this than I have covered in the above example, but I hope this has given you an idea of why what IBM and HCL are planning for Domino 10 is so exciting. In future blogs I plan to expand more on what we can do with Node and Domino 10.

 

Domino 10 vs NoSQL

By Tim Davis – Director of Development.

With Domino 10 bringing Node.js, and my experience of JavaScript stacks over the past few years, I am very excited about the opportunities this brings for both building new apps and extending existing ones. I plan to talk about Node in future blogs, and I am giving an introduction to Node.js at MWLUG/CollabSphere 2018 in Ann Arbor, Michigan in July. However, I would like to digress from the main topic of Node and Domino itself, and talk a little about an awareness side effect that I am hoping this new feature will have, i.e. moving Domino into the JavaScript full stack development space.

There are a plethora of NoSQL database products. In theory, Domino could always have been a NoSQL database layer, but there was no real reason for any Javascript stack developer to even consider it. It would never appear in any suggested lists or articles, and would require some work to provide an appropriate API.

The thing is, working in the JavaScript stack world, I was made very aware that pretty much all the available NoSQL database products did not appear very sophisticated compared to Domino (or most other major Enterprise databases - Oracle, DB2, SAP, MS-SQL, etc). The emphasis always seemed to be on ease of use and very simple coding capabilities, and not much else.

Now in and of itself this is a worthy goal, but it doesn’t take long before you begin to notice the features that are missing. Especially when you compare them to Domino, which can now properly call itself a Javascript stack NoSQL database.

Popular NoSQL databases are MongoDb, Redis, Cassandra, and CouchDb. As with all of the NoSQL databases, each was built to solve a particular problem.

MongoDb is the one you have most likely heard of. It is the 'M' in the MEAN/MERN stacks. It is very good at scaling and 'sharding', which is partitioning data and workload across many servers. It also has a basic replication model for redundancy.

Redis is an open source database whose power is speed. It holds its data in RAM which is super-fast but not so scalable.

Cassandra came from Facebook, and is a kind of mix of table data and NoSQL and is good for very large volumes of data such as IoT stuff.

CouchDb was originally developed by Damien Katz from Lotus and its key feature is replication, including to local devices, making it good for mobile/offline solutions. It also has built-in document versioning which improves reliability but can result in large storage requirements.

Each product has its own flavour and would be suited to different applications but there are many key features which Domino provides, that we are used to being able to utilise, and while some of these products may have similar features, none of them have a proper equivalent for all.

Read and Edit Access: Domino has incredibly sophisticated read and edit control, to individual documents and even down to field level. You can provide access through names, groups and roles, and all of this is built-in. In the other products, anything like this has to be pretty much obfuscated by specifying filters in queries. You are effectively rolling your own security. In Domino, if you are a user not in the reader field then you can’t read the document, no matter how you try to access it.

Replication and Clustering: One of Domino’s main strengths has always been its replication and clustering model. Its power and versatility is still unsurpassed. There are some solutions such as MongoDb and CouchDb which have their own replication features and these do provide valuable resilience and/or offline potential but Domino has the most fine control and distributed data capabilities.

Encryption: Domino really does encryption well. Most other NoSQL products do not have any. Some have upgrades or add-on products that provide encryption services to some degree, but certainly none have document-level or field-level encryption. You would have to write your own code to encrypt and decrypt the individual data elements.

Full Text Indexing: Some of the other products such as MongoDb do have a full text index feature, but these tend to be somewhat limited in their search syntax. You can use add-ons which provide good search solutions, such as Solr or Elasticsearch, but those external indexing solutions have little in the way of security of their own, whereas Domino has secure full text indexing built in.

Other Built-in Services: Domino is not just a database engine. It has other services that extend its reach. It has a mail server, it has an agent manager, it has LDAP, it has DAOS. With the other products you would need to provide your own solution for each of these.

Historically, a big advantage for the other datastores was scalability, but with Domino 10 now supporting databases up to 256GB this becomes less of an issue.

In general, all the other NoSQL products do have the same main advantage, the one which gave rise to their popularity in the first place, and this is ease of use and implementation. In most cases a developer can spin up a NoSQL database without needing the help of an admin. Putting aside the issue of whether this is actually a good idea for enterprise solutions, with containerization Domino can now be installed just as easily.

I hope this brief overview of the NoSQL world has been helpful. I believe Domino 10 will have a strong offering in a fast growing and popular development space. My dream is that at some point, Domino becomes known as a full stack datastore, and because of its full and well-rounded feature set, new developers in startups looking for database solutions will choose it, and CIOs in large enterprises with established Domino app suites will approve further investment in the platform.