Redesigning Templates - An HCL Design Project

I’m very pleased to announce the launch of a new project I am working on with HCL to reinvigorate the Notes and Domino templates as they roll out on new client platforms such as iPhone, tablet and a lightweight web client. See here for more information on the project.

My role is team co-ordinator and I will be responsible for bringing in the right resources to meet each stage of the project. I’ll be working closely with the team leads, including HCL’s design lead Felix Kalka, Theo Heselmans, Carl Tyler and Tim Davis. More about our team here.

We’ll be keeping you updated on progress, asking for feedback and gathering ideas through the blog and surveys. We want to start with a very quick survey on discussions and teamrooms, so please take a few minutes to complete it.

We really need your input if we’re going to get this right: https://www.surveymonkey.co.uk/r/96LYTDK

If you are attending the Factory Tour in Chelmsford in July I will be there with Felix to talk about our work so far, to hear about your use cases and to share ideas for each client platform. If you aren’t at the Factory Tour and would like to discuss your use of any of the standard templates, please reach out directly to me via Twitter (@gabturtle) or email (gabriella at turtlepartnership.com).

Details of the project, the team and our goals can be found on the new Template Experience blog at https://templateexperience.com. We plan to keep the blog updated with our progress, sharing ideas and asking you to complete surveys as we move forward.

This is a great opportunity to push our design thinking beyond the traditional framework, and I thank HCL for committing to the project and the work involved.

Domino 11 Jam Coming To London

The Domino jams continue, now on to Domino 11 and with a date of January 15th in London. No venue announced yet, but I’d be very surprised if it’s not IBM South Bank.

I attended a couple of jams last year, and I can confirm that many of the comments made and items requested ended up in the v10 products, and several have already been prioritised for v11. If you are interested in the future of the collaboration products, and especially Domino, then you will want to contribute ideas to the jam, so email Brendan McGuire (MCGUIREB@uk.ibm.com) and ask to attend.

We all hope to be there, investing in the future of products we believe in. Hope to see you there as well.

If you are interested in locations other than London, check out this URL, where there are already locations and some dates announced.

#dominoforever

Deploying The AppDev Pack - An Admin’s Guide

Over here on the blog is Tim’s next entry talking about Node development and Domino; this time he explains how to use the early release of the AppDev Pack to access (read and write) Domino data via Node. However, I don’t let developers do Domino admin, so this is the bit where I explain how to configure Domino. It’s all very easy, but it is also all still early release, so things may well change for GA.

First you will need to request the early release package, which you can do here. What you’ll then get is a series of .tgz files, including one entitled ‘domino-appdev-docs-site.tgz’ which, once extracted, gives you the index.html with instructions for installing.

You need to bear in mind that, at least initially, this only runs on Linux and Domino 10, and that Domino 10 on 64-bit Linux officially means RHEL 7.4 or higher, or SLES 12. I went with RHEL 7.5.

Next we need to install “Proton” so it can be run as a Domino server task, which just means extracting the file ‘proton-addin.tgz’ into the /opt/ibm/domino/notes/latest/linux directory. There are also some checks to make sure files are present, and some permissions to set, but I don’t want to repeat the install instructions here as I would rather you refer to the latest official version of those. Suffice it to say this is a 5 minute job at most.

Once the files are in place you can start and stop Proton as you would any other Domino task by doing “load Proton”, “tell Proton quit”, etc.

Then there are a few notes.ini settings you can choose to set, including:

PROTON_SSL = whether the traffic between the Proton task and the Node server should be encrypted (0/1).

PROTON_LISTEN_PORT = the port you want Proton to listen on for connections from Node (default 3002).

PROTON_LISTEN_ADDRESS = the address on your Domino server that Proton should listen on, such as 127.0.0.1 (which would require Node to be installed locally) or 0.0.0.0 (which will listen on any available address).

PROTON_AUTHENTICATION = how Proton handles authentication. There are currently two options: client_cert or anonymous. With authentication set to anonymous, all requests that come from the Node application are done as an “anonymous” Domino user, and your Domino application must allow Anonymous rights in the ACL.

The “client_cert” option requires the Node application to present a client certificate to the Proton task, and for the Domino administrator to have already mapped that certificate to a specific Person document by importing it. Note that “client_cert” still means all activity from that Node application will be done as a single identified user that must be in the ACL, but it does mean you need not allow anonymous access. You can also use different identities in different Node applications.

Of course, what we all want is OAuth or an authentication model that allows individual user identities, and this is hopefully why the product is still considered “early release”. Both the “anonymous” and “client_cert” models are of limited use in production.

PROTON_KEYFILE = the keyfile to use if you want Proton to communicate using SSL. This isn’t related to the Domino keyfile (although it could be), and since this is only for communication between your Node server and your Domino Proton task, and never for client-facing traffic, you could use entirely internally-generated keys, as they only need to be shared with the Node server itself.
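
Putting those together, the Proton entries in a server’s notes.ini might end up looking something like this (illustrative values only, and the keyfile name is just an example; check the official install documentation, as names and defaults may change before GA):

PROTON_SSL=1
PROTON_LISTEN_PORT=3002
PROTON_LISTEN_ADDRESS=0.0.0.0
PROTON_AUTHENTICATION=client_cert
PROTON_KEYFILE=proton.kyr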

HCL have kindly provided scripts to generate all the certificates you need for your testing.

Finally we need to create a design catalog for Proton to use.  You can add individual databases to the design catalog and the first one you add actually creates the catalog.  There must be a catalog with at least one database in it for Proton to work at all.

The catalog contains an index of all the design elements in a Domino database, so to add a new database to the catalog you would type:
load updall <database> -e

This isn’t dynamically maintained though, so if you change the design of a database you must update its entry in the catalog if you want to have new design elements added or updated, like this:
load updall <database path> -d

The purpose of the catalog is to speed up DQL’s access to the Domino data.  It’s not required that every database be catalogued but obviously doing so speeds up access and opens up things like view scanning using the <‘View or folder name’>.<Columnname> syntax.
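
For example (a hypothetical query; the view name and column are made up), the view-column syntax in a DQL query looks like this:

'Orders'.Status = 'Invoiced'

This would only work once the database containing the ‘Orders’ view has been added to the design catalog.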

So that’s my very quick admin guide to what I did that enabled Tim to do what he does. It’s very possible (even probable) that this entire blog will be obsolete when the GA release ships but hopefully this and Tim’s blog help you get started with the early release.

The story of Async in JavaScript

By Tim Davis – Director of Development

In my last post I talked about some JavaScript concepts that will be useful when starting out with Node.js. This time I would like to talk about a potentially awkward part of JavaScript, i.e. asynchronous (async) operations. It is a bit of a long story, but it does have a happy ending.

So what is an asynchronous operation? Basically, it means a function or command that goes off and does its own thing while the rest of the code continues. It can be really useful or really annoying depending on the circumstances.

You may have used async code if you ever did AJAX calls to Domino web agents for lookups on web pages. The rest of the page loads while the lookup to the web agent comes back, and the user is happy because part of the page updates in the background. This is brilliant and is the classic use case for an async function.

This asynchronous behaviour is built into JavaScript through and through and you need to bear it in mind when you do any programming in Node.

So how does this async behaviour manifest itself? Let’s look at an example. Suppose we have an asynchronous function that goes and does a lookup somewhere (below I use setTimeout to stand in for the real lookup):

function doAsyncLookup() {
    // simulate a lookup that takes a moment to complete
    setTimeout( () => {
        console.log("got data");
    }, 100 );
}

Then suppose we call this function from our main code, something like this:

console.log("start");
doAsyncLookup();
console.log("finish");

The output will be this:

start
finish
got data

By the time the lookup has completed it is too late; the code has moved on.

So how do you handle something like this? How can you possibly control your processes if things finish on their own?

The original way JavaScript async functions allowed you to handle this was with ‘callbacks’.

A callback is a function that the async function calls when it is finished. So instead of your code continuing after the async function is called, it continues inside the async function.

In our example a callback could look something like this:

function myCallback() {
    console.log("finish");
}
console.log("start");
doAsyncLookup( myCallback );

Now, the output would be this:

start
got data
finish
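
To make that output happen, doAsyncLookup itself has to accept the callback and call it once the lookup is done. Here is a minimal runnable sketch, again using setTimeout to stand in for the real lookup:

function doAsyncLookup( callback ) {
    setTimeout( () => {
        console.log("got data");
        callback();
    }, 100 );
}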

This is much better. Usually, the callback function receives the results of the async function as a parameter, so it can act on those results. So in examples of callbacks around the web, you might see something like:

function myCallback( myResults ) { 
    displayResults( myResults );
    console.log("finish"); 
} 
console.log("start"); 
doAsyncLookup( myCallback );

Often the callback function doesn’t need to be defined separately and is instead defined inline, right in the call to the async function, as a sort of shorthand, so you will probably see a lot of examples looking like this:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    displayResults( myResults ); 
    console.log("finish"); 
} );

This is all great, but the problem with callbacks is that you can easily get a confusing chain of callbacks within callbacks within callbacks if you want to do other asynchronous stuff with the results.

For example, suppose you do a lookup to get a list, then want to look up something else for each item in the list, and then maybe update a record based on that lookup, and finally write updates to the screen in a UI framework. In a JavaScript environment it is highly likely that each of these operations is asynchronous. You end up with a confusing chain of functions calling functions calling functions stretching off to the right, with all the attendant risk of coding errors that you would expect:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    lookupItemDetails( myResults, function ( myDetails ) {
        saveDetails( myDetails, function ( saveStatus ) {
            updateUIDisplay( saveStatus, function ( updatedOK ) {
                console.log("finish");
            } );
        } );
    } );    
} );

It gets even worse if you add in error handling. We may have solved the async problem, but at the cost of terrible code patterns.

Well, after putting up with this for a while the JavaScript world came up with a better version of callbacks, called Promises.

Promises are much more readable than callbacks and have some useful additional features. You pass the results of each function to the next with a ‘then’, and you can just add more ‘thens’ on the end if you have more async things to do.

Our nightmare-indented example above becomes something like this (here I am using the popular arrow notation for functions, see my previous article for more on them):

console.log("start"); 
doAsyncLookup()
.then( (myResults) => { return lookupItemDetails(myResults) } )
.then( (myDetails) => { return saveDetails(myDetails) } )
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } )
.then( (updatedOK) => { console.log("finish") } );

This is much nicer. We don’t have all that ugly nesting.

Error handling is easier, too, because you can add a ‘catch’ to the end (or in the middle if you need) and it is all still much more clear and understandable:

console.log("start"); 
doAsyncLookup() 
.then( (myResults) => { return lookupItemDetails(myResults) } ) 
.then( (myDetails) => { return saveDetails(myDetails) } ) 
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } ) 
.then( (updatedOK) => { console.log("finish") } )
.catch( (err) => { ... handle err ... } );

What is really neat is that you can create your own promises from existing callbacks, so you can tidy up any older messy async functions.
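
For example, here is a minimal sketch of wrapping the callback version of doAsyncLookup from earlier in a promise, so it can be used in a ‘then’ chain:

function doAsyncLookupPromise() {
    return new Promise( (resolve) => {
        // call the old callback-style function and resolve when it completes
        doAsyncLookup( (myResults) => resolve(myResults) );
    } );
}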

Promises also have some great added features which help with other async problems. For example, with ‘Promise.all’ you can run a set of async calls together and get all the results back in one go, in the same order as the original list, once they have all completed.
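
A quick sketch (lookupOrders and lookupCustomers here are hypothetical promise-returning functions):

Promise.all( [ lookupOrders(), lookupCustomers() ] )
.then( ([myOrders, myCustomers]) => {
    // both lookups have completed, and the results arrive
    // in the same order as the original list
    console.log("finish");
} );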

So promises solved the callback nesting problem, but The Gods of JavaScript were still not satisfied.

Even with all these improvements, this code is still too ‘asynchronous’. It is still a chain of function after function and you have to pay attention to what is passed from one to the next, and remember that these are all asynchronous and be careful with your error handling.

Once upon a time, Willy Wonka gave us ‘square sweets that look round’, and so now TGOJ have given us ‘asynchronous functions that look synchronous’.

The latest and greatest advance in async handling is Async/Await.

All you need to do is make your main function ‘async’, and you can ‘await’ all your promises:

async function myAsyncStuff() {
    console.log("start"); 
    let myResults = await doAsyncLookup();
    let myDetails = await lookupItemDetails(myResults);
    let saveStatus = await saveDetails(myDetails);
    let updatedOK = await updateUIDisplay(saveStatus); 
    console.log("finish");
}

How cool is this? Each asynchronous function runs in order, with no messy callbacks or chains of ‘thens’. They all sit in their own line of code just like regular functions. You can do things in between them, and you can wrap them in the usual try/catch error handling blocks. All the async handling stuff is gone, and this is done with just two little keywords.
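
For example, adding error handling to our async/await code is just the familiar try/catch pattern (same hypothetical functions as above):

async function myAsyncStuff() {
    try {
        console.log("start");
        let myResults = await doAsyncLookup();
        let myDetails = await lookupItemDetails(myResults);
        console.log("finish");
    } catch (err) {
        // any rejection from the awaited promises lands here
        console.log("something went wrong: " + err);
    }
}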

Plus, the functions are all still promises, so you can do promise-y things with them if you want to, and you can create and ‘await’ your own promises to refactor and revive old callback code.

Async/Await is fully supported by Node.js, by popular UI frameworks like Angular and React, and by all modern browsers.

One of the biggest headaches in JavaScript development now has an elegant and usable solution and they all lived happily ever after.

I hope you enjoyed this little story. I told you it had a happy ending.

Things to know with JavaScript - JSON, let, const, and arrows

By Tim Davis - Director of Development

While we eagerly await the arrival of the npm domino-db module with Domino 10, I thought I would spend this instalment of my blog series on Node.js talking a little about some concepts in JavaScript that are used a lot in Node development. If you haven’t looked at JavaScript much since Domino web forms or XPages SSJS then you may not have come across them. You will see them in examples and articles on Node around the web and will want to use them in your own projects as they will make your life easier when starting out.

JSON

The first is JavaScript Object Notation, or JSON, which I talked about briefly in my blog on NoSQL.

Basically, all the data in Node is JSON. This makes it great for storing in backend NoSQL data stores and for handling in front-end JavaScript frameworks.

JSON is a very readable way of describing data, and it looks like this:

{
    "orderNo" : "00101",
    "orderLines": [
        { "quantity" : 7 },
        { "quantity" : 11 },
        { "quantity" : 3 }
    ],
    "status": "Invoiced"
}

An object is denoted by the curly brackets { }. An array is denoted by square brackets [ ]. The items inside the object are name-value pairs. Items are separated by commas in both objects and arrays.

You can type this sort of thing directly into your code if you like, but you would normally just get it from somewhere else, like a database.

You reference the object by name and can access or update its properties using dot notation:

currentOrder.status = "Invoiced";
if ( orderLine.quantity > 100 ) {
    ... your code here ...
}

JSON is hierarchical and you can nest objects inside objects, and arrays inside objects inside arrays, etc, etc. It is a bit like XML in that way, but much easier to read.

You can access nested objects inside arrays inside objects (etc) using dot notation like this:

orders[i].orderLines[j].quantity = 10

JSON arrays are just regular arrays, so you can loop through them:

for ( let i = 0; i < currentOrder.orderLines.length; i++ ) {
    ...
}

One great side effect of JSON being so readable is that it is easily converted to and from strings. Converting to strings is a great way to pass data around between different systems. You can use the built-in JSON object to do this:

JSON.stringify( currentOrder )
JSON.parse( '{ "status":"Invoiced", "orderNo":"00101" }' )

You should use these methods because they handle all the formatting and escaping of special characters for you.
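
Putting the two together (assuming currentOrder holds the order object from the example above), a quick round trip looks like this:

let text = JSON.stringify( currentOrder );
let copy = JSON.parse( text );
console.log( copy.orderLines[0].quantity ); // 7

The copy is a completely separate object, rebuilt from the string.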

Let and Const

These are two new ways of defining variables in JavaScript and you will see them a lot. You will already be familiar with using ‘var’, like this:

var count = 0;

You use ‘let’ and ‘const’ in the same way as ‘var’:

let count = 0;
const domain = "mydomain.com";

‘Let’ and ‘const’ are similar to ‘var’, but they behave in a way that helps you avoid problems in your code.

As you can probably guess, ‘const’ is for constant values that will never change. If you try to set another value you will get an error. This will help prevent you overwriting something important in another part of your code.

What ‘let’ does that is different from ‘var’ is more subtle and is all about the variable’s scope, i.e. where in your code it exists.

If you define a variable using ‘var’, then it exists everywhere inside the enclosing function, i.e. everywhere inside the function you are currently in. This is a very wide area, and it is easy to forget and lose track of variable names and values and get confused. This is especially common when you have lots of loops inside loops inside one function.

With ‘let’, a variable only exists inside the current set of curly brackets, i.e. the code block. So, for example, a variable would only exist inside a particular loop and not exist outside in the parent function. This helps avoid all sorts of conflicts and overwriting errors.

Here is an example of how let and var work differently inside and outside curly brackets. Notice how ‘var’ overwrites the value while ‘let’ does not:

let cat = "meow";
var dog = "bark";
console.log("cat "+cat); // will be meow
console.log("dog "+dog); // will be bark
if (true) {
    let cat = "scratch";
    var dog = "wag";
    console.log("cat "+cat); // will be scratch
    console.log("dog "+dog); // will be wag
}
console.log("cat "+cat); // will be meow
console.log("dog "+dog); // will be wag

The cat inside the curly brackets is a different cat from the one outside them, but the dog is the same everywhere. This is why the dog gets confused.

Arrow functions

If you read articles on Node or look at code examples, you may have seen functions defined something like this, with the ‘=>’ arrow notation:

( arg1, arg2 ) => { ... }

This is more or less equivalent to

function( arg1, arg2 ) { ... }

The main difference is in how the keyword ‘this’ works.

In a regular function(), when you use ‘this’ it refers to whatever calls the function. With an arrow function, ‘this’ comes from the surrounding code where the function was defined, not from whatever calls it. This is a pretty arcane distinction, and worth reading up on, but it is very useful in avoiding coding errors.

As an example, when developing in Node you often have functions defined inside methods as callbacks. A callback is a function that is called when a process has finished, usually to go ahead and do something with the results of that process. These usually look something like this:

myOrderDb.getOrders( function( myOrders ) {
    ... do something with myOrders ...
} );

Here you can see that the parameter in the getOrders method is a function. This is a callback function which is called when getOrders finishes and takes the result, ‘myOrders’, and does something with it.

Consider the following example code. I want my app (i.e. ‘this’) to get records from a database and, when that is done, to update its display:

this.showLoadingMessage();
let myOrdersDb = this.getDb();
myOrdersDb.getOrders( function(myOrders) {
    // the following line does not work
    this.displayOrders(myOrders);
} );

So what is wrong? I am expecting ‘this’ to refer to my app so I can go ahead and update the app display with the orders, but the ‘this’ inside the function actually points to myOrdersDb, because it is myOrdersDb that is calling the function. The object that ‘this’ refers to gets overwritten inside regular functions. Keeping track of ‘this’ can be a nightmare when you have a complicated series of callbacks, and this is an easy mistake to make.

However, if you use an arrow function then ‘this’ is not overwritten. It is the same inside the function as it was outside it. So an arrow function version of our code would be:

this.showLoadingMessage();
let myOrdersDb = this.getDb();
myOrdersDb.getOrders( (myOrders) => {
    this.displayOrders(myOrders);
} );

This is only a small change, but now the ‘this’ inside the function is the app, same as outside it, and my call to the app’s displayOrders method will work. With arrow functions everything behaves much more how you would expect it to.

Next Up

In this post I have touched on callbacks, and next time I plan to expand on this topic and talk in detail about a classic bugbear in JavaScript development, asynchronous functions.

Improving Node.js with Express

By Tim Davis - Director of Development

In my previous post I talked about what Node.js is and described how to create a very simple Node web server. In this post I would like to build on that and look at how to flesh out our server into something more substantial and how to use add-on modules.

To do this we will use the Express module as an example. This is a middleware module that provides a huge variety of pre-built web server functions, and is used on most Node web servers. It is the ‘E’ in the MEAN/MERN stacks.

Why should we use Express when we already have our web server working? There are really three main reasons. The first is so you don’t have to write all the web stuff yourself in raw Node.js. There is nothing stopping you doing this if you need something very specific, but Express provides it all out of the box. One particularly useful feature is being able to easily set up ‘routes’. Routes are mappings from readable URL paths to the more complicated things happening in the background, rather like the Web Site Rules on your Domino server. Express also provides lots of other useful functions for handling requests and responses and all that.

The second reason is that it is one of the most popular Node modules ever. I don’t mean that you should use it just because everyone else does, but its popularity means it is a de facto standard, and many other modules are designed to integrate with it. This leads us nicely back around to the Node integration with Domino 10, which is based on the LoopBack adaptor module. LoopBack is built to work with Express and is maintained by StrongLoop, who are an IBM company, and now StrongLoop are looking after Express as well. Everything fits together.

The third and final reason is a selfish one for you as a developer. If you can build your Node server with Express, then you are halfway towards the classic full JavaScript stack, and it is a small step from there to creating sites with all the froody new client-side frameworks such as Angular and React. Also, you will be able to use Domino 10 as your back-end full-stack datastore and build DEAN/NERD apps.

So, in this post I will take you through how to turn our simple local web server into a proper Node.js app, capable of running stand-alone (e.g. maybe one day in a docker container), and then modify the code to use the Express module. This can form the basis of almost any web server project in the future.

First of all we should take a minute or two to set up our project more fully. We do this by running a few simple commands with the npm package manager that we installed alongside Node last time.

We first need to create one special file to sit alongside our server.js, namely ‘package.json’. This is a text file which contains various configuration settings for our app, and because we want to use an add-on module we especially need its ‘dependencies’ section.

What is great is we don’t have to create this ourselves. It will be automatically created by npm. In our project folder, we type the following in a terminal or command window:

npm init

This prompts you for the details of your app, such as name, version, description, etc. You can type stuff in or just press enter to accept the defaults for now. When this is done we will have our package.json created for us. It isn’t very interesting to look at yet.

We don’t have to edit this file ourselves, this is done automatically by npm when we install things.

First, let’s install a local version of Node.js into our project folder. We installed Node globally last time from the download, but a local version will keep everything contained within our project folder. It makes our project more portable, and we can control versions, etc.

We install Node into our project folder like this:

npm install node

The progress is displayed as npm does its job. We may get some warnings, but we don’t need to worry about these for now.

If we look in our project folder we will see a new folder has been created, ‘node_modules’. This has our Node install in it. Also, if we look inside our package.json file we will see that it has been updated. There is a new “dependencies” section which lists our new “node” install, and a “start” script which is used to start our server with the command “node server.js”. You may remember this command from last time; it is how we started our simple Node server.
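
For reference, at this point our package.json might look something like this (illustrative only; the name, version and description depend on what you typed at the npm init prompts, and the dependency version will be whatever npm installed):

{
  "name": "my-project",
  "version": "1.0.0",
  "description": "",
  "scripts": {
    "start": "node server.js"
  },
  "dependencies": {
    "node": "^10.0.0"
  }
}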

We can now start our server using this package.json. We will do this using npm, like this:

npm start

This command runs the “start” script in our package.json, which we saw runs the “node server.js” command which we typed manually last time, and our server starts up just like before, listening away. You can imagine how using a package.json file gives us much more control over how our Node app runs.

Next we want to add the Express module. You can probably already guess what to type.

npm install express

When this is finished and we look inside our package.json, we have a new dependency listed: “express”. We also have many more folders in our node_modules subfolder. Express has a whole load of other modules that it uses and they have all been installed automatically for us by npm.

Now we have everything we need to start using Express functions in our server.js, so let’s look at what code we need.

First we ‘require’ Express. We don’t need to require HTTP any more, because Express will handle all this for us. So we can change our require line to this:

const express = require('express')

Next thing to do is to create an Express ‘app’, which will handle all our web stuff. This is done with this line:

const app = express()

Our simple web server currently sends back a Hello World message when someone visits. Let’s modify our code to use Express instead of the native Node HTTP module we used last time.

This is how Express sends back a Hello World message:

app.get('/', (req, res) => { 
   res.send('Hello World!') 
} )

Hopefully you can see what this is doing; it looks very similar to the http.createServer method we used previously.

The ‘app.get’ means it will listen for regular browser GET requests. If we were sending in form data, we would probably want to instead listen for a POST request with ‘app.post’.

The ‘/’ is the route path pattern that it is listening for, in this case just the root of the server. This path pattern matching is where the power of Express comes in. We can have multiple ‘app.get’ commands matching different paths to map to different things, and we can use wildcards and other clever features to both match and get information out of the incoming URLs. These are the ‘routes’ I mentioned earlier, sort of the equivalent of Domino Web Site Rules. They make it easy to keep the various, often complex, functions of a web site separate and understandable. I will talk more about what we can do with routes in a future blog.
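
As a quick taste of what routes can do (a hypothetical extra route, not part of our Hello World server), a path pattern can capture parts of the URL as named parameters:

app.get('/orders/:orderNo', (req, res) => {
    // req.params.orderNo holds whatever was in that part of the URL,
    // e.g. GET /orders/00101 gives "00101"
    res.send('You asked for order ' + req.params.orderNo);
});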

So our app will listen for a browser request hitting the root of the server, e.g. http://127.0.0.1:3000, which we used last time. The rest of the command is telling the app what to do with it. It is a function (using the arrow ‘=>’ notation) and it takes the request (‘req’) and the response (‘res’) as arguments. We are simply going to send back our Hello World message in the response.

So we now have our simple route set up. The last thing we need to do, same as last time, is to tell our app to start listening:

app.listen(port, hostname, () => { 
   console.log(`Server running at http://${hostname}:${port}/`); 
});

You may notice that this is exactly the same code as last time, except we tell the ‘app’ to listen instead of the ‘server’. This helps illustrate how well Express is built on Node and how integrated it all is.

Our new updated server.js should look like this:

const express = require('express');
const hostname = '127.0.0.1';
const port = 3000;
const app = express();
app.get('/', (req, res) => {
   res.send("Hello World!")
});
app.listen(port, hostname, () => {
   console.log(`Server running at http://${hostname}:${port}/`);
});

This is one less line than before. If we start the server again by typing ‘npm start’ and then browse to http://127.0.0.1:3000, we get our Hello World message!

Now, this is all well and good, but aren’t we just at the same place as when we started? Our Node server is still just saying Hello World, what was the point of all this?

Well, what we have now, that we did not have before, is the basis of a framework for building proper, sophisticated web applications. We can use npm to manage the app and its add-ons and dependencies, the app is self-contained so we can move it around or containerise it, and we have a fully-featured middleware (i.e. Express) framework ready to handle all our web requests.

Using this basic structure, we can build all sorts of things. We would certainly start by adding the upcoming Domino 10 connector to access our Domino data in new ways, and then we could add Angular or React (or your favourite JS client platform) to build a cool modern web UI, or we could make it the server side of a mobile app. If your CIO is talking about microservices, then we can use it to microserve Domino data. If your CIO is talking about REST then we can provide higher-level business logic than the low-level Domino REST API.

In my next blog I plan to talk about more things we can do with Node, such as displaying web pages, about how it handles data (i.e. JSON), and about how writing an app is both similar and different to writing an app in Domino.

How to build your first Node.js app

By Tim Davis - Director of Development.

I have talked a little in previous posts about how excited I am about Node.js coming to Domino 10 from the perspective of NoSQL datastores, but I thought it would be a good idea to talk about what Node.js actually is, how it works, and how it could be integrated into Domino 10. (I will be giving a session on this topic at MWLUG/CollabSphere 2018 in Ann Arbor, Michigan in July).

So, what is Node.js? Put simply, it is a fully programmable web server. It can serve web pages, it can run APIs, it can be a fully networked application. Sounds a lot like Domino. It is a hugely popular platform and is the ‘N’ in the MEAN/MERN stacks. Also it is free, which doesn’t hurt its popularity.

As you can tell from the ‘.js’ in its name, Node apps are written in JavaScript and so integrate well with other JavaScript-based development layers such as NoSQL datastores and UI frameworks.

Node runs almost anywhere. You can run it in Windows, Linux, macOS, SunOS, AIX, and in docker containers. You can even make a Node app into a desktop app by wrapping it in a framework like Electron.

On its own, Node is capable of doing a lot, but coding something very sophisticated entirely from scratch would be a lot of work. Luckily, there are millions of add-on modules to do virtually anything you can think of and these are all extremely easy to integrate into an app.

Now, suppose you are a Domino developer and have built websites using Forms or XPages. Why should you be interested in all this Node.js stuff? Well, IBM and HCL are positioning the Node integration in Domino 10 as a parallel development path, which is ideal for extending your existing apps into new areas.

For example, a Node front-end to a Domino application is a great way to provide an API for accessing your Domino data and this could allow easy integration with other systems, or mobile apps, or allow you to build microservices, or any number of things which is why many IoT solutions are built with Node as a platform, including those from IBM.

In your current Domino websites, you will likely have written or used some JavaScript to do things on your web forms or XPages, maybe some JQuery, or Server-Side JavaScript. If you are familiar with JavaScript in this context, then you will be ok with JavaScript in Node.

So where do we get Node, how do we install it and how do we run it?

Getting it is easy. Go to https://nodejs.org and download the installer. This will install two separate packages, the Node.js runtime and also the npm package manager.

The npm package manager is used to install and manage add-on modules and optionally launch our Node apps. As an example, a popular add-on module is Express, which makes handling HTTP requests much easier (think of Express as acting like Domino internet site documents). Express is the ‘E’ in the MEAN/MERN stacks. If we wanted to use Express we would simply install it with the simple command ‘npm install express’, and npm would go and get the module from its server and install it for us. All the best add-on modules are installed using the ‘npm install xxxxxx’ command (more on this later!).

Once Node is installed, we can run it by simply typing ‘node’ into a terminal or command window. This isn’t very interesting, it just opens a shell that does pretty much nothing on its own. (Press control-c twice to exit if you tried this).

To get Node to actually do stuff, we need to write some JavaScript. A good starting example is from the Node.js website, where we build a simple web server, so let’s run through it now.

Node apps run within a project folder, so create a folder called my-project.

In our folder, create a new JavaScript file, let’s call it ‘server.js’. Open this in your favourite code editor (mine is Visual Studio Code), and we can start writing some server code.

This is going to be a web server, so we require the app to handle HTTP requests. Notice how I used the word ‘require’ instead of ‘need’. If we ‘require’ our Node app to do anything we just tell it to load that module with some JavaScript:

const http = require('http');

This line essentially just tells our app to load the built-in HTTP module. We can also use the require() function to load any other add-on modules we may want, but we don’t need any in this example.

So we have loaded our HTTP module; let’s tell Node to set up an HTTP server, and we do this with the createServer() method. This takes a function as a parameter, and this function tells the server what to do if it receives a request.

In our case, let’s just send back a plain text ‘Hello World’ message to the browser. Like this:

const server = http.createServer((req, res) => {
 res.statusCode = 200;
 res.setHeader('Content-Type', 'text/plain');
 res.end('Hello World!\n');
});

There is some funny stuff going on with the arrow ‘=>’ which you may not be familiar with, but hopefully it is clear what we are telling our server to do.

The (req, res) => {…} is our request handling function. The ‘req’ is the request that came in, and the ‘res’ is the response we will send back. This arrow notation is used a lot in Node. It is more or less equivalent to the usual JavaScript syntax: function(req, res) {…}, but behaves slightly differently in ways that are useful to Node apps. We don’t need to know about these differences right now to get our server working.

In our function, you can see that we set the response status to 200 which is ‘Success OK’, then we make sure the browser will understand we are sending plain text, and finally we add our message and tell the server that we are finished and it can send back the response.

Now we have our server all set up and it knows what it is supposed to do with any request that comes in. The last thing we want it to do is to actually start listening for these requests.

const hostname = '127.0.0.1';
const port = 3000;
server.listen(port, hostname, () => {
 console.log(`Server running at http://${hostname}:${port}/`);
});

This code tells the server to listen on port 3000, and to write a message to the Node console confirming that it has started.

That is all we need, just these 11 lines.

Now we want to get this Node app running, and we do this from our terminal or command window. Make sure we are still in our ‘my-project’ folder, and then type:

node server.js

This starts Node and runs our JavaScript. You will see our console message displayed in the terminal window. Our web server is now sitting there and happily listening on port 3000.

To give it something to listen to, open a browser and go to http://127.0.0.1:3000. Hey presto, it says Hello World!

Obviously, this is a very simple example, but hopefully you can see how you could start to extend it. You could check the ‘req’ to see what the details of the request are (it could be an API call), and send back different responses. You could do lookups to find the data to send back in your response.
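
For example, here is a minimal sketch of checking the request URL and sending back different responses, still using only the built-in HTTP module (the /api/status path is just an example):

const server = http.createServer((req, res) => {
 if (req.url === '/api/status') {
   // a simple hand-rolled API endpoint
   res.statusCode = 200;
   res.setHeader('Content-Type', 'application/json');
   res.end(JSON.stringify({ status: 'OK' }));
 } else {
   res.statusCode = 200;
   res.setHeader('Content-Type', 'text/plain');
   res.end('Hello World!\n');
 }
});

You could extend the branches to do lookups for the data to send back.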

Wait, what? Lookups? Yes, that is where the Node integration in Domino 10 comes in. I know this will sound silly, but one of the most exciting things I have seen in recent IBM and HCL presentations is that we will be able to do this:

npm install dominodb

We will be able to install a dominodb connector, and use it in our Node apps to get at and update our Domino data.
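
Purely as a speculative sketch of what that might look like (the module name, functions and options here are my guesses based on the previews, and will almost certainly differ in the shipping version):

// speculative sketch only - not a shipping API
const { useServer } = require('dominodb');
useServer({ hostName: 'mydomino.example.com' }).then(async (server) => {
    const database = await server.useDatabase({ filePath: 'orders.nsf' });
    const documents = await database.bulkReadDocuments({ query: "status = 'Invoiced'" });
    console.log(documents);
});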

Obviously, there is a lot more to all this than I have covered in the above example, but I hope this has given you an idea of why what IBM and HCL are planning for Domino 10 is so exciting. In future blogs I plan to expand more on what we can do with Node and Domino 10.


So What About Domino @ IBM Connect? Review Post #2

Domino was very visible at Connect this year, not only in both of the opening sessions but in about 40% of the sessions overall. The ones I picked to attend were about strategy and futures, so that’s what I wanted to talk about here.

Verse on premises, which shipped at the end of December 2016, is a very nice browser mail client that is easy to install on your Domino server (and you should), but it’s missing an updated calendar interface, so I was pleased to hear the commitment to deliver that and other functionality to bring on premises in line with Verse in the cloud. If you don’t have Verse installed on premises on your Domino servers now, you need to be looking at it as your path forward.

Feature packs continue to be the strategic path, with updates coming via FP installers but with template updates slipstreamed in optionally and separately downloadable through Fix Central. I wouldn’t look for the templates to ship in step with the feature packs, so you’re going to have to plan to subscribe to Fix Central for updates if you aren’t already.

From Barry Rosen’s strategy presentation, here are a couple of snapshots showing planned feature pack features, including those for FP8, which should ship soon.

Notes Feature Pack highlights

Domino Application Development feature pack highlights (FP8 shipping soon)


For application design, the path IBM appear to be on is one we and many other Business Partners have been pursuing for some time: Domino as a back-end data store with a web-based UI on whatever platform you choose. To that end, the really good news is that we will finally be getting some extensions to the existing REST APIs, including ones for:

  • Directory
  • Mail Contacts
  • Mail File Search
  • Polling for changes in databases

In addition, the application modernisation story at the conference was focused around partner solutions. Of particular interest is Panagenda’s ApplicationInsights tool, coming in a freemium model to all maintenance customers in Q2. That version, I believe, will allow you to analyse your most prominent existing applications and instances to see what is being used, by whom, and how. More information about it can be found here.

So lots of Domino sessions, lots of talk of future client and server developments, lots of confirmation of support at least to 2021. For a nearly 30-year-old product, that’s not bad going. With the investment in Verse and the introduction of cognitive features in on premises applications, as well as a cognitive plugin for Notes, I’m feeling positive about where we are and the support IBM are offering.

Oh, and my watchword for 2017 continues to be “Hybrid”.


Connections 5 Customisations - Problems With Stylesheets

This weekend we upgraded a site with heavy customisation from Connections 4.5 to Connections 5 CR1. Part of that migration was using the lc-migrate tool to export and import the artifacts, and ensuring the customisations (customizations for any Google searches!) were in place. All seemed to be fine for a couple of weeks, but then suddenly our custom stylesheet was replaced by the default Connections 5 theme.

That made no sense: no changes had been made, and the CSS and images were still in the right place under /customizations/themes/defaultTheme, where they had always been. Looking at SystemOut, everything seemed fine. I cleared the temp folders (/profiles/AppSrv01/temp and wstemp, as well as /config/temp), tried updating the version stamp using wsadmin (LCConfigService.updateConfig("versionStamp","")), and restarted EVERYTHING, but no luck.

Luckily my subsequent PMR ended up with Susan, who remembered an internal PMR that referenced changes in how customisations work. Specifically, relative URLs for images no longer work when used in stylesheets, so

url("images/customersite.gif")

has to become

url("/com.ibm.lconn.core.styles.oneui3/gen4Theme/images/customersite.gif")

The detail for this isn’t in the documentation that I could find, but this IBM’er has a great blog piece on it:

Paul Godby Connections Customization

In addition, the defaultTheme folder (as specified in the documentation) no longer works for custom stylesheets. You have to use a folder called gen4Theme and move the stylesheets in there.

Luckily I was working on this with the amazing Mark Myers, who pulled out all the stops and got the CSS changed and working (dynamic sizing and all) overnight.

..aaaannnndd we’re back in business. Go Team!

P.S. The reason it had looked fine for us for weeks, across several people/machines and browsers, was caching of the original design elements.

Connections DB Schemas

A fantastic visual representation of the key relationships in the Connections database schemas by Mark Myers. None of this is documented publicly by IBM, so this is entirely Mark’s effort to take it apart and document it. Some of us have tried it in pieces, but this is by far the most comprehensive and useful attempt to document the underlying architecture I’ve seen.

Another one for the bookmarks…

http://www.stickfight.co.uk/blog/Connections-Db-Schema-Tip2-Finding-the-UserID