Building a stack in Node.js

[Update: Since I posted this article, I have been informed that the domino-db node integration will be available in beta with Domino10 in a week’s time!]

With Domino 10 nearly upon us, and the Node integration hopefully following soon after, I thought I would talk about building a full-stack application in Node.js, covering how modern JavaScript UI frameworks can be built on top of Node.js and integrated with Domino in the background as a datastore.

This is all part of the IBM and HCL strategy of having Node.js as a parallel development platform alongside the standard Domino development tools, with Node providing a way for web developers to extend existing Domino apps and datastores. However, you don’t have to wait for the Domino node module to start learning about this.

If you consider the development stack acronyms of DEAN and DERN (or NERD), the UI framework is the 'A' and the 'R'. These initials refer to Angular and React respectively, but they apply generally to any JavaScript framework, and there are many very good ones.

The main advantage of a development stack is that the UI layer can be independent of the middleware, server, and datastore layers and so you can replace or modify the UI without impacting the rest of the architecture. As an example, you might want to do this to extend an existing web app onto a mobile platform which may require a different UI.

A review of popular frameworks and their features and advantages is beyond the scope of this post, and I may return to that later, but for now I would like to get into the broader topic of how this all fits together, i.e. how does a JavaScript UI sit 'on top of' Node?

The first thing to understand is that UI frameworks such as Angular and React have nothing to do with Node.js. They are not part of Node.js and do not require Node.js to work. They all run perfectly happily on any web server, including Domino. When we use a UI framework with Node, Node is essentially acting as a web server, serving the framework web pages to a browser, which then loads the pages and runs the framework code.

You can run framework components inside a regular Domino web form on a Domino server, but the advantage of using the JavaScript development stack is two-fold. First, the stack is all JavaScript, so it makes it easy to talk between layers because all the data is JSON. Second, we are opening up Domino to a new development arena, with an established community, support resources and third-party products.

JavaScript UI frameworks come in all sizes and flavours, but they all work essentially the same way. You embed some code in your HTML web pages and then call the framework library (usually a js file) to enable the framework within these pages. The key thing is that everything is held in the usual web files and folders which are stored on the web server file system. Your server (whether it is Domino, Node, or Apache) simply serves up these files when a browser hits it.

In my earlier post, I talked about how you can use Express to provide middleware routes in your Node server. Here is an example which uses a static route to serve up the contents of the ‘public’ folder on the server:

const express = require('express');
const app = express();

// serve everything in the 'public' folder as static files
app.use(express.static(__dirname + '/public'));

const hostname = '127.0.0.1';
const port = 3000;
app.listen(port, hostname, () => {
   console.log(`Server running at http://${hostname}:${port}/`);
});

By default, the server opens the index.html file in that folder. Luckily, this is the default filename used in most UI frameworks, so it is very easy to put your framework web app and all its files and folders in the public folder and run it from your website. The folder structure could look something like this:

node
|   server.js
|   package.json
|
+---node_modules
|   \---(node stuff...)
|
+---public
|      index.html
|      other framework files...
|      +---framework folders...
|      \---etc

The Node server runs from server.js (using modules such as Express, which are installed under the node_modules folder), and serves up the framework UI starting with the index.html in the public folder.

So we have a nice UI running on our Node server. Now we need to connect this UI to our data backend. In a Domino web form we would typically make a call to a web agent, which would return whatever Domino data we need. Here in Node we do pretty much the same thing: our UI will make a call to an API to get the data.

Node is great at two important roles - being a web server and being an API gateway. So we can add middleware routes to our Node server to handle these API calls.

Let’s suppose we want to get a list of people to display on-screen. In our UI framework we will make a call to an API on the same Node server, something like this:

let uri = myDomain + "/api/people";
request.get( uri ).then( displayTheResults );
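
Note that 'request' here stands for whatever HTTP client your framework provides. If your framework doesn't bundle one, the browser's built-in fetch API does the same job (a sketch, with displayTheResults being the same hypothetical render function):

let uri = myDomain + "/api/people";
fetch( uri )
   .then( (response) => response.json() )
   .then( displayTheResults );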

So this request would hit our Node server with the route “/api/people”. Currently, our server doesn’t know what to do with this, so let’s add in a new Express route to handle it.

app.get('/api/people', async (req, res) => {
   // doLookup() is asynchronous, so await the results before responding
   let myResults = await doLookup();
   res.json( myResults );
})

This will catch requests coming in to “myDomain.com/api/people”, and then our middleware code does the lookup and sends the results back in the response as JSON.

Notice how these methods handle all the JSON stuff for us, making it easy to pass the data back and forth.

This is all we have to do to get our server responding to the API call. Now we need to look at how we get data from the backend, i.e. what happens in our doLookup().

If we suppose we are going to be using the upcoming domino-db module, then we can use its methods to run queries on Domino data. The details of exactly how the domino-db module works are still under NDA right now, but it could be something like this:

function doLookup() {
   const { domServer } = require('domino-db');
   // return the promise chain so that our route handler can await the results
   return domServer(serverConfig).then(
      async (server) => {
         const db = await server.useDatabase(dbConfig);
         const docs = await db.bulkReadDocuments(DQLquery);
         return docs;
   });
}

This example function would run a Domino Query Language query on a Domino database which returns a JSON array of document objects, i.e. ‘docs’.

We pass this back as the return value of our doLookup function and this will be sent out as the response from our API route.

Back in our front-end framework UI, we receive this JSON data in our ‘request.get’ call and we can then go ahead and call our ‘displayTheResults’ function.

This really is pretty much all there is to it. We can easily get data, in a standard JSON format, all the way from the datastore to the UI front-end, without needing fiddly data manipulation and all in only a few lines of code.

Also, what is great is that the UI framework is separate from the Express routes and these are separate from the datastore. They can all be on different servers, and we can even have different UIs accessing the same API routes to get at the same data, for example separate web apps and mobile apps.

I hope this gives you an idea of how we will be able to go about building a full-stack application in a DEAN/DERN environment. In my next blog I plan to expand on how we can use Express to build useful routes to do all sorts of things, such as performing CRUD operations and incorporating business logic.

Deletion Logs - What’s Coming In V10

So, deletion logs. Currently (without custom code) we cannot tell who deleted a document, which document they deleted, or in which database. With v10, deletion logging is now a standard trigger on the database that creates an entry in a delete.log file in the IBM_TECHNICAL_SUPPORT directory, detailing every deletion.

So how does it work?

Deletion logging is enabled via the compact task on an individual database basis. The -dl option is used when compacting a database, along with the fields in that database you want to be part of the log. For example, if I wanted to turn it on for my mail file I might do:

load compact mail\gdavis.nsf -dl on subject,posteddate,sendto,recipient

Every deletion after that point would then be logged as a single CSV entry in delete.log. Note that there are standard values that are always logged in addition to the custom fields I requested:

"20180210T211516,06+01","Mail\gdavis.nsf","80256487:00352154","nserver","CN=Traveler/O=Turtle","SOFT","0001","72C0E3F8:44B53FB5DC4EDBF8:A785466D","from","""New Relic"" <marketing@newrelic.com>","sendto","gabriella@turtle.com","deliveredDate","02/10/2018 21:05:05","posteddate","02/10/2018 16:15:18"

There are several interesting aspects to this approach, but I see it being particularly powerful for audit purposes, as it shows not only the message but also the timestamp of the deletion and who did it. Note that the server name in the log entry here tells me my Traveler server did the deletion, so it was done from my phone; if it had been deleted in the Notes client, my name would be there as the person who did the deletion.

The delete.log itself rolls over each time the server is restarted, but depending on the size of your environment and how widely you deploy deletion logging, that is a CSV file you are going to want to have a strategy for.

7 days and counting


Taking Your Pick Of Global Launch Events #Domino10

The countdown is now only 10 days: on October 9th, Domino and Notes v10, the first major release in several years and the first since HCL took over ownership of development, has its huge launch. You can attend the launch event in person in Frankfurt (yay Europe!) or attend via livestream.

To attend the October 9th launch event, either in person or remotely, register here.

The next day, on October 10th, there are several global post-launch events, including many in cities across Europe hosted by IBM, HCL and partners, to answer your questions in person.

I will be attending the London event at IBM South Bank, which is hosted by Andrew Manby, Worldwide Director, Offering Management, IBM. Turtle have recently become certified as a Domino 10 Partner Ready company and we have been working heavily with the latest beta, so I look forward to seeing and talking to you there. You can register for the London event here.

Theo Heselmans and Engage will be hosting an event in Belgium with presenters from both IBM and HCL, as well as a presentation from Theo himself. Uffe Sorensen leads IBM's Notes/Domino Messaging & Collaboration business worldwide, and Barry Rosen is the Director for Products and Platforms at HCL Technologies. Register for the Belgium event here.

Belsoft and Icon Switzerland will be hosting the event in Zurich with Bob Schultz (Watson Talent & Collaboration General Manager) and Richard Jefts (Vice President and General Manager, Collaborative Workflow Platform) from HCL. Register for the Swiss event here.

For a full list of the global events you can attend at no cost on October 10th, as well as the speakers in each location, see (and register) here.


The story of Async in JavaScript

By Tim Davis – Director of Development

In my last post I talked about some JavaScript concepts that will be useful when starting out with Node.js. This time I would like to talk about a potentially awkward part of JavaScript: asynchronous (async) operations. It is a bit of a long story, but it does have a happy ending.

So what is an asynchronous operation? Basically, it means a function or command that goes off and does its own thing while the rest of the code continues. It can be really useful or really annoying depending on the circumstances.

You may have used async code if you ever did AJAX calls to Domino web agents for lookups on web pages. The rest of the page loads while the lookup to the web agent comes back, and the user is happy because part of the page updates in the background. This is brilliant and is the classic use case for an async function.

This asynchronous behaviour is built into JavaScript through and through and you need to bear it in mind when you do any programming in Node.

So how does this async behaviour manifest itself? Let's look at an example. Suppose we have an asynchronous function that goes and does a lookup somewhere.

function doAsyncLookup() {
    ... do the lookup ...
    console.log("got data");
}
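
To make this concrete, here is a minimal runnable stand-in for doAsyncLookup. The setTimeout and the dummy results are my own, purely to simulate a slow lookup:

function doAsyncLookup(callback) {
    // simulate a lookup that takes 100ms to complete
    setTimeout( () => {
        console.log("got data");
        if (callback) callback(["some", "results"]);
    }, 100);
}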

Then suppose we call this function from our main code, something like this:

console.log("start");
doAsyncLookup();
console.log("finish");

The output will be this:

start
finish
got data

By the time the lookup has completed it is too late; the code has moved on.

So how do you handle something like this? How can you possibly control your processes if things finish on their own?

The original way JavaScript async functions allowed you to handle this was with ‘callbacks’.

A callback is a function that the async function calls when it is finished. So instead of your code continuing after the async function is called, it continues inside the async function.

In our example a callback could look something like this:

function myCallback() {
    console.log("finish");
}
console.log("start");
doAsyncLookup( myCallback );

Now, the output would be this:

start
got data
finish

This is much better. Usually, the callback function receives the results of the async function as a parameter, so it can act on those results. So in examples of callbacks around the web, you might see something like:

function myCallback( myResults ) { 
    displayResults( myResults );
    console.log("finish"); 
} 
console.log("start"); 
doAsyncLookup( myCallback );

Often the callback function doesn't need to be defined separately and is instead defined inline, in the call to the async function, as a sort of shorthand, so you will probably see a lot of examples looking like this:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    displayResults( myResults ); 
    console.log("finish"); 
} );

This is all great, but the problem with callbacks is that you can easily get a confusing chain of callbacks within callbacks within callbacks if you want to do other asynchronous stuff with the results.

For example, suppose you do a lookup to get a list, then want to look up something else for each item in the list, and then maybe update a record based on that lookup, and finally write updates to the screen in a UI framework. In a JavaScript environment it is highly likely that each of these operations is asynchronous. You end up with a confusing chain of functions calling functions calling functions stretching off to the right, with all the attendant risk of coding errors that you would expect:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    lookupItemDetails( myResults, function ( myDetails ) {
        saveDetails( myDetails, function ( saveStatus ) {
            updateUIDisplay( saveStatus, function ( updatedOK ) {
                console.log("finish");
            } );
        } );
    } );    
} );

It gets even worse if you add in error handling. We may have solved the async problem, but at the penalty of terrible code patterns.

Well, after putting up with this for a while the JavaScript world came up with a better version of callbacks, called Promises.

Promises are much more readable than callbacks and have some useful additional features. You pass the results of each function to the next with a ‘then’, and you can just add more ‘thens’ on the end if you have more async things to do.

Our nightmare-indented example above becomes something like this (here I am using the popular arrow notation for functions, see my previous article for more on them):

console.log("start"); 
doAsyncLookup()
.then( (myResults) => { return lookupItemDetails(myResults) } )
.then( (myDetails) => { return saveDetails(myDetails) } )
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } )
.then( (updatedOK) => { console.log("finish") } );

This is much nicer. We don’t have all that ugly nesting.

Error handling is easier, too, because you can add a ‘catch’ to the end (or in the middle if you need) and it is all still much more clear and understandable:

console.log("start"); 
doAsyncLookup() 
.then( (myResults) => { return lookupItemDetails(myResults) } ) 
.then( (myDetails) => { return saveDetails(myDetails) } ) 
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } ) 
.then( (updatedOK) => { console.log("finish") } )
.catch( (err) => { ... handle err ... } );

What is really neat is that you can create your own promises from existing callbacks, so you can tidy up any older messy async functions.
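
For example, a callback-style function like our doAsyncLookup can be wrapped in a promise in just a few lines (a sketch, based on the callback signature used above):

function doAsyncLookupPromise() {
    return new Promise( (resolve) => {
        // resolve the promise when the old-style callback fires
        doAsyncLookup( (myResults) => resolve(myResults) );
    });
}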

Promises also have some great added features which help with other async problems. For example, with 'Promise.all' you can fire off a list of async calls together and wait for them all to complete.
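
A quick sketch of this (lookupA, lookupB and lookupC are hypothetical async functions that each return a promise):

Promise.all([ lookupA(), lookupB(), lookupC() ])
    .then( ([resultA, resultB, resultC]) => {
        // this runs only once all three lookups have completed
        console.log("all done");
    });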

So promises solved the callback nesting problem, but The Gods of JavaScript were still not satisfied.

Even with all these improvements, this code is still too ‘asynchronous’. It is still a chain of function after function and you have to pay attention to what is passed from one to the next, and remember that these are all asynchronous and be careful with your error handling.

Once upon a time, Willy Wonka gave us ‘square sweets that look round’, and so now TGOJ have given us ‘asynchronous functions that look synchronous’.

The latest and greatest advance in async handling is Async/Await.

All you need to do is make your main function ‘async’, and you can ‘await’ all your promises:

async function myAsyncStuff() {
    console.log("start"); 
    let myResults = await doAsyncLookup();
    let myDetails = await lookupItemDetails(myResults);
    let saveStatus = await saveDetails(myDetails);
    let updatedOK = await updateUIDisplay(saveStatus); 
    console.log("finish");
}

How cool is this? Each asynchronous function runs in order, with no messy callbacks or chains of ‘thens’. They all sit in their own line of code just like regular functions. You can do things in between them, and you can wrap them in the usual try/catch error handling blocks. All the async handling stuff is gone, and this is done with just two little keywords.
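
For instance, error handling in our example becomes a plain try/catch block (a sketch using the same hypothetical functions as above):

async function myAsyncStuff() {
    try {
        let myResults = await doAsyncLookup();
        let myDetails = await lookupItemDetails(myResults);
        console.log("finish");
    } catch (err) {
        // a rejection anywhere in the awaits above lands here
        console.log("something failed: " + err);
    }
}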

Plus, the functions are all still promises, so you can do promise-y things with them if you want to, and you can create and ‘await’ your own promises to refactor and revive old callback code.

Async/Await is fully supported by Node.js, by popular UI frameworks like Angular and React, and by all modern browsers.

One of the biggest headaches in JavaScript development now has an elegant and usable solution and they all lived happily ever after.

I hope you enjoyed this little story. I told you it had a happy ending.

How to build your first Node.js app

By Tim Davis - Director of Development.

I have talked a little in previous posts about how excited I am about Node.js coming to Domino 10 from the perspective of NoSQL datastores, but I thought it would be a good idea to talk about what Node.js actually is, how it works, and how it could be integrated into Domino 10. (I will be giving a session on this topic at MWLUG/CollabSphere 2018 in Ann Arbor, Michigan in July).

So, what is Node.js? Put simply, it is a fully programmable web server. It can serve web pages, it can run APIs, it can be a fully networked application. Sounds a lot like Domino. It is a hugely popular platform and is the ‘N’ in the MEAN/MERN stacks. Also it is free, which doesn’t hurt its popularity.

As you can tell from the ‘.js’ in its name, Node apps are written in JavaScript and so integrate well with other JavaScript-based development layers such as NoSQL datastores and UI frameworks.

Node runs almost anywhere. You can run it on Windows, Linux, macOS, SunOS, and AIX, and in Docker containers. You can even turn a Node app into a desktop app by wrapping it in a framework like Electron.

On its own, Node is capable of doing a lot, but coding something very sophisticated entirely from scratch would be a lot of work. Luckily, there are hundreds of thousands of add-on modules to do virtually anything you can think of, and these are all extremely easy to integrate into an app.

Now, suppose you are a Domino developer and have built websites using Forms or XPages. Why should you be interested in all this Node.js stuff? Well, IBM and HCL are positioning the Node integration in Domino 10 as a parallel development path, which is ideal for extending your existing apps into new areas.

For example, a Node front-end to a Domino application is a great way to provide an API for accessing your Domino data. This could allow easy integration with other systems or mobile apps, or allow you to build microservices, or any number of other things, which is why many IoT solutions are built with Node as a platform, including those from IBM.

In your current Domino websites, you will likely have written or used some JavaScript to do things on your web forms or XPages, maybe some JQuery, or Server-Side JavaScript. If you are familiar with JavaScript in this context, then you will be ok with JavaScript in Node.

So where do we get Node, how do we install it and how do we run it?

Getting it is easy. Go to https://nodejs.org and download the installer. This will install two separate packages, the Node.js runtime and also the npm package manager.

The npm package manager is used to install and manage add-on modules, and optionally to launch our Node apps. As an example, a popular add-on module is Express, which makes handling HTTP requests much easier (think of Express as acting like Domino internet site documents). Express is the 'E' in the MEAN/MERN stacks. If we wanted to use Express we would install it with the simple command 'npm install express', and npm would go and get the module from its server and install it for us. All the best add-on modules are installed using the 'npm install xxxxxx' command (more on this later!).
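
After an install, npm records the module in your project's package.json file, which ends up looking something like this (the version number is just illustrative):

{
  "name": "my-project",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.16.0"
  }
}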

Once Node is installed, we can run it by simply typing 'node' into a terminal or command window. This isn't very interesting; it just opens a shell that does pretty much nothing on its own. (Press Ctrl+C twice to exit if you tried this.)

To get Node to actually do stuff, we need to write some JavaScript. A good starting example is from the Node.js website, where we build a simple web server, so let’s run through it now.

Node apps run within a project folder, so create a folder called my-project.

In our folder, create a new JavaScript file; let's call it 'server.js'. Open this in your favourite code editor (mine is Visual Studio Code), and we can start writing some server code.

This is going to be a web server, so we require the app to handle HTTP requests. Notice how I used the word ‘require’ instead of ‘need’. If we ‘require’ our Node app to do anything we just tell it to load that module with some JavaScript:

const http = require('http');

This line essentially just tells our app to load the built-in HTTP module. We can also use the require() function to load any other add-on modules we may want, but we don’t need any in this example.

So we have loaded our HTTP module; let's tell Node to set up an HTTP server, and we do this with the createServer() method. This takes a function as a parameter, and this function tells the server what to do when it receives a request.

In our case, let's just send back a plain text 'Hello World' message to the browser. Like this:

const server = http.createServer((req, res) => {
 res.statusCode = 200;
 res.setHeader('Content-Type', 'text/plain');
 res.end('Hello World!\n');
});

There is some funny stuff going on with the arrow ‘=>’ which you may not be familiar with, but hopefully it is clear what we are telling our server to do.

The (req, res) => {…} is our request handling function. The ‘req’ is the request that came in, and the ‘res’ is the response we will send back. This arrow notation is used a lot in Node. It is more or less equivalent to the usual JavaScript syntax: function(req, res) {…}, but behaves slightly differently in ways that are useful to Node apps. We don’t need to know about these differences right now to get our server working.

In our function, you can see that we set the response status to 200 which is ‘Success OK’, then we make sure the browser will understand we are sending plain text, and finally we add our message and tell the server that we are finished and it can send back the response.

Now we have our server all set up and it knows what it is supposed to do with any request that comes in. The last thing we want it to do is to actually start listening for these requests.

const hostname = '127.0.0.1';
const port = 3000;
server.listen(port, hostname, () => {
 console.log(`Server running at http://${hostname}:${port}/`);
});

This code tells the server to listen on port 3000, and to write a message to the Node console confirming that it has started.

That is all we need, just these 11 lines.

Now we want to get this Node app running, and we do this from our terminal or command window. Make sure we are still in our ‘my-project’ folder, and then type:

node server.js

This starts Node and runs our JavaScript. You will see our console message displayed in the terminal window. Our web server is now sitting there and happily listening on port 3000.

To give it something to listen to, open a browser and go to http://127.0.0.1:3000. Hey presto, it says Hello World!

Obviously, this is a very simple example, but hopefully you can see how you could start to extend it. You could check the ‘req’ to see what the details of the request are (it could be an API call), and send back different responses. You could do lookups to find the data to send back in your response.
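
For example, a minimal sketch of branching on the request might look like this (the '/api/ping' route and its JSON payload are made up for illustration):

const server = http.createServer((req, res) => {
 if (req.url === '/api/ping') {
  // an API-style JSON response
  res.statusCode = 200;
  res.setHeader('Content-Type', 'application/json');
  res.end(JSON.stringify({ ok: true }));
 } else {
  res.statusCode = 200;
  res.setHeader('Content-Type', 'text/plain');
  res.end('Hello World!\n');
 }
});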

Wait, what? Lookups? Yes, that is where the Node integration in Domino 10 comes in. I know this will sound silly, but one of the most exciting things I have seen in recent IBM and HCL presentations is that we will be able to do this:

npm install dominodb

We will be able to install a dominodb connector, and use it in our Node apps to get at and update our Domino data.

Obviously, there is a lot more to all this than I have covered in the above example, but I hope this has given you an idea of why what IBM and HCL are planning for Domino 10 is so exciting. In future blogs I plan to expand more on what we can do with Node and Domino 10.


Domino 10 vs NoSQL

By Tim Davis – Director of Development.

With Domino 10 bringing Node.js, and my experience of JavaScript stacks over the past few years, I am very excited about the opportunities this brings both for building new apps and for extending existing ones. I plan to talk about Node in future blogs, and I am giving an introduction to Node.js at MWLUG/CollabSphere 2018 in Ann Arbor, Michigan in July. However, I would like to digress from the main topic of Node and Domino itself, and talk a little about a side effect I am hoping this new feature will have: raising awareness of Domino in the JavaScript full-stack development space.

There is a plethora of NoSQL database products. In theory, Domino could always have been a NoSQL database layer, but there was no real reason for any JavaScript stack developer to even consider it. It would never appear in any suggested lists or articles, and it would have required some work to provide an appropriate API.

The thing is, working in the JavaScript stack world, I was made very aware that pretty much all the available NoSQL database products did not appear very sophisticated compared to Domino (or most other major enterprise databases: Oracle, DB2, SAP, MS-SQL, etc). The emphasis always seemed to be on ease of use and very simple code capabilities, and not much else.

Now, in and of itself this is a worthy goal, but it doesn't take long before you begin to notice the features that are missing, especially when you compare these products to Domino, which can now properly call itself a JavaScript stack NoSQL database.

Popular NoSQL databases include MongoDB, Redis, Cassandra, and CouchDB. As with all NoSQL databases, each was built to solve a particular problem.

MongoDB is the one you have most likely heard of. It is the 'M' in the MEAN/MERN stacks. It is very good at scaling and 'sharding', which is partitioning data across many servers. It also has a basic replication model for redundancy.

Redis is an open source database whose power is speed. It holds its data in RAM, which is super-fast but not so scalable.

Cassandra came from Facebook, and is a kind of mix of table data and NoSQL. It is good for very large volumes of data, such as IoT workloads.

CouchDB was originally developed by Damien Katz, formerly of Lotus, and its key feature is replication, including to local devices, making it good for mobile/offline solutions. It also has built-in document versioning, which improves reliability but can result in large storage requirements.

Each product has its own flavour and is suited to different applications. But there are many key features which Domino provides that we are used to being able to utilise, and while some of these products may have similar features, none of them has a proper equivalent for them all.

Read and Edit Access: Domino has incredibly sophisticated read and edit control, applying to individual documents and even down to field level. You can provide access through names, groups and roles, and all of this is built in. In the other products, anything like this pretty much has to be emulated by specifying filters in queries. You are effectively rolling your own security. In Domino, if you are a user not in the reader field then you can't read the document, no matter how you try to access it.
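
To see the difference, in a typical NoSQL product reader access ends up as just another query condition that every piece of code must remember to apply. A MongoDB-style sketch (the collection and field names are mine):

// every query must remember to apply the reader filter itself;
// forget it once and the documents are exposed
db.collection('people').find({ readers: { $in: currentUserNames } });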

Replication and Clustering: One of Domino's main strengths has always been its replication and clustering model. Its power and versatility are still unsurpassed. Some solutions, such as MongoDB and CouchDB, have their own replication features, and these do provide valuable resilience and/or offline potential, but Domino offers the finest-grained control and the most versatile distributed data capabilities.

Encryption: Domino really does encryption well. Most other NoSQL products do not have any. Some have upgrades or add-on products that provide encryption services to some degree, but certainly none have document-level or field-level encryption. You would have to write your own code to encrypt and decrypt the individual data elements.

Full Text Indexing: Some of the other products, such as MongoDB, do have a full text index feature, but these tend to be somewhat limited in their search syntax. You can use add-ons which provide good search solutions, such as Solr or Elasticsearch, but those indexing solutions themselves have little security, whereas Domino has secure full text indexing built in.

Other Built-in Services: Domino is not just a database engine. It has other services that extend its reach. It has a mail server, it has an agent manager, it has LDAP, it has DAOS. With the other products you would need to provide your own solution for each of these.

Historically, a big advantage for the other datastores was scalability, but with Domino 10 now supporting databases up to 256GB this becomes less of an issue.

In general, all the other NoSQL products do have the same main advantage, the one which gave rise to their popularity in the first place, and this is ease of use and implementation. In most cases a developer can spin up a NoSQL database without needing the help of an admin. Putting aside the issue of whether this is actually a good idea for enterprise solutions, with containerization Domino can now be installed just as easily.

I hope this brief overview of the NoSQL world has been helpful. I believe Domino 10 will have a strong offering in a fast-growing and popular development space. My dream is that at some point Domino becomes known as a full stack datastore, and that because of its full and well-rounded feature set, new developers in startups looking for database solutions will choose it, and CIOs in large enterprises with established Domino app suites will approve further investment in the platform.

What is NoSQL?

By Tim Davis - Director of Development.

I have been working with the MEAN/MERN stacks for a few years, and with Domino 10 looking to introduce Node.js support, Domino itself is following me into the 'World of Node'. This world is the full-stack web developer world of MEAN, MERN, and all things JavaScript, and in this world NoSQL is king.

The MEAN/MERN development stacks have been around for a while. They stand for MongoDB, Express, Angular/React, and Node. (The other main web development stack is LAMP: Linux, Apache, MySQL, PHP/Perl/Python.)

The reason the MEAN/MERN stacks have become so popular is that they are all based on the same language, i.e. JavaScript, and they all use JSON to hold data and pass it between each layer. It's JSON all the way down.

You may already be using Angular or React as a front end in your Domino web applications. With the introduction of Node into the Domino ecosystem, this becomes even more powerful. Domino can become the NoSQL database layer of a full JavaScript stack (e.g. DEAN or NERD) and, most importantly in my view, Domino becomes a direct competitor to the existing NoSQL datastores, such as Mongo and Couch, which are so popular with web developers and CIOs.

So what exactly is NoSQL?

As you can tell by the name, it is not SQL. SQL datastores are traditional relational databases and are made up of tables of data which are indexed and can be queried using the SQL syntax. Examples are DB2, Oracle, and MySQL. The tables are made up of rows with fixed columns and all records in a table hold the same fields.

NoSQL data is not held in tables. It is held in individual documents which can each hold any number of different fields of different sizes. You can query these documents to produce collections of documents which you can then work with.

Does this sound familiar? Yes, this is exactly how Domino works! Domino was NoSQL before NoSQL.

The main advantage of NoSQL over SQL in app development is that it allows for more flexibility in data structures, either as requirements evolve or as your project scales. It also allows for something called denormalization, which is where you hold common data in each document rather than doing SQL joins between different tables, and this can make for very efficient queries. Again, this is how Domino has always worked. Notes views are essentially NoSQL queries.
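
As a quick illustration of denormalization (the field names are my own), an order document simply carries its customer details inline instead of joining to a separate customers table:

// no join needed: the customer data travels with the order
const order = {
    orderNo: 1234,
    item: "Widget",
    customer: { name: "Acme Ltd", city: "London" }
};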

In addition to all this, when NoSQL is used in a JavaScript development stack, the use of JSON as the data format means that the data does not need to be reformatted as it passes up and down the stack, with less chance of errors occurring.

Now obviously the note items inside Domino documents aren't held as JSON, and this would be an issue when looking to integrate Domino into a JavaScript stack, but the Node server solution being introduced in Domino 10 solves this problem.

The Node server in Domino 10 comes with a 'connector' to do the work of talking to Domino. It is based on LoopBack (an IBM initiative) and gRPC (which originated at Google) and promises to be very fast. Having this connector built in means that you as the developer do not need to worry about how to get data out of Domino. HCL have done that job for you. All you have to worry about is what to do with the data once you have it, e.g. send it out as a response from an API, show it in Angular or React, or whatever.

This is all very exciting as a developer, especially one like me who has worked with JavaScript stacks for a while, but as I mentioned earlier, the power of this solution is that it moves Domino into a position to compete directly with other NoSQL databases in IT strategies.

In my next blog I will talk about the advantages that Domino brings to the NoSQL table, and why I believe it is the best NoSQL solution for full-stack JavaScript development.