How to use the new Domino Query Language

By Tim Davis – Director of Development.

Before I talk about building a Domino-based API gateway on Node.js, I thought it would be a good idea to expand a little on how to use the new Domino Query Language (DQL) to run queries on Domino documents.

DQL is the new query language that comes with Domino 10. It is separate from the other query syntaxes in Domino, such as the Full Text Search syntax, or @formula selection criteria. You can use it in other places than Node.js, such as LotusScript and Java, and you can even test your queries from the command line.

What is cool about DQL is that it runs using the new Design Catalog system database, GQFdsgn.nsf. This does a lot of heavy lifting in the background with pre-built design data which makes running the queries more efficient. I am told this will not always be part of the DQL engine, but for now you do need it.

The DQL processing engine also does clever prioritizing of the components of the query to make the process as efficient as possible. You can see how this is done using the ‘explain’ tool (see below).

The syntax is pretty straightforward. Here are some simple examples to give you an idea:

customer = 'Bob Smith'
age > 21
form = 'Person' and type = 'Client'

You have all the usual operators such as =, >, <, <=, >=, and, or, not. You can wrap elements in parentheses ( ) to control the precedence.
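
For example, you can combine these to build compound queries (the field names here are just for illustration):

form = 'Order' and ( status = 'Pending' or total >= 1000 ) and not region = 'EMEA'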

String values are enclosed in single quotes. You can duplicate quotes in your strings to escape them, e.g.:

name = 'James O''Brien'

You can also use lists with the ‘in’ operator, like this:

form = 'Person' and type in ( 'Client', 'Supplier', 'Agency' )

DQL comes with some useful built-in functions that you can use in your queries:

@Created
@ModifiedInThisFile
@DocumentUniqueID

You can search using dates, which are included using the built-in @dt function and are in RFC3339 format. This looks a bit clunky but is pretty clear once you get used to it:

@Created > @dt('2018-01-01T00:00:00+0500')

A cool feature is being able to search in view columns. You use the view name or alias and the programmatic column name, like this:

'PendingOrders'.orderNo = '000101'

These view and column searches are fast because the design analysis of the database has already been done and stored in the design catalog. However, if the database isn't in the catalog then this kind of search will fail. Adding databases to the catalog is an optional step you only need if you want to perform this kind of query, but being in the catalog also helps the performance of regular searches, so it is always recommended.

One thing to bear in mind is that currently you can only search on regular summary fields in the documents, which means you can’t search in rich text items yet. I am told this is coming later.

Also, if your query generates an error during processing then no results are returned at all. You don't get partial results (unlike some SQL behaviour). This is not usually a problem, provided you know to expect it.

With all of this you should have everything you need to run pretty much any query you want, with plenty of flexibility and control.

Talking of control, DQL comes with a command-line DomQuery tool to test your queries. It has an -e (explain) option that will deconstruct how your query will be processed and detail how it will perform. It splits out each element of the search and lists the order in which they are processed, how long each takes, and how the results get filtered down step by step. You can use this to test your searches and tune them to make sure they run efficiently and perform well for your users.
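
For example, a test run with the explain option might look something like this (the exact command switches are an assumption from memory, so check the tool's help output for your version):

domquery -f orders.nsf -q "form = 'Order' and customer = 'ACME'" -e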

However, just in case you do end up running a query that gets out of hand, there are limits in place to protect the server. The limits are:

  • MaxDocsScanned – maximum allowable NSF documents scanned (default 200,000)
  • MaxEntriesScanned – maximum allowable index entries scanned (default 200,000)
  • MaxMsecs – maximum time consumed in milliseconds (default 120,000, i.e. 2 minutes)

You can override these limits system-wide using notes.ini settings if you need to, but you should really review your query's behaviour before you consider doing this.

The DQL engine is continuously being enhanced and the new Domino 10.0.1 version adds support for query arguments (also known as substitution variables). You use query arguments when building your queries to help avoid code injection. This works by defining specifically which elements of a query are user input rather than just constructing the query as one string. Without this, a user could maliciously input some DQL syntax into the query and access unexpected documents or data.
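
As a rough sketch of the idea (the exact syntax for supplying arguments varies by API, so treat the details here as an assumption and check the 10.0.1 documentation), the query references a named argument with a '?' prefix and the user's value is supplied separately rather than concatenated into the string:

form = 'Person' and lastName = ?searchName

The value for ?searchName is then bound as an argument when the query runs, so even if someone types DQL syntax into the input it is treated as a literal value and not as part of the query.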

I hope all this helps give you an idea of how easy and fast it will be to run DQL queries from a Node.js app. In my next post I plan to bring this together with the domino-db module and build an API gateway.

Using Node.js to access Domino

You will be pleased to hear that the Domino 10 module for Node.js is now in beta (you can request early access here) and in this article I would like to show you how easy it will be to use.

Before we get started using the Domino module in Node, we do need to do some admin stuff on our Domino server. It has to be running Domino 10 and we have to install the Proton add-in, and we also have to create the Design Catalog including at least one database. (The Proton add-in listens on its own port, by default 3002, and is separate from HTTP.)

More detail on these admin tasks will be covered in a companion blog, but here I would like to focus on the app-dev side.

Let's build a simple Node.js system that will read some Domino documents and get field values, and also create some new documents. In my next post we can integrate this into the basic Node stack we developed in my previous post to create an actual API.

Assuming our Domino server is all set up, the first thing we do is install the new domino-db module. When this goes live later in the year, we will do this with the usual 'npm install domino-db', but while we are still in beta we install it from the downloaded beta package:

npm install ../packages/domino-domino-db-1.0.0.tgz --save

Once we have the domino-db connector installed, we can use it in our Node server.js code. The sequence is almost exactly the same as in LotusScript. First, we connect to Domino and open the Server (sort of equivalent to a NotesSession), then from this we open our Database, and then using the Database we can access and update Documents.

We start the same way as with all other Node.js modules, with a ‘require’:

const { useServer } = require('@domino/domino-db');

This 'useServer' is a function that will create our server connection. We give it the Domino server's hostname and the Proton port, and it connects to our server like this:

const serverConfig = {
    hostName: 'server.mydomain.com',
    connection: { port:'3002'}
};
useServer( serverConfig )

The useServer function returns a 'server' object. This is inside a JavaScript promise (I talked about promises in an earlier post), so we can use the server object inside the promise to open our database like this:

useServer( serverConfig ).then( async server => {
    const database = await server.useDatabase( databaseConfig );

The databaseConfig contains the filepath of our database on the server:

const databaseConfig = {
    filePath: 'orders.nsf'
};

Notice how we are using ‘async await’ to simplify the asynchronous nature of getting the database. This code looks very similar to the LotusScript equivalent.

So we have created a server connection and opened a database. Now we want to get some documents.

The domino-db module uses the new Domino Query Language (DQL) which comes with Domino 10. It can run very efficient, high-performance queries on Domino databases.

It is very much like getting a document collection with a db.search( ) in LotusScript, and the query syntax is similar to selection formulas, for example:

const coll = await database.bulkReadDocuments({
    query: "Form = 'Order' and Customer = 'ACME'"
});

This returns a collection of documents in a JSON array. These documents do not automatically contain all the items from the Notes documents. By default they only have some basic metadata, i.e. unid, created date, and modified date:

{
  "documents":[
    {
      "@unid":"A2504056F3AF6EFE8025833100549873",
      "@created": {"type":"datetime","data":"2018-10-25T15:24:00.51Z"},
      "@modified": {"type":"datetime","data":"2018-10-25T15:24:00.52Z"}
    }
  ],
  "errors":0,
  "documentRange":{"total":1,"start":0,"count":1}
}

To get field data from the documents, you need to specify which fields you want returned, and this is done with an array of itemNames:

const coll = await database.bulkReadDocuments({
  query: "Form = 'Order' and Customer = 'ACME'",
  itemNames: ['OrderNo', 'Qty', 'Price']
});

Then you get the field data included in your query results:

{
"documents":[
  {
    "@unid":"A2504056F3AF6EFE8025833100549873",
    "@created":{"type":"datetime","data":"2018-10-25T15:24:00.51Z"},
    "@modified":{"type":"datetime","data":"2018-10-25T15:24:00.52Z"},
    "OrderNo":"001234",
    "Qty":2,
    "Price":25.49
  }
],
"errors":0,
"documentRange":{"total":1,"start":0,"count":1}
}

We can output the document field data with JSON dot notation in a console.log (with the JSON.stringify method to format it properly):

console.log( "Order No: " + JSON.stringify( coll.documents[0].OrderNo ) );

So this is how to read a collection of documents. Now let's look at creating a new document, which is even easier.

First we build our document data in an ‘options’ object, including the Form name and all our other item values:

const newDocOptions = {
  document: {
    Form: 'Order',
    Customer: 'ACME',
    OrderNo: '000343',
    Qty: 10,
    Price: 49.99
  }
};

Then we simply call the 'createDocument' method, like this:

const unid = await database.createDocument( newDocOptions );

We don't need to call a 'save' on the document; it is all handled in one operation. The return value is the unid of the newly created document, so we can act on it again to update it if we want to.

To update a document, we get it by its unid with the ‘useDocument’ method, and then we can call ‘replaceItems’ on it.

This takes the new values in a ‘replaceItems’ parameter, but it only needs to contain the fields to update:

const document = await database.useDocument({
  unid: coll.documents[0]['@unid']
});
await document.replaceItems({
  replaceItems: { Qty: 11 }
});

Here we are using the '@unid' value from the collection we got earlier. This is a bit fiddly because '@unid' is not a valid identifier for JavaScript dot notation, but we can use square bracket notation to get around this.

We have now read, created, and updated documents. In addition to these, the domino-db module provides a variety of other methods for reading, creating, and updating documents, allowing you to do anything you would need.

Also, the DQL syntax has sophisticated search facilities that can return large numbers of documents and can even search in views and columns.

There are a couple of things to be aware of:

The first is that by default the connection to Domino is unsecured, but you can easily make it use TLS/SSL. You may need help from your Domino admin to provide you with the certificate and key files, but this is all explained in the documentation.
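
As a rough sketch, a secured serverConfig could look something like the one below. The property names here (secure, credentials, rootCertificate, clientCertificate, clientKey) are my assumptions based on the beta documentation, so check the docs that ship with your version:

const fs = require('fs');

const serverConfig = {
    hostName: 'server.mydomain.com',
    connection: { port: '3002', secure: true },
    credentials: {
        rootCertificate: fs.readFileSync('./certs/ca.crt'),
        clientCertificate: fs.readFileSync('./certs/app.crt'),
        clientKey: fs.readFileSync('./certs/app.key')
    }
};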

The second thing is that currently the access is either Anonymous, or can be mapped to a single Domino user account via the TLS/SSL client certificate (which maps to a Domino person document). So in the beta there is no per-user authentication as such, but this will be coming later with OAuth support.

I hope you found this both useful and exciting. In my next article I plan to show how to build these domino-db methods into a Node HTTP server and create an API gateway.

 

Building a stack in Node.js

[Update: Since I posted this article, I have been informed that the domino-db Node integration will be available in beta with Domino 10 in a week's time!]

With Domino 10 nearly upon us, and the Node integration hopefully following soon after, I thought I would talk about building a full-stack application in Node.js, covering how modern JavaScript UI frameworks can be built on top of Node.js and integrated with Domino in the background as a datastore.

This is all part of the IBM and HCL strategy of having Node.js as a parallel development platform alongside the standard Domino development tools, with Node providing a way for web developers to extend existing Domino apps and datastores. However, you don’t have to wait for the Domino node module to start learning about this.

If you consider the development stack acronyms of DEAN and DERN (or NERD), the UI framework is the 'A' and the 'R'. These initials refer to Angular and React respectively, but generally apply to any JavaScript framework, and there are many very good ones.

The main advantage of a development stack is that the UI layer can be independent of the middleware, server, and datastore layers and so you can replace or modify the UI without impacting the rest of the architecture. As an example, you might want to do this to extend an existing web app onto a mobile platform which may require a different UI.

A review of popular frameworks and their features and advantages is beyond the scope of this post, and I may return to that later, but for now I would like to get into the broader topic of how this all fits together, i.e. how does a JavaScript UI sit 'on top of' Node?

The first thing to understand is that UI frameworks such as Angular, React, et al, are nothing to do with Node.js. They are not part of Node.js and do not require Node.js to work. They all run perfectly happily on any web server, including Domino. When we use a UI framework with Node, Node is essentially acting as a web server, serving the framework web pages to a browser which then loads the pages and runs the framework code.

You can run framework components inside a regular Domino web form on a Domino server, but the advantage of using the JavaScript development stack is two-fold. First, the stack is all JavaScript, so it makes it easy to talk between layers because all the data is JSON. Second, we are opening up Domino to a new development arena, with an established community, support resources and third-party products.

JavaScript UI frameworks come in all sizes and flavours, but they all work essentially the same way. You embed some code in your HTML web pages and then call the framework library (usually a js file) to enable the framework within these pages. The key thing is that everything is held in the usual web files and folders which are stored on the web server file system. Your server (whether it is Domino, Node, or Apache) simply serves up these files when a browser hits it.

In my earlier post, I talked about how you can use Express to provide middleware routes in your Node server. Here is an example which uses a static route to serve up the contents of the ‘public’ folder on the server:

const express = require('express');
const app = express();
app.use(express.static(__dirname + '/public'));
const hostname = '127.0.0.1';
const port = 3000;
app.listen(port, hostname, () => {
   console.log(`Server running at http://${hostname}:${port}/`);
});

By default, the server opens the index.html file in that folder. Luckily, this is the default filename used in most UI frameworks, so it is very easy to put your framework web app and all its files and folders in the public folder and run it from your website. The folder structure could look something like this:

node
|   server.js
|   package.json
|
+---node_modules
|   \---(node stuff...)
|
+---public
|      index.html
|      other framework files...
|      +---framework folders...
|      \---etc

The Node server runs from the server.js (using modules such as Express which are installed under the node_modules folder), and serves up the framework UI starting with the index.html in the public folder.

So we have a nice UI running on our node server. Now we need to connect this UI to our data backend. In a Domino web form we would typically make a call to a web agent which would return whatever Domino data we need. Here in Node we do pretty much the same thing, specifically our UI will make a call to an API to get the data.

Node is great at two important roles - being a web server and being an API gateway. So we can add middleware routes to our Node server to handle these API calls.

Let’s suppose we want to get a list of people to display on-screen. In our UI framework we will make a call to an API on the same Node server, something like this:

let uri = myDomain + "/api/people";
request.get( uri ).then( displayTheResults );

So this request would hit our Node server with the route “/api/people”. Currently, our server doesn’t know what to do with this, so let’s add in a new Express route to handle it.

app.get('/api/people', async (req, res) => {
   let myResults = await doLookup();
   res.json( myResults );
})

This will catch requests coming in to “myDomain.com/api/people”, and then our middleware code does the lookup and sends the results back in the response as JSON.

Notice how these methods handle all the JSON stuff for us, making it easy to pass the data back and forth.

This is all we have to do to get our server responding to the API call. Now we need to look at how we get data from the backend, i.e. what happens in our doLookup().

If we suppose we are going to be using the upcoming domino-db module, then we can use its methods to run queries on Domino data. The details of exactly how the domino-db module works are still under NDA right now, but it could be something like this:

function doLookup() {
   const { domServer } = require('domino-db');
   // return the promise so the route handler can await the results
   return domServer(serverConfig).then(
      async (server) => {
         const db = await server.useDatabase(dbConfig);
         const docs = await db.bulkReadDocuments(DQLquery);
         return docs;
      });
}

This example function would run a Domino Query Language query on a Domino database which returns a JSON array of document objects, i.e. ‘docs’.

We pass this back as the return value of our doLookup function and this will be sent out as the response from our API route.

Back in our front-end framework UI, we receive this JSON data in our ‘request.get’ call and we can then go ahead and call our ‘displayTheResults’ function.
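
As a minimal sketch (assuming the route returns the array of document objects directly, and using hypothetical item names like FirstName and LastName), displayTheResults could look something like this:

function displayTheResults( myResults ) {
    // myResults is the JSON array sent back by our /api/people route
    myResults.forEach( (person) => {
        console.log( person.FirstName + ' ' + person.LastName );
    });
}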

This really is pretty much all there is to it. We can easily get data, in a standard JSON format, all the way from the datastore to the UI front-end, without needing fiddly data manipulation and all in only a few lines of code.

Also, what is great is that the UI framework is separate from the Express routes and these are separate from the datastore. They can all be on different servers, and we can even have different UIs accessing the same API routes to get at the same data, for example separate web apps and mobile apps.

I hope this gives you an idea of how we will be able to go about building a full-stack application in a DEAN/DERN environment. In my next blog I plan to expand on how we can use Express to build useful routes to do all sorts of things, such as performing CRUD operations and incorporating business logic.

Deletion Logs - What’s Coming In V10

So, deletion logs. Currently (without custom code) we cannot tell who deleted a document, what document they deleted, or in which database. With v10, deletion logging is now a standard trigger on the database that creates an entry in a delete.log file in the IBM_TECHNICAL_SUPPORT directory, detailing every deletion.

So how does it work?

Deletion logging is enabled via the compact task on an individual database basis. The -dl option is used when compacting a database, along with the fields in that database that you want included in the log. For example, if I wanted to turn it on for my mail file I might do:

load compact mail\gdavis.nsf -dl on subject,posteddate,sendto,recipient

Every deletion after that point would then be logged as a single CSV entry in delete.log. Note there are standard values that are always logged in addition to the custom fields I requested:

"20180210T211516,06+01",
"Mail\gdavis.nsf",
"80256487:00352154","nserver","CN=Traveler/O=Turtle",
"SOFT","0001","72C0E3F8:44B53FB5DC4EDBF8:A785466D",
"from","""New Relic"" <marketing@newrelic.com>",
"sendto","gabriella@turtle.com",
"deliveredDate","02/10/2018 21:05:05",
"posteddate","02/10/2018 16:15:18"

There are several interesting aspects to this approach, but I see it being particularly powerful for audit purposes, as it shows not only the message but also the timestamp of the deletion and who did it. Note that the server name in this log entry tells me my Traveler server did the deletion, so it was done from my phone; if it had been deleted in the Notes client, my name would appear there as the person who did the deletion.

The delete.log itself rolls over each time the server is restarted, but depending on the size of your environment and how widely you deploy deletion logging, that is a CSV file you are going to want to have a strategy for.

7 days and counting

 

Taking Your Pick Of Global Launch Events #Domino10

The countdown is now only 10 days: on October 9th, Domino and Notes v10, the first major release in several years and the first since HCL took over ownership of development, has its big launch. You can attend the launch event in person in Frankfurt (yay, Europe!) or via livestream.

To attend the October 9th launch event, either in person or remotely, register here

The next day, October 10th, there are several global post-launch events, including many in cities across Europe hosted by IBM, HCL and partners, to answer your questions in person.

I will be attending the London event at IBM South Bank, which is hosted by Andrew Manby, Worldwide Director of Offering Management at IBM. Turtle have recently become certified as a Domino 10 Partner Ready company and we've been working heavily with the latest beta, so I look forward to seeing and talking to you there. You can register for the London event here

Theo Heselmans and Engage will be hosting an event in Belgium with presenters from both IBM and HCL, as well as a presentation from Theo himself. Uffe Sorensen leads IBM's Notes/Domino Messaging & Collaboration business worldwide, and Barry Rosen is the Director for Products and Platforms at HCL Technologies. Register for the Belgium event here

Belsoft and Icon Switzerland will be hosting the event in Zurich with Bob Schultz (Watson Talent & Collaboration General Manager) and Richard Jefts (Vice President and General Manager Collaborative Workflow Platform) from HCL.  Register for the Swiss event here

For a full list of the global events you can attend at no cost on October 10th, as well as the speakers in each location, see (and register) here

 

The story of Async in JavaScript

By Tim Davis – Director of Development

In my last post I talked about some JavaScript concepts that will be useful when starting out with Node.js. This time I would like to talk about a potentially awkward part of JavaScript, i.e. asynchronous (async) operations. It is a bit of a long story, but it does have a happy ending.

So what is an asynchronous operation? Basically, it means a function or command that goes off and does its own thing while the rest of the code continues. It can be really useful or really annoying depending on the circumstances.

You may have used async code if you ever did AJAX calls to Domino web agents for lookups on web pages. The rest of the page loads while the lookup to the web agent comes back, and the user is happy because part of the page updates in the background. This is brilliant and is the classic use case for an async function.

This asynchronous behaviour is built into JavaScript through and through and you need to bear it in mind when you do any programming in Node.

So how does this async behaviour manifest itself? Let's look at an example. Suppose we have an asynchronous function that goes and does a lookup somewhere.

function doAsyncLookup() {
    ... do the lookup ...
    console.log("got data");
}

Then suppose we call this function from our main code, something like this:

console.log("start");
doAsyncLookup();
console.log("finish");

The output will be this:

start
finish
got data

By the time the lookup has completed it is too late, the code has moved on.

So how do you handle something like this? How can you possibly control your processes if things finish on their own?

The original way JavaScript async functions allowed you to handle this was with ‘callbacks’.

A callback is a function that the async function calls when it is finished. So instead of your code continuing straight after the async function is called, it carries on inside the callback once the async work is done.

In our example a callback could look something like this:

function myCallback() {
    console.log("finish");
}
console.log("start");
doAsyncLookup( myCallback );

Now, the output would be this:

start
got data
finish

This is much better. Usually, the callback function receives the results of the async function as a parameter, so it can act on those results. So in examples of callbacks around the web, you might see something like:

function myCallback( myResults ) { 
    displayResults( myResults );
    console.log("finish"); 
} 
console.log("start"); 
doAsyncLookup( myCallback );

Often the callback function doesn't need to be defined separately and is instead defined inline, right in the call to the async function, as a sort of shorthand, so you will probably see a lot of examples looking like this:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    displayResults( myResults ); 
    console.log("finish"); 
} );

This is all great, but the problem with callbacks is that you can easily get a confusing chain of callbacks within callbacks within callbacks if you want to do other asynchronous stuff with the results.

For example, suppose you do a lookup to get a list, then want to look up something else for each item in the list, and then maybe update a record based on that lookup, and finally write updates to the screen in a UI framework. In a JavaScript environment it is highly likely that each of these operations is asynchronous. You end up with a confusing chain of functions calling functions calling functions stretching off to the right, with all the attendant risk of coding errors that you would expect:

console.log("start"); 
doAsyncLookup( function ( myResults ) { 
    lookupItemDetails( myResults, function ( myDetails ) {
        saveDetails( myDetails, function ( saveStatus ) {
            updateUIDisplay( saveStatus, function ( updatedOK ) {
                console.log("finish");
            } );
        } );
    } );    
} );

It gets even worse if you add in error handling. We may have solved the async problem, but at the penalty of terrible code patterns.

Well, after putting up with this for a while the JavaScript world came up with a better version of callbacks, called Promises.

Promises are much more readable than callbacks and have some useful additional features. You pass the results of each function to the next with a ‘then’, and you can just add more ‘thens’ on the end if you have more async things to do.

Our nightmare-indented example above becomes something like this (here I am using the popular arrow notation for functions, see my previous article for more on them):

console.log("start"); 
doAsyncLookup()
.then( (myResults) => { return lookupItemDetails(myResults) } )
.then( (myDetails) => { return saveDetails(myDetails) } )
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } )
.then( (updatedOK) => { console.log("finish") } );

This is much nicer. We don’t have all that ugly nesting.

Error handling is easier, too, because you can add a ‘catch’ to the end (or in the middle if you need) and it is all still much more clear and understandable:

console.log("start"); 
doAsyncLookup() 
.then( (myResults) => { return lookupItemDetails(myResults) } ) 
.then( (myDetails) => { return saveDetails(myDetails) } ) 
.then( (saveStatus) => { return updateUIDisplay(saveStatus) } ) 
.then( (updatedOK) => { console.log("finish") } )
.catch( (err) => { ... handle err ... } );

What is really neat is that you can create your own promises from existing callbacks, so you can tidy up any older messy async functions.
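
For example, here is a minimal sketch of wrapping the callback-style doAsyncLookup from earlier in a Promise (assuming it simply hands its results to the callback):

function doAsyncLookupPromise() {
    return new Promise( (resolve, reject) => {
        doAsyncLookup( (myResults) => {
            resolve( myResults ); // the promise resolves with the lookup results
        });
    });
}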

Promises also have some great added features which help with other async problems. For example, with 'Promise.all' you can run a set of async calls together and wait for them all to finish, with the results coming back in the same order as the calls.
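
For example (using hypothetical lookup functions that each return a promise):

Promise.all( [ lookupOrders(), lookupCustomers(), lookupProducts() ] )
.then( ( [orders, customers, products] ) => {
    console.log("all three lookups finished");
});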

So promises solved the callback nesting problem, but The Gods of JavaScript were still not satisfied.

Even with all these improvements, this code is still too ‘asynchronous’. It is still a chain of function after function and you have to pay attention to what is passed from one to the next, and remember that these are all asynchronous and be careful with your error handling.

Once upon a time, Willy Wonka gave us ‘square sweets that look round’, and so now TGOJ have given us ‘asynchronous functions that look synchronous’.

The latest and greatest advance in async handling is Async/Await.

All you need to do is make your main function ‘async’, and you can ‘await’ all your promises:

async function myAsyncStuff() {
    console.log("start"); 
    let myResults = await doAsyncLookup();
    let myDetails = await lookupItemDetails(myResults);
    let saveStatus = await saveDetails(myDetails);
    let updatedOK = await updateUIDisplay(saveStatus); 
    console.log("finish");
 }

How cool is this? Each asynchronous function runs in order, with no messy callbacks or chains of ‘thens’. They all sit in their own line of code just like regular functions. You can do things in between them, and you can wrap them in the usual try/catch error handling blocks. All the async handling stuff is gone, and this is done with just two little keywords.
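
For example, the same function with error handling added looks like this:

async function myAsyncStuff() {
    try {
        let myResults = await doAsyncLookup();
        let myDetails = await lookupItemDetails(myResults);
        let saveStatus = await saveDetails(myDetails);
        let updatedOK = await updateUIDisplay(saveStatus);
        console.log("finish");
    } catch (err) {
        console.log("something went wrong: " + err);
    }
}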

Plus, the functions are all still promises, so you can do promise-y things with them if you want to, and you can create and ‘await’ your own promises to refactor and revive old callback code.

Async/Await is fully supported by Node.js, by popular UI frameworks like Angular and React, and by all modern browsers.

One of the biggest headaches in JavaScript development now has an elegant and usable solution and they all lived happily ever after.

I hope you enjoyed this little story. I told you it had a happy ending.

Things to know with JavaScript - JSON, let, const, and arrows

By Tim Davis - Director of Development

While we eagerly await the arrival of the npm domino-db module with Domino 10, I thought I would spend this instalment of my blog series on Node.js talking a little about some concepts in JavaScript that are used a lot in Node development. If you haven’t looked at JavaScript much since Domino web forms or XPages SSJS then you may not have come across them. You will see them in examples and articles on Node around the web and will want to use them in your own projects as they will make your life easier when starting out.

JSON

The first is JavaScript Object Notation, or JSON, which I talked about briefly in my blog on NoSQL.

Basically, all the data in Node is JSON. This makes it great for storing in backend NoSQL data stores and for handling in front-end JavaScript frameworks.

JSON is a very readable way of describing data, and it looks like this:

{
    "orderNo" : "00101",
    "orderLines": [
        { "quantity" : 7 },
        { "quantity" : 11 },
        { "quantity" : 3 }
    ],
    "status": "Invoiced"
}

An object is denoted by the curly brackets { }. An array is denoted by square brackets [ ]. The items inside the object are name-value pairs. Items are separated by commas in both objects and arrays.

You can type this sort of thing directly into your code if you like, but you would normally just get it from somewhere else, like a database.

You reference the object by name and can access or update its properties using dot notation:

currentOrder.status = "Invoiced";
if ( orderLine.quantity > 100 ) {
    ... your code here ...
}

JSON is hierarchical and you can nest objects inside objects, and arrays inside objects inside arrays, etc, etc. It is a bit like XML in that way, but much easier to read.

You can access nested objects inside arrays inside objects (etc) using dot notation like this:

orders[i].orderLines[j].quantity = 10

JSON arrays are just regular arrays, so you can loop through them:

for ( let i = 0; i < currentOrder.orderLines.length; i++ ) {
    ...
}

One great side effect of JSON being so readable is that it is easily converted to and from strings. Converting to strings is a great way to pass data around between different systems. You can use the built-in JSON object to do this:

JSON.stringify( currentOrder )
JSON.parse( '{ "status":"Invoiced", "orderNo":"00101" }' )

You should use these methods because they handle all the formatting and escaping of special characters for you.
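
For example, using the order object from the JSON section above, a full round trip looks like this:

let orderString = JSON.stringify( currentOrder );   // object -> string
let orderCopy = JSON.parse( orderString );           // string -> object
console.log( orderCopy.orderLines[0].quantity );     // 7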

Let and Const

These are two new ways of defining variables in JavaScript and you will see them a lot. You will already be familiar with using 'var', like this:

var count = 0;

You use ‘let’ and ‘const’ in the same way as ‘var’:

let count = 0;
const domain = "mydomain.com";

‘Let’ and ‘const’ are similar to ‘var’, but they behave in a way that helps you avoid problems in your code.

As you can probably guess, ‘const’ is for constant values that will never change. If you try to set another value you will get an error. This will help prevent you overwriting something important in another part of your code.
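
For example:

const domain = "mydomain.com";
domain = "myotherdomain.com";   // TypeError: Assignment to constant variable.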

What ‘let’ does that is different from ‘var’ is more subtle and is all about the variable’s scope, i.e. where in your code it exists.

If you define a variable using ‘var’, then it exists everywhere inside the enclosing function, i.e. everywhere inside the function you are currently in. This is a very wide area, and it is easy to forget and lose track of variable names and values and get confused. This is especially common when you have lots of loops inside loops inside one function.

With ‘let’, a variable only exists inside the current set of curly brackets, i.e code block. So for example a variable would only exist inside a particular loop and not exist outside in the parent function. This helps avoid all sorts of conflicts and overwriting errors.

Here is an example of how let and var work differently inside and outside curly brackets. Notice how ‘var’ overwrites the value while ‘let’ does not:

let cat = "meow";
var dog = "bark";
console.log("cat "+cat); // will be meow
console.log("dog "+dog); // will be bark
if (true) {
    let cat = "scratch";
    var dog = "wag";
    console.log("cat "+cat); // will be scratch
    console.log("dog "+dog); // will be wag
}
console.log("cat "+cat); // will be meow
console.log("dog "+dog); // will be wag

The cat inside the curly brackets is a different cat from the one outside them, but the dog is the same everywhere. This is why the dog gets confused.

Arrow functions

If you read articles on Node or look at code examples, you may have seen functions defined something like this, with the ‘=>’ arrow notation:

( arg1, arg2 ) => { ... }

This is more or less equivalent to

function( arg1, arg2 ) { ... }

The main difference is in how the keyword ‘this’ works.

In a regular function(), when you use 'this' it refers to whatever calls the function. With an arrow function, 'this' comes from the surrounding code where the function is defined, not from whatever calls it. This is a pretty arcane distinction, and worth reading up about, but it is very useful in avoiding coding errors.

As an example, when developing in Node you often have functions defined inside methods as callbacks. A callback is a function that is called when a process has finished, usually to go ahead and do something with the results of that process. These usually look something like this:

myOrderDb.getOrders( function( myOrders ) {
    ... do something with myOrders ...
} );

Here you can see that the parameter in the getOrders method is a function. This is a callback function which is called when getOrders finishes and takes the result, ‘myOrders’, and does something with it.

Consider the following example code. I want my app (i.e. ‘this’) to get records from a database and, when that is done, to update its display:

this.showLoadingMessage();
let myOrdersDb = this.getDb();
myOrdersDb.getOrders( function(myOrders) {
    // the following line does not work
    this.displayOrders(myOrders);
} );

So what is wrong? I am expecting 'this' to refer to my app so I can go ahead and update the app display with the orders, but inside a regular function 'this' is reset to whatever invokes the function (here that is not my app; depending on how getOrders calls the callback it could be myOrdersDb or even the global object). The object that 'this' refers to gets overwritten inside regular functions. Keeping track of 'this' can be a nightmare when you have a complicated series of callbacks, and this is an easy mistake to make.

However, if you use an arrow function then ‘this’ is not overwritten. It is the same inside the function as it was outside it. So an arrow function version of our code would be:

this.showLoadingMessage();
let myOrdersDb = this.getDb();
myOrdersDb.getOrders( (myOrders) => {
    this.displayOrders(myOrders);
} );

This is only a small change, but now the ‘this’ inside the function is the app, same as outside it, and my call to the app’s displayOrders method will work. With arrow functions everything behaves much more how you would expect it to.

Next Up

In this post I have touched on callbacks, and next time I plan to expand on this topic and talk in detail about a classic bugbear in JavaScript development, asynchronous functions.